Local models are a godsend when it comes to discussing personal matters

📰 Reddit r/LocalLLaMA

I’ve been keeping a personal journal for the past few years; the entire thing runs over 100k tokens. I noticed that some of the Gemma 4 models support 256k context, so I decided to test the 26B A4B model by sharing my entire journal in the initial prompt and asking for some insights. Obviously, I didn’t simply say "share your insights, make no mistakes." I’m well aware that LLMs have the potential to glaze users.

Published 13 Apr 2026