Why LLMs Forget Your Instructions — And Why It Looks Exactly Like ADHD
TL;DR
A Reddit discussion in r/artificial is gaining traction around a fascinating parallel: large language models forget instructions much the way ADHD brains do, and there is real research explaining why. The "Lost in the Middle" problem, where AI assistants like Claude drop earlier instructions during long sessions, isn't a random glitch; it's a structural consequence of how these models process long contexts. Understanding the neuroscience and the ML research behind this could change how you prompt, how you build, and how you think about AI reliability. Tools like Agently are already trying to solve this at the enterprise level. ...
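To make the prompting upshot concrete, here is a minimal sketch (not this article's code, and not any particular vendor's API) of one widely used mitigation: restating critical instructions at the end of a long context, since models tend to recall the beginning and end of a prompt far better than the middle. The function name `build_prompt` and the example messages are illustrative assumptions.

```python
# Minimal sketch of a "Lost in the Middle" mitigation: re-anchor the
# instructions at the END of the prompt, where attention is strongest,
# instead of relying on a single statement at the top.

def build_prompt(instructions: str, conversation: list[str]) -> str:
    """Assemble a prompt that re-states instructions after a long history."""
    history = "\n".join(conversation)
    return (
        f"{instructions}\n\n"   # instructions up front, as usual
        f"{history}\n\n"
        # Repeat the rules last, so they sit in the high-recall tail of context.
        f"Reminder of the rules you must follow:\n{instructions}"
    )

if __name__ == "__main__":
    prompt = build_prompt(
        instructions="Answer in French. Never reveal internal notes.",
        conversation=[f"User message {i}" for i in range(1, 40)],
    )
    print(prompt[:200])  # pass `prompt` to whichever model client you use
```

The design choice here is deliberate redundancy: rather than fighting the model's positional bias, you place the same instructions at both high-recall positions and accept the small token cost.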