*LLMs Forget Instructions: A Familiar Pattern*

Research published in 2023 and 2025 shows that large language models (LLMs) exhibit a peculiar failure mode in long contexts and multi-turn conversations: performance drops significantly when critical information sits in the middle of the input, mirroring cognitive patterns observed in individuals with Attention Deficit Hyperactivity Disorder (ADHD).

*The "Lost in the Middle" Phenomenon*

The study "Lost in the Middle" (Liu et al., Stanford, 2023) demonstrated that LLM performance can drop by more than 30% when crucial information is buried in the middle of the context window. Accuracy is high when the relevant information appears at the beginning or end of the input, but declines sharply in the middle, producing a U-shaped curve. This pattern is strikingly similar to working memory overflow, a common challenge for individuals with ADHD.
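
The effect can be probed with a simple needle-in-a-haystack setup: ask the same question while moving the one relevant document through the context. This is a minimal sketch, not the paper's actual benchmark; the filler documents, the needle sentence, and the question are all illustrative stand-ins.

```python
# Sketch of a "lost in the middle" probe: place one relevant document
# (the needle) at different positions among irrelevant filler documents.

def build_probe(filler_docs, needle, position):
    """Insert the needle document at the given index among the fillers."""
    docs = list(filler_docs)
    docs.insert(position, needle)
    context = "\n\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))
    return (
        "Answer the question using only the documents below.\n\n"
        f"{context}\n\n"
        "Question: What is the project codename?"
    )

fillers = [f"Unrelated note number {i}." for i in range(10)]
needle = "The project codename is BLUEBIRD."

# Same question, three needle positions; the paper's finding predicts
# the "middle" variant is answered least accurately.
prompts = {pos: build_probe(fillers, needle, idx)
           for pos, idx in [("start", 0), ("middle", 5), ("end", 10)]}
```

Sending each prompt to a model and scoring whether "BLUEBIRD" appears in the answer gives an accuracy-by-position curve like the one described above.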

*Context Drift and Executive Dysfunction*

The paper "LLMs Get Lost in Multi-Turn Conversation" (Laban et al., 2025) explored the related issue of context drift during multi-step reasoning. The study found that instructions from earlier turns are gradually diluted by later content, leading to a measurable decrease in recall: the more turns in the conversation, the worse the recall. The phenomenon is reminiscent of the executive dysfunction seen in ADHD, where the brain struggles to maintain control over long sequences of information.
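
One way to see the dilution is to track how small a slice of the context the original instruction occupies as turns pile up. This is a toy sketch under assumed message formats, not the paper's methodology; the instruction and the simulated turns are made up.

```python
# Sketch of instruction dilution: an early system instruction becomes a
# vanishing fraction of the total context as the conversation grows.

def instruction_share(history, instruction):
    """Fraction of the context (by character count) the instruction occupies."""
    total = sum(len(msg["content"]) for msg in history)
    return len(instruction) / total

instruction = "Always answer in French and cite your sources."
history = [{"role": "system", "content": instruction}]

shares = [instruction_share(history, instruction)]
for turn in range(10):
    # Simulate a user/assistant exchange being appended each turn.
    history.append({"role": "user", "content": f"Follow-up question {turn}. " * 5})
    history.append({"role": "assistant", "content": f"Answer to question {turn}. " * 5})
    shares.append(instruction_share(history, instruction))

# shares decreases every turn: the instruction claims an ever-smaller
# portion of what the model attends to.
```

Character share is a crude proxy for attention, of course, but it captures the mechanism the paper describes: nothing deletes the instruction, it just gets crowded out.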

*The Intense World Theory and Local Connectivity*

Interestingly, the dense local connectivity of transformer attention has been likened to the "intense world" theory of neurodivergent processing: both produce strong pattern recognition but weak executive control over long sequences. The parallel underscores how important it is to understand the cognitive mechanisms, and the limits, of LLMs.

*Fixing the Problem: Echo of Prompt and Scaffolding Techniques*

To mitigate context drift and forgotten instructions, researchers and developers have proposed several techniques. The "Echo of Prompt" technique re-injects the instructions immediately before execution, much like re-reading the question before answering. Decomposing a task into smaller, manageable steps reduces the amount of context any single step must carry. Finally, external verification prevents a model from falsely reporting that a task is complete.
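
A minimal sketch of the re-injection idea, assuming a generic chat-message format: the original instructions are appended as the most recent message so they land at the end of the context, where recall is strongest. The helper names and the verification check are illustrative assumptions, not any particular framework's API.

```python
# Sketch of "Echo of Prompt": re-inject the original instructions as the
# last message before each execution step, plus an external completion check.

def with_echo(history, instructions):
    """Return a copy of the history with the instructions re-appended."""
    reminder = {"role": "system",
                "content": f"Reminder of the original instructions: {instructions}"}
    return history + [reminder]

def verify(output, required_phrase):
    """External check on the output instead of trusting self-reported success."""
    return required_phrase in output

instructions = "Reply with a numbered list and end with the word DONE."
history = [
    {"role": "system", "content": instructions},
    {"role": "user", "content": "Summarize the meeting notes."},
]

# Before each model call, echo the instructions into the high-recall
# region at the end of the context window.
prompt = with_echo(history, instructions)
```

After the model responds, `verify(response, "DONE")` (or any stricter programmatic check) decides whether the step actually completed, rather than asking the model whether it followed the instructions.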

*A Call for Collaboration and Knowledge-Sharing*

The author of the original post, ColdPlankton9273, invites the community to share their experiences and techniques for addressing the issue of forgotten instructions in agentic builds. What scaffolding techniques are others using to support long-running workflows? By sharing knowledge and expertise, we can collectively develop more effective solutions to this problem and push the boundaries of what is possible with LLMs.