You’re Not Remembering — You’re Generating: The Autoregression Theory of Cognition

What if your memories aren’t stored like files in a cabinet, waiting to be pulled out when needed? What if, instead, your brain is constantly generating your thoughts, perceptions, and even recollections on the fly, piecing them together like a neural algorithm predicting the next word in a sentence? This is the core idea behind the “Autoregression Theory of Cognition,” a bold hypothesis that reimagines how we think, remember, and experience the world. Inspired by computational models like autoregressive neural networks, this theory suggests that cognition is less about retrieving fixed data and more about dynamically generating responses based on patterns, context, and probabilities. Let’s dive into this mind-bending concept and explore what it means for our understanding of the human mind.

What Is Autoregression?

To grasp the Autoregression Theory of Cognition, we first need to understand autoregression in the computational sense. In machine learning, autoregressive models, like those powering language models (e.g., GPT architectures), predict the next element in a sequence based on what came before. For example, given the words “The cat is,” the model calculates probabilities to predict the next word—maybe “on” or “sleeping”—using patterns learned from vast datasets. It doesn’t “store” sentences; it generates them dynamically, guided by context and statistical likelihood.
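To make that loop concrete, here is a minimal Python sketch of autoregressive generation. The word-to-word probabilities are invented purely for illustration; a real language model conditions on the entire preceding context and learns its probabilities from data, but the core loop is the same: predict a distribution over the next element, pick one, append it, and repeat.

```python
import random

# Toy conditional probabilities: P(next word | previous word).
# These numbers are hand-written for illustration, not learned from data.
next_word_probs = {
    "the":      {"cat": 0.5, "dog": 0.3, "mat": 0.2},
    "cat":      {"is": 0.6, "sat": 0.4},
    "is":       {"on": 0.5, "sleeping": 0.5},
    "on":       {"the": 0.9, "a": 0.1},
    "sleeping": {"on": 0.4, "soundly": 0.6},
}

def generate(seed: str, length: int = 6) -> list[str]:
    """Autoregressively extend a sequence: each word is sampled from a
    distribution conditioned on the word that came before it."""
    words = [seed]
    for _ in range(length):
        dist = next_word_probs.get(words[-1])
        if dist is None:          # no known continuation for this word
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

print(" ".join(generate("the")))   # e.g. "the cat is sleeping soundly"
```

Nothing in this sketch stores whole sentences; every run builds one fresh, word by word, from the same small table of patterns.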

The Autoregression Theory of Cognition borrows this idea, proposing that the brain operates similarly. Instead of storing memories as static snapshots, it generates them in real time, reconstructing experiences based on patterns encoded in neural networks. This challenges traditional views of memory as a passive retrieval process and suggests our minds are constantly creating our reality, much like a sophisticated algorithm.

Rethinking Memory: From Storage to Generation

Traditional models of memory, rooted in cognitive psychology, describe it as a three-stage process: encoding, storage, and retrieval. You experience an event, your brain encodes it as a memory, stores it like a file, and retrieves it when prompted. But this model has flaws. Memories are notoriously unreliable—details fade, shift, or get embellished over time. Eyewitness accounts often differ, and false memories can feel as vivid as real ones. Why is this?

The autoregression theory offers an answer: your brain doesn’t retrieve memories; it generates them. When you “remember” your last birthday, your brain isn’t pulling a video file from a mental hard drive. Instead, it’s reconstructing the event using fragments of sensory data, emotional cues, and contextual patterns. Like an autoregressive model predicting the next word, your brain predicts what the memory should include based on prior experiences and current context. This explains why memories can feel so real yet be so malleable—your brain is actively creating them, not just replaying them.

Neuroscientist Lisa Feldman Barrett’s work on constructed emotion supports this idea. In her book How Emotions Are Made, she argues that emotions aren’t fixed responses but predictions the brain generates based on past experiences and current stimuli. The autoregression theory extends this to all cognition, suggesting that memory, perception, and even decision-making are predictive processes, not static retrievals.

The Brain as a Predictive Machine

The idea that the brain is a predictive machine isn’t new. The predictive coding framework in neuroscience posits that the brain constantly makes predictions about the world and updates them whenever incoming sensory input doesn’t match, a mismatch known as a prediction error. For example, when you see a friend’s face, your brain doesn’t passively process the visual data; it predicts what the face should look like based on prior encounters and adjusts if the prediction is off (e.g., noticing they got a haircut). The autoregression theory takes this further, suggesting that all cognitive processes—from memory to problem-solving—are generative, driven by neural networks that operate like autoregressive algorithms.
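As a toy illustration of that predict-and-correct cycle, consider the sketch below. The "belief" stands in for the brain's running prediction of a single sensory feature, and every number and the learning rate are invented; real predictive coding models are hierarchical and probabilistic, but the basic cycle of prediction, error, and update is the point.

```python
# Schematic predict-and-correct loop in the spirit of predictive coding.
# All numbers are made up for illustration: "belief" is the current
# prediction of one sensory feature (say, a friend's hair length in cm).

belief = 10.0          # current prediction
learning_rate = 0.5    # how strongly each prediction error revises the belief

observations = [10.2, 9.8, 3.0, 3.1, 2.9]   # the friend got a haircut

for sensed in observations:
    error = sensed - belief                  # surprise: input minus prediction
    belief += learning_rate * error          # revise the prediction
    print(f"sensed={sensed:4.1f}  error={error:+5.2f}  updated belief={belief:5.2f}")
```

The first two observations barely move the belief; the sudden drop produces a large error, and the prediction quickly settles on the new value, which is exactly the "adjust if the prediction is off" behavior described above.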

Consider how you recognize a song. The traditional view might say you compare the melody to a stored template in your brain. The autoregression theory, however, suggests your brain is predicting each note based on the pattern of the previous ones, generating the recognition dynamically. If the song deviates (say, a cover version), your brain adjusts its predictions, sometimes leading to that “wait, this sounds familiar but different” feeling.

This generative process could explain creativity, too. When you brainstorm ideas, your brain isn’t pulling preformed solutions from a vault; it’s generating novel combinations of concepts based on patterns it has learned. This aligns with how autoregressive models create text—producing outputs that feel coherent by building on learned patterns, even if the exact sequence is new.

Evidence from Neuroscience and Psychology

While the Autoregression Theory of Cognition isn’t a formally established framework (as of 2025), it draws on converging evidence from multiple fields:

  • Neuroscience: Studies of the hippocampus, a key region for memory, show it doesn’t store memories like a library but acts as a hub for reconstructing experiences. Research by neuroscientists like György Buzsáki suggests the hippocampus replays patterns during sleep, refining predictive models rather than archiving static memories.

  • Psychology: Memory research, such as Elizabeth Loftus’s work on false memories, shows how easily recollections can be altered by suggestion or context. This supports the idea that memories are generated, not retrieved, as they adapt to new information.

  • Computational Modeling: The success of autoregressive models in AI, like those used in language generation, provides a computational analogy. If artificial neural networks can generate coherent outputs without storing exact copies, perhaps biological neural networks do the same.

  • Cognitive Science: Schema theory holds that the brain organizes knowledge into flexible frameworks, not rigid records. When recalling an event, the brain uses these schemas to generate a plausible version of the past, filling in gaps with predictions.

Implications for Everyday Life

If cognition is autoregressive, it reshapes how we understand our minds and behaviors:

  • Memory Reliability: Accepting that memories are generated, not retrieved, could make us more skeptical of our recollections. This has implications for everything from personal relationships (e.g., arguments over “what really happened”) to legal systems (e.g., the fallibility of eyewitness testimony).

  • Learning and Creativity: Education could emphasize pattern recognition and flexibility over rote memorization, helping learners get better at generating novel ideas. Creative pursuits might focus on exposing the brain to diverse inputs to enrich its predictive patterns.

  • Mental Health: Disorders like anxiety or PTSD, where the brain generates distressing predictions, could be reframed as issues of faulty autoregression. Therapies might focus on retraining the brain’s predictive algorithms, aligning with approaches like cognitive behavioral therapy.

  • AI and Human Cognition: The parallels between autoregressive models and human cognition could guide AI development, creating systems that mimic the brain’s generative flexibility. Conversely, studying the brain could inspire more efficient AI architectures.

Challenges and Open Questions

The Autoregression Theory of Cognition is compelling but speculative, and several questions remain:

  • Storage vs. Generation: Does the brain store any fixed representations, or is everything generated? Some argue that sensory data must be encoded somewhere to enable pattern-based generation.

  • Neural Mechanisms: How do neural networks in the brain implement autoregressive-like processes? While predictive coding offers clues, we lack a detailed map of how the brain generates complex thoughts.

  • Individual Differences: If cognition is generative, why do some people have better memories or creativity than others? Are their “algorithms” more efficient, or do they have richer training data (i.e., experiences)?

  • Testing the Theory: How can we empirically test whether cognition is autoregressive? Experiments comparing predictive vs. retrieval-based models of memory could help, but designing them is complex.

A New Lens on the Mind

The Autoregression Theory of Cognition invites us to see our minds not as warehouses of memories but as dynamic engines of creation. Every thought, memory, or decision is a new output, generated from the patterns of our experiences and shaped by the context of the moment. It’s a humbling and exhilarating idea: you’re not just remembering your life—you’re actively generating it, one prediction at a time.

This theory, while still a hypothesis, bridges insights from neuroscience, psychology, and artificial intelligence, offering a fresh perspective on what it means to think and be human. As research advances, we may find that our brains and AI systems have more in common than we ever imagined, both weaving stories from the threads of probability.

What do you think—does the idea of a generative mind resonate with you? Share your thoughts or experiences in the comments, and let’s keep exploring the mysteries of cognition!