A new study provides evidence that the human brain constructs our seamless experience of the world by first breaking it down into separate predictive models. These distinct models, which forecast different aspects of reality like context, people’s intentions, and potential actions, are then unified in a central hub to create our coherent, ongoing subjective experience. The research was published in the journal Nature Communications.
The scientists behind the new study proposed that our world model is fragmented into at least three core domains. The first is a “State” model, which represents the abstract context or situation we are in. The second is an “Agent” model, which handles our understanding of other people, their beliefs, their goals, and their perspectives. The third is an “Action” model, which predicts the flow of events and possible paths through a situation.
“There’s a long-held tradition, with good evidence, that the mind is composed of many different modules specialized for distinct computations. This is obvious in perception, with modules dedicated to faces and places. It is not obvious in the higher-order, more abstract domains that drive our subjective experience. The problem is non-trivial: if the mind does have multiple modules, how can our experience seem unified?” explained study author Fahd Yazin, a medical doctor who is currently a doctoral candidate at the University of Edinburgh.
“In learning theories, there are distinct computations needed to form what is called a world model. We need to infer from sensory observations what state we are in (context). For example, if you go to a coffee shop, the state is that you’re about to get a coffee. But if you find that the machine is out of order, then the current state is that you’re not going to get it. Similarly, you need a frame of reference to put these states in. For instance, if you want to go to the next shop but your friend had a bad experience there previously, you need to take their perspective (or frame) into account. You may have had a plan of getting a coffee and chatting, but now you’re willing to adopt a new plan (action transitions) of getting a matcha drink instead.”
“You’re able to do all these things in a deceptively simple way because various modules can coordinate their outputs, or predictions, together, and switch between various predictions effortlessly. So if we disrupt their ongoing predictions in a natural and targeted way, we can learn two things: which brain regions are dedicated to these predictions, and how they influence our subjective experience.”
To explore this, the research team conducted a series of experiments using functional magnetic resonance imaging, a technique that measures brain activity by detecting changes in blood flow. In the main experiment, a group of 111 young adults watched an eight-minute suspenseful excerpt from an Alfred Hitchcock film, “Bang! You’re Dead!” while inside a scanner. They were given no specific instructions other than to watch the movie, allowing the scientists to observe brain activity during a naturalistic experience.
To understand when participants’ predictions were being challenged and updated, the researchers collected data from separate groups of people who watched the same film online. These participants were asked to press a key whenever their understanding of the movie’s context (State), a character’s beliefs (Agent), or the likely course of events (Action) suddenly changed. By combining the responses from many individuals, the scientists created timelines showing the precise moments when each type of belief was most likely to be updated.
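The aggregation step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name, bin width, and Gaussian smoothing width are all assumptions. The idea is simply to pool keypress times from many raters into a smoothed timeline whose peaks mark likely belief-update moments.

```python
import numpy as np

def update_timeline(keypress_times, duration_s, bin_s=1.0, smooth_sd=2.0):
    """Pool raters' keypress times (in seconds) into a smoothed
    update-density timeline. All names and parameters are illustrative,
    not taken from the published analysis."""
    n_bins = int(np.ceil(duration_s / bin_s))
    counts = np.zeros(n_bins)
    for rater in keypress_times:  # one list of keypress times per rater
        idx = (np.asarray(rater) // bin_s).astype(int)
        idx = idx[idx < n_bins]
        np.add.at(counts, idx, 1)
    # Gaussian smoothing so nearby presses from different raters
    # pool into a single "update" peak
    x = np.arange(-3 * smooth_sd, 3 * smooth_sd + 1)
    kernel = np.exp(-x**2 / (2 * smooth_sd**2))
    kernel /= kernel.sum()
    return np.convolve(counts, kernel, mode="same")
```

Separate timelines built this way for State, Agent, and Action keypresses would give the three regressors compared against brain activity.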
Analyzing the brain scans from the movie-watching group, the scientists found a clear division of labor in the midline prefrontal cortex, a brain area associated with higher-level thought. When the online raters indicated a change in the movie’s context, the ventromedial prefrontal cortex became more active in the scanned participants. When a character’s perspective or intentions became clearer, the anteromedial prefrontal cortex showed more activity. And when the plot took a turn that changed the likely sequence of future events, the dorsomedial prefrontal cortex was engaged.
The researchers also found that these moments of belief updating corresponded to significant shifts in the brain’s underlying neural patterns. Using a computational method called a Hidden Markov Model, they identified moments when the stable patterns of activity in each prefrontal region abruptly transitioned. These neural transitions in the ventromedial prefrontal cortex aligned closely with updates to “State” beliefs.
Similarly, transitions in the anteromedial prefrontal cortex coincided with “Agent” updates, and those in the dorsomedial prefrontal cortex matched “Action” updates. This provides evidence that when our predictions about the world are proven wrong, it triggers not just a momentary spike in activity, but a more sustained shift in the neural processing of that specific brain region.
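The study identified these sustained shifts with a Hidden Markov Model. As a much lighter stand-in for intuition only, the sketch below flags timepoints where a region's multivoxel pattern abruptly decorrelates from the immediately preceding window; this is not the HMM analysis itself, and the window size and threshold are arbitrary assumptions.

```python
import numpy as np

def pattern_shift_times(roi_ts, window=5, z_thresh=2.0):
    """roi_ts: (timepoints, voxels) activity for one region.
    Flags timepoints whose pattern abruptly decorrelates from the
    mean pattern of the preceding window. A simplified stand-in for
    the Hidden Markov Model used in the study."""
    t_len = roi_ts.shape[0]
    dissim = np.zeros(t_len)
    for t in range(window, t_len):
        prev = roi_ts[t - window:t].mean(axis=0)  # recent stable pattern
        r = np.corrcoef(prev, roi_ts[t])[0, 1]
        dissim[t] = 1.0 - r                       # high when pattern breaks
    z = (dissim - dissim.mean()) / dissim.std()
    return np.where(z > z_thresh)[0]
```

Timepoints returned by such a detector could then be compared against the behavioral update timelines, which is the alignment the study reports for each prefrontal subregion.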
Having established that predictions are handled by separate modules, the researchers next sought to identify where these fragmented predictions come together. They focused on the precuneus, a region located toward the back of the brain that is known to be a major hub within the default mode network, a large-scale brain network involved in internal thought.
By analyzing the functional connectivity, or the degree to which different brain regions activate in sync, they found that during belief updates, each specialized prefrontal region showed increased communication with the precuneus. This suggests the precuneus acts as an integration center, receiving the updated information from each predictive module.
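A minimal way to express this kind of comparison, assuming simple Pearson correlation between two regional timecourses (the study's actual connectivity method may differ), is to contrast the seed-target correlation during update timepoints against the rest of the recording:

```python
import numpy as np

def connectivity_change(seed_ts, target_ts, update_mask):
    """Difference in Pearson correlation between two regional
    timecourses during belief-update timepoints vs all other
    timepoints. update_mask: boolean array marking update windows.
    Purely illustrative; not the study's exact pipeline."""
    def corr(x, y):
        return np.corrcoef(x, y)[0, 1]
    during = corr(seed_ts[update_mask], target_ts[update_mask])
    outside = corr(seed_ts[~update_mask], target_ts[~update_mask])
    return during - outside
```

A positive value for a prefrontal seed and a precuneus target would correspond to the increased communication during updates that the study describes.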
To further investigate this integration, the team examined the similarity of multivoxel activity patterns between brain regions. They discovered a dynamic process they call “multithreaded integration.” When participants’ beliefs about the movie’s context were being updated, the activity patterns in the precuneus became more similar to the patterns in the “State” region of the prefrontal cortex.
When beliefs about characters were changing, the precuneus’s patterns aligned more with the “Agent” region. This indicates that the precuneus flexibly syncs up with whichever predictive module is most relevant at a given moment, effectively weaving the separate threads of prediction into a single coherent representation.
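The core comparison behind this "multithreaded integration" idea can be sketched as picking, at each timepoint, the module whose multivoxel pattern most resembles the precuneus pattern. This is a hedged illustration of the logic, not the study's pattern-similarity analysis; region names and inputs are placeholders.

```python
import numpy as np

def dominant_module(precuneus_pattern, module_patterns):
    """Return the prefrontal module whose multivoxel pattern is most
    similar (Pearson r) to the precuneus pattern at one timepoint.
    Inputs are 1D voxel-pattern vectors; names are illustrative."""
    best, best_r = None, -np.inf
    for name, pattern in module_patterns.items():
        r = np.corrcoef(precuneus_pattern, pattern)[0, 1]
        if r > best_r:
            best, best_r = name, r
    return best, best_r
```

Tracking which module "wins" over time would show the precuneus aligning with the State region during context updates and the Agent region during character updates, as the study reports.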
The scientists then connected this integration process to subjective experience. Using separate ratings of emotional arousal, a measure of how engaged and immersed viewers were in the film, they found that the activity of the precuneus closely tracked the emotional ups and downs of the movie. The individual prefrontal regions did not show this strong relationship.
What’s more, individuals whose brains showed stronger integration between the prefrontal cortex and the precuneus also had more similar overall brain responses to the movie. This suggests that the way our brain integrates these fragmented predictions directly shapes our shared subjective reality.
“At any given time, multiple predictions may compete or coexist, and our experience can shift depending on which predictions are integrated that best align with reality,” Yazin told PsyPost. “People whose brains make and integrate predictions in similar ways are likely to have more similar experiences, while differences in prediction patterns may explain why individuals perceive the same reality differently. This approach provides new insight into how shared realities and personal differences arise, offering a framework for understanding human cognition.”
To confirm these findings were not specific to one movie or to visual information, the team replicated the key analyses using a different dataset in which participants listened to a humorous spoken-word story. They found the same modular system in the prefrontal cortex and the same integrative role for the precuneus, suggesting that this is a general mechanism for how the brain models the world, regardless of the sensory input.
“We replicated the main findings across a different cohort, sensory modality and emotional content (stimuli), making these findings robust to idiosyncratic factors,” Yazin said. “These results were observed when people were experiencing stimuli (movie/story) in a completely uninterrupted and uninstructed manner, meaning our experience is continuously rebuilt and adapted into a coherent unified stream despite it originating in a fragmented manner.”
“Our experience is not just a simple passive product of our sensory reality. It is actively driven by our predictions. And these come in different flavors; about our contexts we find ourselves in, about other people and about our plans of the immediate future. Each of these gets updated as the sensory reality agrees (or disagrees) with our predictions. And integrates with that reality to form our ‘current’ experience.”
“We have multiple such predictions internally, and at any given time our experience can toggle between these depending on how reality fits them,” Yazin explained. “In other words, our original experience is a product of fragmented and distributed predictions integrated into a unified whole. And people with similar ways of predicting and integrating would have more similar experiences of reality than people who are dissimilar.”
“More importantly, it brings the default mode network, a core network in the human brain, to the table as a central network driving our core phenomenal experience. It’s widely implicated in learning, inference, imagination, memory recall, and in dysfunctions of these. Our results offer a framework to fractionate this network by the computations of its core components.”
But as with all research, the study has some limitations. The analysis is correlational, meaning it shows associations between brain activity and belief updates but cannot definitively prove causation. Also, because the researchers used naturalistic stories, the different types of updates were not always completely independent; a single plot twist could sometimes cause a viewer to update their understanding of the context, a character, and the future plot all at once.
Still, the consistency of the findings across two very different naturalistic experiences provides strong support for a new model of human cognition. “Watching a suspenseful movie and listening to a comedic story feel like two very different experiences, but the fact that they have similar underlying regions with similar specialized processes for generating predictions was counterintuitive,” Yazin told PsyPost. “And that we could observe it in this data was something unexpected.”
Future research will use more controlled, artificially generated stimuli to better isolate the computations happening within each module.
“We’re currently exploring the nature of these computations in more depth,” Yazin said. “In naturalistic stimuli as we’ve used now, it is impossible to fully separate domains (the contributions of people and contexts are intertwined in such settings). It brings richness but you lose experimental control. Similarly, the fact that these prefrontal regions were sensitive regardless of content and sensory information means there is possibly an invariant computation going on within them. We’re currently investigating these using controlled stimuli and probabilistic models to answer these questions.”
“For the last decade or so, there’s been two cultures in cognitive neuroscience,” he added. “One is using highly controlled stimuli, and leveraging stimulus properties to ascertain regional involvement to that function to various degrees. Second is using full-on naturalistic stimuli (movies, narratives, games) to understand how humans experience the world with more ecological accuracy. Each has brought unique and incomparable insights.”
“We feel studies on subjective experience/phenomenal consciousness have focused more on the former because it is easier to control (perceptual features/changes), but there’s a rich tradition and methods in the latter school that may help uncover more intractable problems in novel ways. Episodic memory and semantic processing are two great examples, where using naturalistic stimuli opened up connections and findings that were completely new to each of those fields.”
The study, “Fragmentation and multithreading of experience in the default-mode network,” was authored by Fahd Yazin, Gargi Majumdar, Neil Bramley, and Paul Hoffman.