A recent study from the University of California, Berkeley, explores why some people are better than others at interpreting emotions. The research, published in Nature Communications on December 16, 2025, investigates how individuals use facial and contextual cues to assess emotional states.
The study found that most people’s brains weigh information from faces and background context differently depending on which is clearer. For example, if a person’s facial expression is clear but the context is ambiguous, people tend to rely more on the face. If the facial expression is unclear but the context provides strong clues, they lean more on the background details.
Lead author Jefferson Ortega explained this process: “We don’t know exactly why these differences occur,” said Ortega, a psychology Ph.D. student. “But the idea is that some people might use this more simplistic integration strategy because it’s less cognitively demanding, or it could also be due to underlying cognitive deficits.”
Ortega and his team conducted experiments with 944 participants who watched videos where either the background or faces were blurred. This allowed researchers to see how much weight participants gave to each type of cue when making judgments about a person’s mood.
Drawing on participants' assessments across the different conditions, Ortega built a model to predict how they would rate scenes when all details were visible—what he called the "ground truth." He wanted to determine whether people combined cues based on their ambiguity using a method known as Bayesian integration.
The results showed that about 70% of participants adjusted their judgments by weighing ambiguities in faces and contexts before making an assessment. However, roughly 30% used simpler strategies that averaged both cues rather than prioritizing one over another based on clarity.
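The contrast between the two strategies can be sketched as reliability-weighted (Bayesian) combination versus plain averaging of two cues. The functions and numbers below are illustrative only, not the study's actual model:

```python
# Hypothetical sketch of the two cue-combination strategies described above.
# "Variance" stands in for a cue's ambiguity; all values are invented.

def bayesian_integration(face_est, face_var, context_est, context_var):
    """Weight each cue by its reliability (inverse variance):
    a clearer (lower-variance) cue pulls the judgment toward itself."""
    w_face = 1.0 / face_var
    w_context = 1.0 / context_var
    return (w_face * face_est + w_context * context_est) / (w_face + w_context)

def simple_averaging(face_est, context_est):
    """Average the two cues regardless of how ambiguous each one is."""
    return (face_est + context_est) / 2.0

# Clear face (low variance), ambiguous context (high variance):
# the Bayesian judgment leans toward the face; the plain average does not.
face, context = 0.8, 0.2  # e.g. valence ratings on a 0-1 scale
print(round(bayesian_integration(face, 0.1, context, 0.9), 2))  # → 0.74
print(simple_averaging(face, context))                          # → 0.5
```

When both cues are equally ambiguous, the two strategies agree; they diverge only when one cue is clearer, which is the behavior the study used to separate the two groups of participants.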
“It was very surprising,” Ortega said. “The computational mechanisms — the algorithm that the brain uses to do that — is not well understood. That’s where the motivation came for this paper. It’s just an amazing feat.”
David Whitney, professor of psychology at UC Berkeley and co-author of the study, commented: “Some observers are very good at integrating context and facial expressions to understand emotions. And some folks are not so good at it.”
This work builds on previous research from Whitney’s lab showing that people can infer emotions even when characters are blurred out in scenes by relying on contextual clues.
Ortega noted that these findings could have implications for understanding traits associated with autism since individuals with such traits may process emotional information differently: “This work sets the foundation for investigating that in the future,” Ortega said.