A new way to teach AI cultural values
Research explores how observing human decision making across cultural contexts could shape more adaptable and culturally attuned AI systems.

Artificial intelligence systems use code to optimize efficiency, speed and accuracy. Human decision-making, however, is shaped by values that emerge from social, cultural and lived experiences.
A study conducted in collaboration with San Diego State University (SDSU) explores whether AI systems can learn human values by observing how people make decisions, rather than relying on a single, universal set of programmed rules.
Most AI systems today are trained using reinforcement learning, a method that teaches machines what to prioritize by rewarding specific outcomes tied to predefined goals. While effective for narrow tasks, this approach often assumes values are static and universal. As a result, AI systems may struggle to reflect the diverse ways people weigh tradeoffs, cooperate or act altruistically across different cultural contexts.
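To make the conventional approach concrete, the short Python sketch below shows an agent trained against a reward function fixed in advance by its designer. The task, names and numbers are invented for illustration and are not taken from the study; the point is only that the objective is predefined, so the agent never learns to value anything the programmer did not specify.

# Minimal sketch of standard reinforcement learning (illustrative only, not the study's code).
# The designer hard-codes the reward: the agent is paid only for soups it delivers itself,
# so helping another player never looks valuable, regardless of who uses the system.

import random

ACTIONS = ["deliver_own_soup", "help_other_player"]

def designer_reward(action):
    # Predefined, universal objective chosen by the programmer.
    return 1.0 if action == "deliver_own_soup" else 0.0

def train_q_values(episodes=5000, lr=0.1, epsilon=0.1):
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        # Epsilon-greedy action selection on a one-step task.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        reward = designer_reward(action)
        q[action] += lr * (reward - q[action])  # one-step Q-learning update
    return q

if __name__ == "__main__":
    print(train_q_values())  # the agent learns to value only its own deliveries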
New research, led by the University of Washington (UW) and co-authored by Rodolfo Cortes Barragan, assistant professor of psychology at SDSU Imperial Valley, tested an alternative training approach known as inverse reinforcement learning. Instead of prescribing what an AI system should value, inverse reinforcement learning allows AI to infer underlying values by observing human behavior. Their work was published in December.
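To illustrate the difference, the following small Python sketch shows one simple way an inverse reinforcement learning model can recover reward weights from observed choices, fitting a separate model for each group of demonstrators. The feature names, demonstration data and fitting routine are illustrative assumptions, not the study's actual methods.

# Minimal sketch of inverse reinforcement learning on one-step choices (illustrative only;
# the features, data and fitting routine are assumptions, not the study's pipeline).
# Instead of hard-coding a reward, we infer weights that best explain observed human choices.

import math

# Each option is described by features: (points for yourself, help given to the other player).
OPTIONS = {"deliver_own_soup": (1.0, 0.0), "help_other_player": (0.3, 1.0)}

def choice_prob(weights, chosen):
    # Boltzmann-rational choice model: P(a) is proportional to exp(w . features(a)).
    scores = {a: math.exp(sum(w * f for w, f in zip(weights, feats)))
              for a, feats in OPTIONS.items()}
    return scores[chosen] / sum(scores.values())

def fit_weights(demonstrations, steps=2000, lr=0.05):
    # Maximum-likelihood fit of the reward weights by simple gradient ascent.
    weights = [0.0, 0.0]
    for _ in range(steps):
        grad = [0.0, 0.0]
        for chosen in demonstrations:
            p = {a: choice_prob(weights, a) for a in OPTIONS}
            for i in range(2):
                expected = sum(p[a] * OPTIONS[a][i] for a in OPTIONS)
                grad[i] += OPTIONS[chosen][i] - expected
        weights = [w + lr * g / len(demonstrations) for w, g in zip(weights, grad)]
    return weights

if __name__ == "__main__":
    # Hypothetical demonstration data: one group helps more often than the other.
    group_a = ["help_other_player"] * 70 + ["deliver_own_soup"] * 30
    group_b = ["help_other_player"] * 40 + ["deliver_own_soup"] * 60
    print("group A inferred weights:", fit_weights(group_a))
    print("group B inferred weights:", fit_weights(group_b))

Run on these hypothetical demonstrations, the model assigns a larger weight to helping for the group that helped more often, which is the kind of group-level difference the researchers set out to capture.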
To examine whether this method could capture culturally shaped values, researchers focused on altruism. Participants from two self-identified cultural groups, Latinx and White, took part in an online cooperative game that required real-time decisions about whether to help another player, sometimes at a personal cost. The researchers found measurable differences in how altruism was expressed across the two groups, with participants in the Latinx group demonstrating higher levels of altruistic behavior in the game.
Screenshot of the Overcooked-style online game used to test altruistic behaviors. Players must deliver as much onion soup as possible, with the option to help the other player, who is at a disadvantage because they must travel farther.
The researchers then used the data from the two cultural groups to train AI agents and found that the agents learned distinct behavioral patterns reflecting the human group-level differences.
When the AI systems were later tested in a new scenario involving charitable giving, they applied what they had learned from the original task. This ability to generalize learned values beyond a single setting suggests that AI systems can learn decision-making tendencies shaped by culture and carry them into new contexts.
Flexible approach
The findings point toward a pathway for developing more adaptable and human-centered AI systems. Rather than categorizing individuals or reinforcing stereotypes, the researchers emphasize that culturally attuned AI should be designed to recognize context, remain flexible and operate within strong ethical safeguards.
βAt the very least, AI development needs to be mindful and sensitive to cultural sources and differences in human values,β Cortes Barragan said. βAttention to cultural information is key to reducing friction and achieving more fluid interactions, which can lead to more positive outcomes for people using these systems.β
As AI technologies become increasingly integrated into our daily lives, understanding how values differ across cultural settings is essential. Systems that fail to account for these differences risk misunderstanding human needs or producing unintended harm.
βIt is imperative that we examine the potential risks of AI and ensure this technology remains anchored to human values so people can have safer and more positive experiences,β Cortes Barragan said.
By showing that AI systems can learn culturally influenced values through observation rather than explicit instruction, the research contributes to ongoing efforts to design technology that better meets human needs. While the study does not claim AI can fully replicate human moral reasoning, it demonstrates a proof of concept for how values can be learned indirectly through behavior.
Additional work is needed to explore how AI systems can responsibly learn a broader range of values and operate in more complex social environments, the researchers said. As AI becomes more embedded in health care, education and public services, understanding how values vary across contexts will be critical to building technology that serves people with dignity and care.
Researcher notes:
, a UW research engineer in the Allen School, and , a software engineer at Microsoft who completed this research as a UW student, were co-lead authors. Other co-authors include , a scientist at the Allen Institute who completed this research as a UW doctoral student; , an assistant professor at SDSU who completed this research as a postdoctoral scholar at UW; , a professor in the UW Allen School; , co-director of the Institute for Learning & Brain Sciences at UW; and , a professor in the Allen School and director of the at UW.



