Cognitive AI: Behavioral Models of Decision Making

By Xiaohong Cai, Coty Gonzalez, Meera Ray, Christopher Dancy & Norman Gottron

AI-SDM is advancing a new frontier, Cognitive AI, which uses algorithms to simulate human decision making: to understand how humans make choices in complex environments, to augment human decisions, and to create synergy between humans and AI in hybrid teams.

Importantly, Cognitive AI is grounded in the psychological and cognitive mechanisms of the human mind at work in dynamic decision-making contexts, which involve change over time, uncertainty, and social tradeoffs. The Institute is creating AI systems that do not merely automate tasks but are human-centered, aiming to augment human decisions and create synergy with humans in hybrid human-AI teams that collaborate in high-stakes environments like disaster recovery and public health.

The figure below illustrates the conceptual evolution of AI’s role in decision-making, moving from a mere tool to a cognitively grounded partner. In the first stage (AI-as-Assistant), data-driven systems operate largely as advisors, leaving the human with the heavy cognitive burden of interpreting recommendations while maintaining mental models of both the task and the AI's limitations. The shift toward Human-AI Complementarity begins when we integrate Cognitive AI—models that emulate human reasoning and memory. By creating a dynamic representation of the human’s mental state, the AI can personalize its support, adjusting choice architectures to align with human thought processes in real time. Ultimately, this framework culminates in team collaboration, where Cognitive AI acts as an autonomous teammate. In this vision, complementarity is achieved because the AI does not just process data; it understands shared goals and individual expertise, allowing humans and AI to operate at comparable levels of autonomy and arrive at better decisions than either could achieve alone.

Types of human-AI complementarity. a, AI-as-Assistant: Data-driven AI provides recommendations to human decision-makers. b, Human-AI Complementarity: Cognitive AI uses knowledge-tracing to model human behavior, allowing the system to calibrate advice and choice architecture. c, Team Collaboration: Cognitive AI acts as an autonomous teammate, modeling both individual and collective dynamics to optimize support for the entire team.

A central challenge in societal decision-making remains the "alignment gap" between black-box ML algorithms and the nuanced, experience-based reasoning of human experts. AI-SDM is bridging this gap by integrating Instance-Based Learning (IBL), a form of Cognitive AI, with traditional ML frameworks. In public health resource allocation, for instance, the Institute has explored enhancing Restless Multi-Armed Bandits (RMAB) with IBL models. By simulating the "behavioral trajectories" of beneficiaries—accounting for decaying memory and differential attention—these hybrid models aim to predict engagement more accurately than standard LSTMs, ensuring limited health interventions reach those most likely to benefit.
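The IBL mechanism at the heart of these hybrid models can be sketched from the published equations of Instance-Based Learning Theory: each stored instance's activation decays with the time since it was experienced, and an option's predicted value is a retrieval-probability-weighted blend of past outcomes. The code below is a minimal illustrative implementation of that idea, not AI-SDM's actual model; the decay, noise, and temperature parameters are conventional ACT-R-style defaults chosen for the example.

```python
import math
import random

def activation(timestamps, t_now, d=0.5, sigma=0.25, rng=random):
    """Base-level activation with decay d plus logistic noise (ACT-R style).
    timestamps: times this instance was observed, all strictly before t_now."""
    base = math.log(sum((t_now - t) ** -d for t in timestamps))
    u = min(max(rng.random(), 1e-12), 1 - 1e-12)
    noise = sigma * math.log((1 - u) / u)
    return base + noise

def blended_value(instances, t_now, tau=0.25):
    """Blend stored outcomes weighted by Boltzmann retrieval probability.
    instances: list of (outcome, timestamps) pairs for one option."""
    acts = [activation(ts, t_now) for _, ts in instances]
    weights = [math.exp(a / tau) for a in acts]
    z = sum(weights)
    return sum(w / z * outcome for w, (outcome, _) in zip(weights, instances))

# Example: recently observed payoff 1.0 outweighs an older payoff 0.0,
# because the older instance's activation has decayed.
random.seed(0)
v = blended_value([(1.0, [9.0, 9.5]), (0.0, [1.0])], t_now=10.0)
```

The same decay mechanism is what lets such models capture the "decaying memory and differential attention" dynamics mentioned above: a beneficiary's older interactions simply weigh less in the blend.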

To support decision-making in the chaotic aftermath of natural disasters, AI agents must understand not only the physical environment but also the social structures that mediate recovery. AI-SDM researchers have achieved significant breakthroughs in scaling the ACT-R cognitive architecture through a new integrated declarative memory model. This allows researchers to ingest large-scale domain knowledge into a cognitive framework, enabling the simulation of socially mediated decisions in which an agent’s actions are informed by a realistic knowledge base of social and geographical factors. The extended ACT-R theory may also help explain how the physiological effects of stress shape decisions about AI use, as they likely would for humans relying on AI tools in a disaster scenario.

Furthermore, true complementarity requires an AI to "think about what the human is thinking." AI-SDM has successfully implemented a computational Theory of Mind (ToM) framework using higher-order IBL models. These agents can infer a partner’s hidden preferences and "mode" during collaborative tasks, and recent simulations demonstrate that human-AI pairs achieve superior coordination in complex "victim rescue" scenarios when the AI agent is equipped with these ToM capabilities.
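The core inference step in such a ToM agent can be illustrated with a simple Bayesian update: the agent maintains a belief distribution over a partner's hidden "mode" and revises it after each observed action. This is a hypothetical sketch of the general idea only, not the Institute's higher-order IBL implementation; the mode names and likelihood values are invented for illustration.

```python
def update_belief(belief, action, likelihood):
    """One Bayesian update of the belief over a partner's hidden mode.
    belief: {mode: prior prob}; likelihood: {mode: {action: P(action | mode)}}."""
    posterior = {m: p * likelihood[m].get(action, 0.0) for m, p in belief.items()}
    z = sum(posterior.values())
    if z == 0:  # action impossible under every mode: keep the prior
        return dict(belief)
    return {m: p / z for m, p in posterior.items()}

# Hypothetical rescue task: is the partner prioritizing triage or search?
likelihood = {
    "triage": {"treat_victim": 0.8, "explore_room": 0.2},
    "search": {"treat_victim": 0.3, "explore_room": 0.7},
}
belief = {"triage": 0.5, "search": 0.5}
for act in ["explore_room", "explore_room", "treat_victim"]:
    belief = update_belief(belief, act, likelihood)
# Two explorations outweigh one treatment: "search" becomes more probable.
```

An AI teammate with even this crude mode estimate can calibrate its own actions, for example by covering triage itself once it infers the partner is in search mode.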

Finally, to accelerate field-wide progress, the Institute has released two major open-source libraries: a multi-agent extension of Minigrid for building complex simulation environments, and a framework that translates these simulations into browser-based human-AI experiments. These tools allow researchers to rapidly deploy experiments and collect the empirical data necessary to refine the next generation of cognitively informed AI.
