AI-SDM Research

AI-SDM researchers work to deepen our understanding of human decision-making. Leveraging these insights, we advance AI techniques for autonomous decision-making while accounting for the human factors that govern whether decisions are accepted. The foundational advances we pursue in decision science and AI not only inform technology deployments but are themselves driven by needs identified by use-case stakeholders in critical societal domains.

This cyclical synergy ensures that our technical breakthroughs remain deeply grounded in the complexities of human behavior and real-world necessity. Central to this mission is the concept of human-AI complementarity—the idea that the most effective societal decisions emerge when AI and humans work together to leverage their unique strengths. Because this synergy drives our research, we host an Annual Workshop on Human-AI Complementarity for Decision-Making to define the state of the art in this critical field.

Foundational Thrusts

Computational Representation of Human Decision Processes

Model human reasoning and cognitive processes to identify strengths and biases, creating a foundation for AI that aligns with human reasoning.

Leads: 
Cleotilde Gonzalez, Research Professor, CMU's Department of Social and Decision Sciences
, Associate Professor, Penn State College of Engineering

Multi-Objective and Multi-Agent Autonomous Decision Support

Advance AI for decision making in the face of uncertainty, dynamic circumstances, multiple competing criteria, and polycentric coordination.

Leads: 
, Research Professor, CMU's Robotics Institute
, Associate Professor, University of Washington's School of Computer Science & Engineering

Robust Aggregation for Collective Decision Making

Generate forecasts and uncertainties from hybrid human-AI teams to enhance collective decision making and foster deeper human-AI complementarity.
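One common way to make a hybrid panel's consensus robust is to discard extreme forecasts before averaging. The sketch below is a minimal illustration of that idea with a trimmed mean; the function name, panel values, and trim fraction are illustrative assumptions, not part of AI-SDM's methods.

```python
def robust_aggregate(forecasts, trim_fraction=0.2):
    """Aggregate probability forecasts from a mixed human-AI panel
    with a trimmed mean: drop the most extreme entries on each end
    so a single miscalibrated forecaster cannot dominate the result."""
    ordered = sorted(forecasts)
    k = int(len(ordered) * trim_fraction)
    trimmed = ordered[k:len(ordered) - k] if k else ordered
    return sum(trimmed) / len(trimmed)

# Hypothetical panel: four roughly calibrated forecasters and one outlier.
panel = [0.62, 0.58, 0.65, 0.60, 0.05]
print(round(robust_aggregate(panel), 3))
```

With the outlier trimmed away, the consensus stays near the majority view instead of being dragged down by the 0.05 forecast.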

Leads: 
, Professor, Harvard's School of Engineering and Applied Sciences
Aaditya Ramdas, Associate Professor, CMU's Department of Statistics & Data Science and the Machine Learning Department

Causal and Counterfactual Reasoning

Develop explainable models that provide mechanistic understanding of decisions and enable decision makers to evaluate past actions and choices not taken.

Leads: 
Kun Zhang, Professor, CMU's Department of Philosophy
Peter Spirtes, Professor & Department Head, CMU's Department of Philosophy

Use-Case Deployments

Public Health

Allocate public health resources and provide personalized interventions using AI-driven insights to support communities under evolving demands and resource constraints.

Leads: 
, Assistant Professor, Computational Health Informatics Program, Harvard Medical School and Boston Children's Hospital
, Assistant Professor, CMU's Machine Learning Department

Disaster Management

Deploy AI to assist emergency managers with rapid damage assessment, optimal resource routing, and targeted communications during natural disasters.

Leads:
, Professor Emerita, Texas A&M's Department of Computer Science & Engineering
, Professor, Howard University's Department of Sociology & Criminology

Adoption of AI Decision Support

Develop an understanding of the societal factors that facilitate adoption of AI-enabled decision support, and of how different patterns of use can navigate their influence.

Lead: 
, Research Scientist, Information Technology Division, MITRE Corporation

Integration

Dynamic and Resilient Resource Allocation

We leverage AI to solve complex resource allocation challenges in disaster management and public health, ensuring resilient and efficient response in high-uncertainty environments.

Goals

  • Understand decision-making practices of policymakers and officials in high-pressure environments
  • Identify specific opportunities where AI can effectively support social decision judgment
  • Develop human-centered AI systems that support decision makers

This effort bridges computational cognitive modeling with multi-agent reinforcement learning to create decision-support tools that balance speed, cost, and comprehensive coverage while identifying the factors that ensure effective resource distribution.
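Balancing speed, cost, and coverage is a multi-objective problem: rather than a single "best" plan, decision support can surface the set of non-dominated trade-offs for a human to choose among. The sketch below shows a Pareto filter over hypothetical allocation plans; the plan names and objective values are invented for illustration and do not reflect any AI-SDM system.

```python
def dominates(a, b):
    """Plan a dominates plan b if it is no worse on every objective
    (lower time, lower cost, higher coverage) and strictly better on one."""
    no_worse = (a["time"] <= b["time"] and a["cost"] <= b["cost"]
                and a["coverage"] >= b["coverage"])
    strictly = (a["time"] < b["time"] or a["cost"] < b["cost"]
                or a["coverage"] > b["coverage"])
    return no_worse and strictly

def pareto_front(plans):
    """Keep only the plans not dominated by any other plan, leaving
    the decision maker a small set of genuine trade-offs."""
    return [p for p in plans if not any(dominates(q, p) for q in plans)]

# Hypothetical candidate plans for routing relief supplies.
plans = [
    {"name": "fast",     "time": 2, "cost": 9, "coverage": 0.70},
    {"name": "cheap",    "time": 6, "cost": 3, "coverage": 0.60},
    {"name": "thorough", "time": 5, "cost": 7, "coverage": 0.95},
    {"name": "bad",      "time": 7, "cost": 8, "coverage": 0.50},
]
print([p["name"] for p in pareto_front(plans)])
```

Here the "bad" plan is eliminated because "cheap" beats it on every objective, while the remaining three each represent a distinct trade-off worth presenting to a human.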

Timely Personalized Behavioral Interventions

We aim to investigate how AI can help nudge public behavior response to disasters and public health issues, with particular focus on disaster risk communication and maternal health.

Goals

  • Understand decision-making practices of individuals in high-pressure environments
  • Identify specific opportunities where AI can effectively influence human behavior
  • Develop human-centered AI systems that support targeted interventions

This effort merges decision science with AI-enabled communication to translate complex data into actionable, personalized guidance. We focus on the heterogeneity of human response, fostering greater adherence to recommendations that improve societal outcomes.

Trust and Ethics Framework

We are dedicated to ensuring the responsible and successful transition of AI-SDM's innovations from the lab into real-world operational environments by establishing ethical governance and stakeholder trust.

Goals

  • Investigate sociological and organizational barriers to AI implementation

  • Establish formal guidelines for explainability, trust, and accountability

  • Create prescriptive mechanisms to facilitate sustained adoption by government and NGO partners

This effort utilizes counterfactual modeling to assess AI-assisted versus unaided decision-making. It identifies the societal and organizational factors that govern acceptance, creating the formal standards necessary to ensure AI systems are ethically positioned and successfully sustained across high-stakes domains.
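At its simplest, assessing AI-assisted versus unaided decision-making starts from logged episodes labeled by whether AI support was used and how the decision turned out. The sketch below computes that naive observational contrast; the function, log format, and numbers are illustrative assumptions, and a real counterfactual analysis would also adjust for confounding in which cases received AI support.

```python
def compare_assisted_vs_unaided(cases):
    """Summarize logged decision episodes: mean outcome when the
    decision maker used AI support versus when they acted unaided.
    This is a naive observational contrast, not a causal estimate."""
    assisted = [c["outcome"] for c in cases if c["ai_assisted"]]
    unaided = [c["outcome"] for c in cases if not c["ai_assisted"]]
    return {
        "assisted_mean": sum(assisted) / len(assisted),
        "unaided_mean": sum(unaided) / len(unaided),
    }

# Hypothetical logged episodes (outcome: 1 = good decision, 0 = poor).
log = [
    {"ai_assisted": True,  "outcome": 1},
    {"ai_assisted": True,  "outcome": 1},
    {"ai_assisted": True,  "outcome": 0},
    {"ai_assisted": False, "outcome": 1},
    {"ai_assisted": False, "outcome": 0},
    {"ai_assisted": False, "outcome": 0},
]
print(compare_assisted_vs_unaided(log))
```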
