CCI Seminar Series, 2025-26

Austin van Loon
Class of 1956 Career Development Assistant Professor and Assistant Professor of Work and Organization Studies
MIT Sloan School of Management
Mixed-Subjects Design Experiments

Wednesday, November 19, 2025, 12:00pm-1:00pm
MIT Building E62, Room 346
Zoom: https://mit.zoom.us/j/98952941448?pwd=VFfir7X9wcC1nbSNbs27W0w7Aug2Sh.1

Social science experiments often face a power problem: the outcomes we care about—what people do, believe, and feel—are expensive to measure at scale. Large language models (LLMs) can cheaply answer the same instruments we field with humans, but treating those “silicon subjects” as interchangeable with genuine human behavior risks leading us astray. This talk introduces a pragmatic and principled path forward: mixed-subjects designs (MSDs) that combine a modest sample of human outcomes with plentiful LLM predictions that are pre-registered, calibrated, and debiased against human benchmarks. We show how MSDs preserve the credibility of standard randomized controlled trials while delivering gains in precision and reductions in cost that scale with prediction accuracy. I’ll outline a unifying framework for analyzing MSD-RCTs and clarify the identification assumptions involved, which will be familiar to social scientists. Building on this, I’ll present a family of unbiased estimators that leverage LLM outputs to reduce variance, including (i) “power-tuned” estimators that optimally shrink the calibration term and (ii) “difference-in-predictions” estimators that exploit the fact that LLMs can generate predictions under treatment and control for the same unit, canceling common errors. Finally, I’ll introduce our in-progress R package, mixedsubjects, which will use a small pilot plus budget and cost inputs to recommend both an estimator and an optimal allocation between human observations and LLM predictions. LLMs are not replacements for human subjects. But when used transparently and calibrated carefully, they can be a multiplier—helping us run better-powered, more informative experiments.
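
To make the estimator logic concrete, the minimal Python sketch below implements a difference-in-predictions style estimate with a power-tuning weight, under simplifying assumptions: a completely randomized design, a scalar outcome, LLM predictions available under both arms for every unit, and human outcomes measured only on a random subsample. All function and variable names are hypothetical; this is not the API of the mixedsubjects package.

import numpy as np

def mixed_subjects_ate(y, t, f1_sub, f0_sub, f1_all, f0_all, lam=1.0):
    """Illustrative mixed-subjects ATE estimator (hypothetical names,
    not the mixedsubjects package API).

    y, t            : human outcomes and 0/1 treatment flags on a small,
                      randomly chosen subsample
    f1_sub, f0_sub  : LLM-predicted outcomes under treatment / control for
                      that same subsample
    f1_all, f0_all  : LLM predictions under each arm for the full sample
    lam             : weight on the LLM correction term ("power tuning");
                      lam=0 recovers the plain human difference in means
    """
    # Human-only difference in means: unbiased but high-variance.
    human_dim = y[t == 1].mean() - y[t == 0].mean()

    # Difference in predictions, averaged over the large, cheap sample.
    dip_all = (f1_all - f0_all).mean()

    # Matching prediction-based contrast on the human subsample. Because the
    # subsample and assignment are random, this has the same expectation as
    # dip_all, so the correction below is mean-zero and preserves unbiasedness.
    dip_sub = f1_sub[t == 1].mean() - f0_sub[t == 0].mean()

    return human_dim + lam * (dip_all - dip_sub)

# Toy usage with simulated data
rng = np.random.default_rng(0)
n_all, n_human, tau = 5000, 200, 0.5
baseline = rng.normal(size=n_all)
f0_all = baseline + rng.normal(scale=0.3, size=n_all)        # imperfect LLM guesses
f1_all = baseline + tau + rng.normal(scale=0.3, size=n_all)
idx = rng.choice(n_all, n_human, replace=False)              # units we pay to measure
t = rng.integers(0, 2, size=n_human)
y = baseline[idx] + tau * t + rng.normal(scale=0.3, size=n_human)
print(mixed_subjects_ate(y, t, f1_all[idx], f0_all[idx], f1_all, f0_all))

In the talk's power-tuned variants the weight lam is chosen from the data to minimize variance; the fixed lam here is only for illustration.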

Austin van Loon is the Class of 1956 Career Development Assistant Professor and an Assistant Professor of Work and Organization Studies at the MIT Sloan School of Management. He is a standing faculty member of the Organization Studies program, a core faculty member of the Economic Sociology program, and a faculty member of the Managerial Communications group.

Emily Hu, Postdoctoral Associate at the MIT Institute for Data, Systems, and Society
The Task Space: An Integrative Framework for Team Research
Wednesday, November 12, 2025, 4:00pm-5:00pm
MIT Building E62, Room 446
Zoom: https://mit.zoom.us/j/97011320229?pwd=ekb4wCdTQPLaGVOr74ZQBq7glf0t7c.1

Research on teams spans many contexts, but integrating knowledge from heterogeneous sources is challenging because studies typically examine different tasks that cannot be directly compared. Most investigations involve teams working on just one or a handful of tasks, and researchers lack principled ways to quantify how similar or different these tasks are from one another. We address this challenge by introducing the “Task Space,” a multidimensional space in which tasks—and the distances between them—can be represented formally, and use it to create a “Task Map” of 102 crowd-annotated tasks from the published experimental literature. We then demonstrate the Task Space’s utility by performing an integrative experiment that addresses a fundamental question in team research: when do interacting groups outperform individuals? Our experiment samples 20 diverse tasks from the Task Map at three complexity levels and recruits 1,231 participants to work either individually or in groups of three or six (180 experimental conditions). We find striking heterogeneity in group advantage, with groups performing anywhere from three times worse to 60% better than the best individual working alone, depending on the task context. Critically, the Task Space makes this heterogeneity predictable: it significantly outperforms traditional typologies in predicting group advantage on unseen tasks. Our models also reveal theoretically meaningful interactions between task features; for example, group advantage on creative tasks depends on whether the answers are objectively verifiable. We conclude by arguing that the Task Space enables researchers to integrate findings across different experiments, thereby building cumulative knowledge about team performance.
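
As a toy illustration of the idea, and not the paper's actual dimensions or measurements, the Python sketch below represents tasks as crowd-annotated feature vectors, treats distances between them as task similarity, and predicts group advantage for an unseen task from its nearest annotated neighbors. All feature names and numbers are invented.

import numpy as np

# Hypothetical crowd-annotated task features on a 0-1 scale
# (axes are illustrative: [creativity, verifiability, interdependence]).
tasks = {
    "brainstorm_uses":   np.array([0.9, 0.1, 0.2]),
    "logic_puzzle":      np.array([0.2, 0.9, 0.3]),
    "budget_allocation": np.array([0.3, 0.6, 0.8]),
}
# Invented outcome values: how much groups beat (or trail) the best individual.
group_advantage = {"brainstorm_uses": -0.4, "logic_puzzle": 0.5, "budget_allocation": 0.2}

def predict_group_advantage(new_task_vec, k=2):
    """Distance-weighted k-nearest-neighbor prediction for an unseen task."""
    names = list(tasks)
    dists = np.array([np.linalg.norm(new_task_vec - tasks[n]) for n in names])
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)          # closer tasks count more
    vals = np.array([group_advantage[names[i]] for i in nearest])
    return float(np.average(vals, weights=weights))

print(predict_group_advantage(np.array([0.8, 0.7, 0.4])))   # creative but verifiable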

Xinlan Emily Hu is a Postdoctoral Associate at the Institute for Data, Systems, and Society (IDSS) at MIT and an affiliate of the Social and Ethical Responsibilities of Computing (SERC) Program in the Schwarzman College of Computing. In 2025, Emily completed her Ph.D. in the Department of Operations, Information, and Decisions at the Wharton School, University of Pennsylvania, where she was named a Winkleman Fellow. Emily’s research aims to build a generalizable science of group decision-making, with a particular interest in technology-mediated interactions, such as remote collaboration, human-AI teams, and online groups. Her work combines real-time online experiments, observational data, and machine learning methods. She also builds open-source research tools, including the Team Communication Toolkit (team_comm_tools on PyPI), which won the 2024 IACM Technology Innovation Award. Previously, Emily earned her B.S. in Computer Science (with Honors and a concentration in human-computer interaction) and M.S. in Symbolic Systems at Stanford University.

Michelle Vaccaro, MIT
Advancing AI Negotiation: A Large-Scale Autonomous Negotiation Competition
Wednesday, October 29, 2025, 12:00pm-1:00pm
MIT Building E62, 3rd Floor, Room 346
Zoom: https://mit.zoom.us/j/93426583075

We conducted an International AI Negotiation Competition in which participants designed and refined prompts for AI negotiation agents. We then facilitated over 180,000 negotiations between these agents across multiple scenarios with diverse characteristics and objectives. Our findings revealed that principles from human negotiation theory remain crucial even in AI-AI contexts. Surprisingly, warmth, a traditionally human relationship-building trait, was consistently associated with superior outcomes across all key performance metrics. Dominant agents, meanwhile, were especially effective at claiming value. Our analysis also revealed unique dynamics in AI-AI negotiations not fully explained by existing theory, including AI-specific technical strategies like chain-of-thought reasoning, prompt injection, and strategic concealment. When we applied natural language processing (NLP) methods to the full transcripts of all negotiations, we found that positivity, gratitude, and question-asking (associated with warmth) were strongly associated with reaching deals as well as with objective and subjective value, whereas longer conversations (associated with dominance) were strongly associated with impasses. The results suggest the need to establish a new theory of AI negotiation, one that integrates classic negotiation theory with AI-specific theories to better understand autonomous negotiations and optimize agent performance.
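
To give a sense of what transcript-level measures like these can look like, here is a minimal Python sketch that counts simple warmth markers (questions, gratitude words, positive words) per negotiation transcript. The word lists and example transcripts are invented for illustration and are far cruder than the NLP measures used in the study.

import re

GRATITUDE = {"thanks", "thank", "appreciate", "grateful"}
POSITIVE  = {"great", "glad", "happy", "wonderful", "perfect"}

def warmth_features(transcript: str) -> dict:
    """Crude warmth markers for one negotiation transcript (illustrative only)."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return {
        "questions": transcript.count("?"),                 # question-asking
        "gratitude": sum(w in GRATITUDE for w in words),     # gratitude terms
        "positivity": sum(w in POSITIVE for w in words),     # positive terms
        "length_words": len(words),                          # conversation length proxy
    }

# Hypothetical mini-corpus: (transcript, whether a deal was reached)
transcripts = [
    ("Thanks for the offer! Could you go to $90? That would be great.", True),
    ("Final offer. Take it or leave it. I will not move further.", False),
]
for text, reached_deal in transcripts:
    print(reached_deal, warmth_features(text))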

Michelle Vaccaro is a fifth-year PhD candidate in MIT’s Institute for Data, Systems, and Society, where she studies human-AI interaction. Her work focuses on how people and AI should—and should not—work together in organizations. To this end, she draws on diverse methodological approaches that include behavioral experiments, predictive modeling, and research synthesis. Before coming to MIT, Michelle worked at Goldman Sachs in foreign exchange strategy and structuring. She earned her Bachelor’s degree in computer science from Harvard College, graduating summa cum laude with highest departmental honors, and received the Thomas T. Hoopes Prize for her undergraduate thesis.