TRIADS-funded group studies how AI changes human behavior

WashU faculty Chien-Ju Ho and Wouter Kool are fascinated by the interplay between humans and the AI algorithms that our decisions help to train. They've published their first study exploring how people modify their behavior when knowingly interacting with artificial intelligence.

Chien-Ju Ho is a longtime fan of Isaac Asimov, the science fiction author whose speculative works grappled with the messy intersections between humans and machines. 

Decades before “AI” became a popular buzzword, Asimov imagined a future where sentient artificial intelligence could call into question every tenet of human nature. Inspired in part by those stories, Ho has focused much of his research on the “human factor” that undergirds systems like artificial intelligence.


“In the machine learning field, we often treat humans as static data sources, and assume they are rational, at least according to some theoretical notions,” said Ho, an assistant professor in the McKelvey School of Engineering. “Since pursuing my PhD, I have been interested in relaxing those assumptions, because humans are really not like that.”

By forming a research team with colleague Wouter Kool to study human-AI interactions, Ho has found a collaborator who shares that mindset.

“I’ve known that people are not particularly rational for a very long time,” quipped Kool, an assistant professor in the Department of Psychological & Brain Sciences.

In spring 2023, Ho and Kool were announced as one of 10 teams to win the first-ever seed grants from the Transdisciplinary Institute in Applied Data Sciences (TRIADS). Their project, titled “Accounting for Human Bias to Improve AI-Assisted Decision Making,” aims to study the human inputs that inform AI training. The two were initially linked by Lauren Treiman, a graduate student in Kool’s lab who had also worked with Ho in the past.


“When Lauren and C.J. started talking to me about this project, in which we essentially try to understand how humans change their behavior when they know they’re interacting with an AI algorithm, I got quickly interested,” Kool said. “In general, I ask questions about how people make decisions. This seemed like a really nice, applied way to study questions like that.”

For their first foray into studying human-AI interactions, the trio of Ho, Kool, and Treiman opted for a clean, classic test of human motivation: the ultimatum game. In this game, two players divide a sum of money. One, the proposer, decides how the money will be split. The other, the responder, decides whether to accept or reject the proposer’s offer. If the responder accepts, the money is split as proposed; if the responder rejects, both players receive nothing.

From a purely rational standpoint, a responder should accept any offer – after all, leaving with even a small sliver of the pot is better than leaving with nothing. But stubborn notions of fairness often drive responders to reject offers where the money is unevenly split in the proposer’s favor.
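For readers who want the rules in concrete form, here is a minimal Python sketch of the ultimatum game’s payoffs, contrasting a purely “rational” responder with a fairness-minded one. The pot size, threshold, and function names are illustrative assumptions for this sketch; they are not parameters from the WashU study.

```python
# Minimal sketch of ultimatum game payoffs (illustrative values only,
# not parameters from the WashU study).

def ultimatum_payoffs(pot: float, offer: float, accepted: bool) -> tuple[float, float]:
    """Return (proposer_payoff, responder_payoff) for one round."""
    if not 0 <= offer <= pot:
        raise ValueError("offer must be between 0 and the pot")
    if accepted:
        return pot - offer, offer   # money is split as proposed
    return 0.0, 0.0                 # rejection: both players get nothing


def rational_responder(offer: float) -> bool:
    """A purely 'rational' responder accepts any positive offer."""
    return offer > 0


def fairness_minded_responder(offer: float, pot: float, threshold: float = 0.3) -> bool:
    """A responder who rejects offers below some share of the pot (hypothetical rule)."""
    return offer >= threshold * pot


if __name__ == "__main__":
    pot, offer = 10.0, 2.0  # an unfair split: the proposer keeps 8 of 10
    for label, accepted in [("rational", rational_responder(offer)),
                            ("fairness-minded", fairness_minded_responder(offer, pot))]:
        print(label, ultimatum_payoffs(pot, offer, accepted))
```

Run as written, the rational responder walks away with 2.0 while the fairness-minded responder leaves both players with nothing, which is exactly the tension the study exploits.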

The WashU trio’s study, with Treiman as the primary researcher, introduced a new wrinkle: would ultimatum game players respond differently if they knew that their actions would help to train an AI algorithm? The study divided 213 participants into two groups: one was told beforehand that their ultimatum game responses would help to train AI; the other, the control group, was not.

The results, published at the 2023 AAAI Conference on Human Computation and Crowdsourcing, were intriguing: across multiple iterations of the ultimatum game, participants who believed they were training AI rejected unfair offers at a higher rate than the control group, even when doing so offered no immediate or future benefit to them.

In the first experiment of the study, participants in the AI group were told that the AI model they were training would play against them in a future ultimatum game session. This group rejected more unfair offers than the control group, forgoing immediate reward, presumably to secure better offers from the AI in a subsequent session.

“They think about the consequences of their actions, and they realize, ‘Oh, it’s worth it to give up a little bit of money now, so that in the next session, I can cash in,’” Kool said. “That’s not super surprising. There are many studies that show that people are willing to incur small costs now to reap benefits later.”

But subsequent iterations of the study pushed the envelope further. What if the AI trainers were told that they wouldn’t encounter their AI partner again, but another study participant would? The results held steady – again, the AI group rejected unfair monetary splits at a higher rate than the control group. What if the AI trainers were told that no one in the study would encounter their AI algorithm after they trained it? Once more, the results were remarkably similar.

That consistency of findings led to a very straightforward, declarative paper title: “Humans Forgo Reward to Instill Fairness into AI.” 

Ho and Kool believe that the outcomes of these simple games have larger implications for AI developers. They are now preparing follow-up studies that approach their project’s core questions from different angles.

“Getting a better understanding of how humans will respond to AI better informs us on how AI should be designed,” Ho said.

“I believe that we’ve struck a nerve, because no one has really asked what AI does to humans,” Kool said. “All the focus is on the complexity of the AI, and not on how we have started to change our behavior when interacting with AI.”

Learn more about the TRIADS Seed Grant Program.