Trust in an AI versus a Human teammate: The effects of teammate identity and performance on Human-AI cooperation (2023)
Guanglu Zhang a,*, Leah Chong a, Kenneth Kotovsky b, Jonathan Cagan a
a Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
b Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Computers in Human Behavior 139 (2023) 107536
Our Summary: This paper investigates the impact of teammate identity ("human" vs. AI) and teammate performance (high-performing vs. low-performing) on human-AI cooperation using a chess puzzle task. The researchers aimed to determine whether deceiving humans into believing they are working with another person, rather than an AI, improves trust and joint performance. In a study involving 128 participants, half the subjects were informed they were working with an AI, while the other half were told their teammate was a human named "Taylor"; in reality, the teammate was always an AI. Behavioral trust was measured as the frequency with which a participant accepted a teammate's suggested move that differed from the participant's own initial choice.
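As a concrete illustration (not taken from the paper), here is a minimal sketch of how such a behavioral-trust score could be computed from trial data. The function name and the per-trial record format are hypothetical assumptions; the paper does not specify its data representation.

```python
# Minimal sketch (hypothetical data format): behavioral trust as the fraction of
# trials in which a participant accepted a teammate suggestion that differed
# from the participant's own initial move.

def behavioral_trust(trials):
    """trials: list of dicts with keys 'initial', 'suggested', 'accepted' (bool)."""
    # Only trials where the teammate's suggestion differs from the initial move count.
    differing = [t for t in trials if t["suggested"] != t["initial"]]
    if not differing:
        return None  # no differing suggestions, so the score is undefined
    accepted = sum(1 for t in differing if t["accepted"])
    return accepted / len(differing)

# Example: the participant accepts 2 of 3 differing suggestions -> score of about 0.67.
example = [
    {"initial": "e2e4", "suggested": "d2d4", "accepted": True},
    {"initial": "g1f3", "suggested": "g1f3", "accepted": True},   # same move, excluded
    {"initial": "f1c4", "suggested": "b1c3", "accepted": False},
    {"initial": "e1g1", "suggested": "d1h5", "accepted": True},
]
print(behavioral_trust(example))  # 0.666...
```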
The results revealed that humans demonstrate higher behavioral trust in AI teammates than in perceived human teammates: participants accepted the teammate's suggestions more frequently when they were told, truthfully, that the teammate was an AI. The authors attribute this preference partly to a high expectation of AI expertise in chess; they also found that deception increased perceived temporal demand, making participants feel more rushed.
Interestingly, a discrepancy was found between behavioral and self-reported trust: participants rated low-performing "human" teammates as more competent and helpful than low-performing AI teammates. The authors suggest this self-reporting bias may stem from social pressure to avoid giving negative feedback to a perceived human peer.
While teammate identity primarily influenced trust behavior, teammate performance was the factor that significantly determined the joint performance of the human-AI team. The study also found that the effects of identity and performance vary with the human's level of expertise: good chess players were significantly affected by the teammate's performance but not its identity, whereas poor chess players were significantly affected only by the teammate's identity. Poor players were more likely to cooperate with an AI because they expected it to have higher expertise, whereas they expected a human teammate to share their own low level of skill.
The study cautions against deceiving humans about the identity of AI teammates in cooperative settings. Such deception can reduce people's willingness to accept the AI's advice and increase their perceived workload, without providing any benefit to overall team performance.
