
Trust in an AI versus a human teammate: The effects of teammate identity and performance on human-AI cooperation (2023)


Guanglu Zhang a,*, Leah Chong a, Kenneth Kotovsky b, Jonathan Cagan a

a Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
b Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, 15213, USA


Computers in Human Behavior 139 (2023) 107536

Available online 20 October 2022. © 2022 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Our Summary: This paper investigates the impact of teammate identity ("human" vs. AI) and teammate performance (high-performing vs. low-performing) on human-AI cooperation using a chess puzzle task. The researchers aimed to determine whether deceiving humans into believing they are working with another person, rather than an AI, improves trust and joint performance. In a study involving 128 participants, every subject actually worked with an AI teammate: half were informed of this, while the other half were told their teammate was a human named "Taylor". Behavioral trust was measured as the frequency with which a participant accepted a teammate's suggested move that differed from their own initial choice.
 

The results revealed that humans demonstrate higher behavioral trust in AI teammates than in "human" teammates: participants accepted the AI’s decisions more frequently when they were not deceived about its identity. The authors attribute this preference partly to a high expectation of AI expertise in chess. They also found that deception increased perceived temporal demand, making participants feel more rushed.

Interestingly, a discrepancy was found between behavioral and self-reported trust: participants rated low-performing "human" teammates as more competent and helpful than low-performing AI teammates. The authors suggest this self-reporting bias may stem from social pressure to avoid giving negative feedback to a perceived human peer.

 

While teammate identity primarily influenced trust behavior, teammate performance was the significant factor determining the joint performance of the human-AI team. The study also found that the effects of identity and performance vary with the human's level of expertise: good chess players were significantly affected by the teammate's performance but not by its identity, whereas poor chess players were significantly affected only by the teammate's identity. Poor players were more likely to cooperate with an AI because they expected it to have higher expertise, while they expected a human teammate to share their own low level of skill.
 

The study cautions against deceiving humans about the identity of AI teammates in cooperative settings. Such deception can reduce a human's willingness to accept AI advice and increase their perceived workload without providing any benefit to the overall team performance.

How it relates to our work:
While the study uses a structured chess puzzle task to measure behavioral trust, our work explores these same themes—agency, identity, and the illusion of control—through immersive, narrative-driven art installations. Headspace and Tokens of Decency investigate how a participant's awareness of an AI’s identity influences their willingness to engage with or accept the "machine's" guidance.

The study’s investigation into deception is particularly relevant to the "Detective" interrogation scenarios. The researchers found that deceiving humans into believing they are working with another person actually reduces behavioral trust and increases perceived "temporal demand" (feeling rushed).

For both the ethics of our installations and their intended meaning, it was important that the AI disclose its own nature. Self-disclosure establishes "trust", but it may also invite participants to "trip the AI" or feel a sense of rebellion when they sense the system is testing them.

It could also be interesting to study the impact of authority-figure roles taken on by an AI under the cover of high ethical standards (cf. our "AI police index", or the lab-experiment design of Tokens of Decency). Could this generate an ambivalent mix of trust and adversarial behavior?

 

© 2023 by Space Machina
