From Artificial Intelligence (AI) to Intelligence Augmentation (IA): Design Principles, Potential Risks, and Emerging Issues
Lina Zhou (University of North Carolina at Charlotte, lzhou8@uncc.edu), Cynthia Rudin (Duke University, cynthia@cs.duke.edu), Matthew Gombolay (Georgia Institute of Technology, matthew.gombolay@cc.gatech.edu), Jim Spohrer (ISSIP.org, spohrer@gmail.com), Michelle Zhou (Juji, mzhou@juji-inc.com)
This work examines the shift from AI, which aims to build machines that act independently of humans, toward IA, which treats AI as a tool for augmenting human abilities. The authors propose four key principles for designing effective IA systems:
Simplicity: IA tools should be easy to use, even for people without technical expertise.
Interpretability: Users should be able to understand how the IA system works and how it reaches its conclusions.
Human-centeredness: IA should be designed with the user's needs and goals in mind.
Ethics: IA development and use should follow ethical guidelines and avoid harmful consequences.
The authors also propose a broader IA architecture that extends beyond human-machine interaction to include how data and domain-specific knowledge shape the system. They argue that this broader view helps in achieving the principle of simplicity.
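To make the architectural idea more concrete, here is a minimal illustrative sketch (not from the paper) of how such an IA loop might wire together data, domain knowledge, a transparent suggestion step, and a human decision. All names (DomainKnowledge, IASystem, Suggestion, the risk-score rule) are hypothetical assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    """A machine suggestion plus the reasoning shown to the user (interpretability)."""
    action: str
    rationale: str


class DomainKnowledge:
    """Hypothetical store of domain rules that constrain machine suggestions."""
    def __init__(self, rules: dict):
        self.rules = rules

    def allows(self, action: str) -> bool:
        # Placeholder check; a real system would evaluate richer domain rules.
        return action not in self.rules.get("forbidden_actions", [])


class IASystem:
    """Hypothetical IA loop: data + domain knowledge -> suggestion -> human decision."""
    def __init__(self, knowledge: DomainKnowledge):
        self.knowledge = knowledge

    def suggest(self, data: dict) -> Suggestion:
        # A simple, transparent rule rather than a black-box model (simplicity, interpretability).
        risk = data.get("risk_score", 0.0)
        action = "flag_for_review" if risk > 0.7 else "approve"
        rationale = f"risk_score={risk} compared against threshold 0.7"
        if not self.knowledge.allows(action):
            action, rationale = "defer_to_human", "blocked by a domain rule"
        return Suggestion(action, rationale)


def human_decides(suggestion: Suggestion) -> str:
    # The human stays in control (human-centeredness): accept, override, or defer.
    print(f"Machine suggests: {suggestion.action} ({suggestion.rationale})")
    return suggestion.action  # stand-in for an actual user choice


if __name__ == "__main__":
    system = IASystem(DomainKnowledge({"forbidden_actions": []}))
    decision = human_decides(system.suggest({"risk_score": 0.82}))
    print("Final decision:", decision)
```

The point of the sketch is only that data and domain knowledge enter the loop alongside the human-machine interaction, and that the machine's output carries its rationale so the human can meaningfully accept or override it.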
How it relates to our work: