November 10, 2021
Robots and AIs are increasingly involved in every facet of human life: transportation, warfare, criminal justice, medicine and, of course, social media and communication. And while in some cases they are promised to be better than we are (fairer, more data-driven, less prone to emotion), they often mirror back to us our own moral blind spots and biases. Perhaps we need to design AI systems that are not only good at their jobs but are, in a sense, good people – good moral agents. How should we go about doing that? Do we want AIs that learn from and mimic our messy moral reasoning? Or do we want AI to be morally better than us? What would that look like, and would we ever be able to take moral guidance from a robot? This CHI salon brings together an interdisciplinary set of scholars to examine the practical and theoretical questions raised by the goal of pursuing more ethical AI.

Panelists: Laura Sizer (Hampshire and MHC philosophy), Lee Spector (Amherst computer science), Joseph Moore (Amherst philosophy), Heather Pon-Barry (MHC computer science), and Philip Thomas (UMass computer science).