Bridging Responsibility Gaps by Making Autonomous Systems Answerable

As computing systems become increasingly autonomous, able to independently pilot vehicles, detect fraudulent banking transactions, or read and diagnose our medical scans, we face a growing problem for social trust in technical systems, known as responsibility gaps. Responsibility gaps arise when we struggle to assign moral responsibility for an action with high moral stakes, either because we don’t know who is responsible or because the agent that performed the act doesn’t meet the other conditions for being responsible. Responsibility gaps are a problem because holding others responsible for what they do is how we maintain social trust.
Autonomous systems create new responsibility gaps. They operate in morally high-stakes areas such as health and finance, but software systems aren’t morally responsible agents, and their outputs may not be fully understandable or predictable by the humans overseeing them. To make such systems trustworthy, we need to find a way of bridging these gaps.
Our project draws upon research in philosophy, cognitive science, law and AI to develop new ways for developers, users and regulators of autonomous systems to bridge responsibility gaps by boosting the ability of those systems to deliver a vital component of responsibility: answerability. Responsible agents answer for their actions in many ways: we explain, justify, reconsider, apologise, offer amends, make changes or take future precautions. Importantly, the very act of answering for our actions often improves us, helping us become more responsible and trustworthy in the future. This is why answerability is key to bridging responsibility gaps.
Our ambition is to provide theoretical and empirical evidence, together with computational techniques, that can gradually expand the capabilities of autonomous systems (which, as “sociotechnical systems”, encompass developers, owners, users, etc.) to supply the kinds of answers that people rightly seek from trustworthy agents.