Engineered systems are becoming more complex and, increasingly, more autonomous. However, it has become clear that simple ethical categories, such as good/bad or right/wrong, are insufficient to capture high-level autonomous decision-making, and that we need stronger, practically grounded notions of “responsibility”.
In this multi-disciplinary project, we aim to devise a framework for autonomous systems responsibility that is philosophically justifiable, effectively implementable, and practically verifiable.
Such a framework paves the way for broader philosophical study, formal verification of system responsibility, more sophisticated explanations, and the use of responsibilities as drivers for agent decisions and actions.