Responsible AI for Long-term Trustworthy Autonomous Systems (RAILS)

RAILS will explore independent long-term autonomous systems across different applications, including i) autonomous vehicles and ii) autonomous robotic systems such as unmanned aerial vehicles (drones).
Lead Investigator: Lars Kunze, Departmental Lecturer in Robotics, University of Oxford

Making Systems Answer

Our project draws upon research in philosophy, cognitive science, law and AI to develop new ways for autonomous system developers, users and regulators to bridge responsibility gaps, by boosting the ability of systems to deliver a vital component of responsibility, namely answerability.
Lead Investigator: Professor Shannon Vallor, Director of the Centre for Technomoral Futures, Edinburgh Futures Institute

Assuring Responsibility for Trustworthy Autonomous Systems (AR-TAS)

The research project is an interdisciplinary programme of work, drawing on the disciplines of engineering, law, and philosophy, that culminates in a methodology for tracing and allocating responsibility in trustworthy autonomous systems.
Lead Investigator: Dr Ibrahim Habli, Reader in Safety-Critical Systems, University of York

Computational Agent Responsibility

In this multidisciplinary project, we aim to devise a framework for autonomous system responsibility that is philosophically justifiable, effectively implementable, and practically verifiable.
Lead Investigator: Michael Fisher, RAEng Chair in Emerging Technologies, University of Manchester