Governance and Regulation

Node in Governance and Regulation Annual Report

Feb 10, 2022
Foreword from Professor Subramanian Ramamoorthy, Node PI

 

How can we trust autonomous computer-based systems? By autonomous we mean “independent and having the power to make your own decisions”. We tackle the issue of trusting autonomous systems (AS) by building into modern development practice: experience of regulatory structure and practice; notions of cause, responsibility and liability; and tools that create evidence of trustworthiness.

Modern development practice includes continuous integration and continuous delivery. These practices allow the continuous gathering of operational experience, its amplification through the use of simulators, and the folding of that experience into development decisions. This, combined with notions of anticipatory regulation and incremental trust-building, forms the basis for a new practice in the development of autonomous systems, in which regulation, systems, and evidence of dependable behaviour co-evolve incrementally to support our trust in those systems.

Our Research Node brings together a diverse multidisciplinary team from Edinburgh, Heriot-Watt, Glasgow, KCL, Nottingham and Sussex, involving computer science and AI specialists, legal scholars, AI ethicists, as well as experts in science and technology studies and design ethnography. Together, we present a novel software engineering and governance methodology that includes:

1) New frameworks that help bridge gaps between legal and ethical principles (including emerging questions around privacy, fairness, accountability and transparency) and an autonomous systems design process that entails rapid iterations driven by emerging technologies (e.g. machine-learning-in-the-loop decision-making systems).

2) New tools for an ecosystem of regulators, developers and trusted third parties to address not only functionality and correctness (the focus of many other Nodes) but also questions of how systems fail, and how the evidence associated with failure can be managed to facilitate better governance.

3) An evidence base, drawn from full-cycle case studies of taking AS through regulatory processes as experienced by our partners, to facilitate policy discussion regarding reflexive regulation practices.

Our work is grounded in concrete problems arising within our wide network of partner organisations, who work at the forefront of mobility, health and social care, and a variety of other application domains that affect our daily lives, raising the need for a multi-faceted understanding of trustworthiness. Our methodology is therefore iterative, interleaving basic methods development within the Node with engagement in the form of case studies, guiding the Node’s activities towards questions that are not only scientifically interesting but also societally relevant.

Beginning such an initiative in the middle of a global pandemic, with all the uncertainties implied by public health measures, has made reaching outwards particularly hard. Not only has this restricted engagement with the public; the pressure on frontline staff, such as those in health and social care, has also limited engagement with the professional community.

At the same time, conversations around the formal regulation of AI, and around mechanisms for informal governance within organisations, have become even more timely.

Governments representing all of the world’s major economies are deliberating mechanisms for the governance and regulation of technology, some of which speak directly to autonomous systems and to the specific issues of relevance to the TAS programme. We are excited to contribute to this conversation by developing a methodology for better governance, one that allows society to benefit from the potential envisioned by technological optimists without compromising the broader social good.

 

Read the full report