Typically, to test whether an autonomous system (AS) is trusted, researchers develop an appropriate use case, often as a one-off, never to be used again. For this project, we are bringing such experimental use-case scenarios together in an online library where researchers, regulators, policy-makers, industry professionals and interested members of the public can browse, share, and critique them, classified by the aspect of trust they test for, the way trust is measured, the type of risk involved, and a range of other factors.

During this project’s first phase, we analysed use cases, consulted with stakeholders, conducted a survey, and developed a taxonomy. Preliminary work was documented in short papers for an HRI workshop and the first TAS Symposium. Our journal article, “Use cases for the Evaluation of Trustworthy AI: Taxonomy and Review”, has now been submitted for publication and is currently under review.

As a legacy, not only of this project but of the TAS programme as a whole, we are now completing the library's development and curating its initial collection.

You can read about our previous project, ‘TAS Benchmarks Library and Critical Review’, here.

Our Team

Peta Masters

Research Associate, King’s College London

Lead Contact

Yang Lu

Senior Lecturer in Computer Science, York St John University

Co-Investigator

Sachini Weerawardhana

Research Associate, King’s College London

Co-Investigator

Liz Dowthwaite

Senior Research Fellow, Horizon Digital Economy Research, University of Nottingham

Co-Investigator

Paul Luff

Professor of Organisations and Technology, King’s College London

Advisor
Luc Moreau

Head of Department of Informatics, King’s College London

Advisor

Maire Byrne

Senior Research Scientist, DSTL

Industry Partner