Typically, to test whether an autonomous system (AS) is trusted, researchers develop an appropriate use case, often as a one-off that is never used again. For this project, we are bringing such experimental use-case scenarios together in an online library where researchers, regulators, policy-makers, industry professionals, and interested members of the general public can browse, share, and critique them. Each use case is classified in terms of the aspect of trust it tests for, the way trust is measured, the type of risk involved, and a range of other factors.
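To make the classification scheme concrete, here is a minimal sketch of what a library entry might look like as a record type. This is purely illustrative: the source specifies only the classification dimensions (trust aspect, measurement method, risk type, other factors), not a concrete schema, and all field names and the example values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseEntry:
    """One experimental use-case scenario in the library.

    Field names are hypothetical; only the classification
    dimensions come from the project description.
    """
    title: str
    trust_aspect: str        # aspect of trust the scenario tests for
    measurement_method: str  # how trust is measured in the experiment
    risk_type: str           # type of risk involved
    other_factors: list[str] = field(default_factory=list)

# Purely illustrative example of filing a scenario under the taxonomy.
entry = UseCaseEntry(
    title="Handover decisions in assisted driving",
    trust_aspect="reliance under uncertainty",
    measurement_method="behavioural (takeover frequency)",
    risk_type="physical",
    other_factors=["simulated environment"],
)
```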
During this project’s first phase, we analysed use cases, consulted with stakeholders, conducted a survey, and developed a taxonomy. Preliminary work was documented in short papers for an HRI workshop and the first TAS Symposium. Our journal article, “Use cases for the Evaluation of Trustworthy AI: Taxonomy and Review”, has now been submitted for publication and is currently under review.
As a legacy, not only for this project but for the TAS programme as a whole, we are currently completing the library's development and curating its initial collection.
You can read about our previous project, ‘TAS Benchmarks Library and Critical Review’, here.