Typically, when researchers want to test whether an autonomous system (AS) is trusted, they develop a use-case scenario as a one-off, never to be used again. For this project, we are documenting and reviewing scenarios that have been, or could be, used to evaluate human trust in AS, including some adapted from human-human trust experiments and others envisioned by our TAS artists-in-residence.
We have consulted our stakeholders, formulated a taxonomy, and are now building an online library where researchers, regulators, policy-makers, industry professionals and interested members of the general public can browse, share and critique experimental use-case scenarios, each classified by the aspect of trust it tests for, the way trust is measured, the type of risk involved, and a range of other factors.
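To make the classification concrete, the sketch below models a library entry as a simple record whose fields mirror the taxonomy dimensions just listed. This is a minimal illustration in Python, not the project's actual schema; the class and field names (UseCaseScenario, trust_aspect, trust_measure, risk_type, other_factors) are assumptions chosen for this example.

    from dataclasses import dataclass, field

    # Hypothetical sketch of a library entry. The field names mirror the
    # classification dimensions described above; they are illustrative
    # assumptions, not the project's actual schema.
    @dataclass
    class UseCaseScenario:
        title: str
        description: str
        trust_aspect: str        # aspect of trust the scenario tests for
        trust_measure: str       # how trust is measured (e.g. self-report, behavioural)
        risk_type: str           # type of risk involved (e.g. physical, financial)
        other_factors: list[str] = field(default_factory=list)

    # Example entry, purely for illustration:
    scenario = UseCaseScenario(
        title="Robot handover under time pressure",
        description="Participant decides whether to accept an object from a robot.",
        trust_aspect="competence",
        trust_measure="behavioural (acceptance rate)",
        risk_type="physical",
        other_factors=["adapted from a human-human trust experiment"],
    )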
OUR PUBLICATIONS
Towards an Open Source Library and Taxonomy of Benchmark Usecase Scenarios for Trust-Related HRI Research, Workshop on Advancing HRI Research and Benchmarking Through Open-Source Ecosystems, ACM/IEEE International Conference on Human-Robot Interaction (HRI), March 13, 2023, Stockholm, Sweden.
A Practical Taxonomy of TAS-related Usecase Scenarios, First International Symposium on Trustworthy Autonomous Systems, July 11-12, 2023, Edinburgh, Scotland. Winner: Best Poster, TAS '23.
SEE ALSO
Video presentation introducing the contributions our TAS artists-in-residence have made to the use-case library.
Audio presentation discussing the project in an episode of the Living with AI Podcast on Equality and Autonomous Systems.