Replacing human decision-making with machine decision-making raises challenges around stakeholders' trust in autonomous systems. In the first phase of the project, VESTAS (Verifiably Safe and Trusted Human-AI Systems), we produced a research roadmap presenting challenges and technical solutions for the design and development of safe and trusted autonomous systems. We are now working to deliver a roadmap with initial guidelines for use by stakeholders from our partner, DSTL.
The image below shows the HANA-HAIP Roadmap; more information is available in this published paper.
The HANA-HAIP project builds on the VESTAS outputs to enhance their impact on industrial partners and the public by working collaboratively with our stakeholders to inform policy and practice. We continue to use interdisciplinary approaches, grounded in social-science conceptualisations of trust and computer-science approaches to safety.
HANA-HAIP aims to broaden both the contextual view (wider domains of application) and the inclusive view (a more varied range of stakeholders) in evaluating the trust techniques and interventions proposed in VESTAS, and then to trial these techniques on a real-world case study provided by DSTL.
HANA-HAIP regards integration into society as crucial for the effective deployment of autonomous systems and therefore combines technical, societal, and practical aspects. Capturing the societal concerns of different stakeholders is key to ensuring that autonomous systems “improve rather than harm our physical and mental well-being” and, in a trusted way, “benefit rather than damage our society and economy.”
HANA-HAIP aims to deliver an integrated socio-technical framework that addresses the identified trust challenges and provides technical solutions for fostering trust in autonomous systems.