In the previous installment of the Leap of Faith project, the team built a collection of computer simulations to understand the design, situational, and human factors involved in generating instantaneous trust when an autonomous system communicates instructions to a human.
In Leap of Faith 2.0, we investigate what checks must be made, and by whom and for whom, before autonomous systems designed for trust are taken into the real world.
We answer this question through a sociotechnical lens, taking a participatory approach with stakeholders in anesthetics.
The findings will feed into the co-production of human-autonomous system interaction use cases, validated by stakeholders, which will bridge the gap between lab experiments and the real-world deployment of trustworthy autonomous systems (TAS).