REFORMIST: Mirrored Decision Support Framework for Multidisciplinary Teams in Oesophageal Cancer

Using Machine Learning to ensure that cancer treatment decisions are data-driven, for better patient outcomes and health equality

Lead Contact: Dr Ganesh Vigneswaran, NIHR Clinical Lecturer in Interventional Radiology, University of Southampton

Embodied trust in TAS: robots, dance, different bodies

Working with professional dancers with differing disabilities, this project explores the idea of deeply embodied trust in autonomous systems through a process of bringing expert moving bodies into harmony with robots.

Lead Contact: Professor Sarah Whatley, Director and Professor, Centre for Dance Research, Coventry University

Trustworthy and Useful Tools for Mobile Phone Extraction

Combining expertise in software development and digital forensics with Responsible Innovation activities to address a crisis of trust and practice in mobile phone extraction for criminal investigations.

Lead Contact: Helena Webb, Transitional Assistant Professor, University of Nottingham

Delivering Trustworthy Electoral Oversight: Developing an Automated Analysis of Electoral Spending Disclosures in the UK

Investigating how electoral regulators and management bodies can benefit from using automated systems to better analyse and administer elections. It is widely accepted not only that democratic elections should be free and fair, but also that citizens should believe that elections exhibit these traits.

Lead Contact: Sam Power, Senior Lecturer in Politics, University of Sussex

Mapping trustworthy systems for RAS in social care (MAP-RAS)

Following on from a previous project using Lego Serious Play, we now seek to move from the imaginary to the concrete by concentrating on an actual dressing-assistance robot currently under development. We’ll investigate how this might integrate into the existing health and social care ecosystem in the UK.

Lead Contact: Stevienna de Saille, Lecturer in Sociology, University of Sheffield

InterNET ZERO: Towards Resource Responsible Trustworthy Autonomous Systems

Harnessing interdisciplinary methods from across Design, Human-Computer Interaction and Science & Technology Studies research, we will collaborate closely with a range of stakeholders to rethink current AS infrastructures and anticipate resilient and efficient digital energy transition pathways for Resource Responsible Trustworthy Autonomous Systems design.

Lead Contact: Dr Michael Stead, Lecturer in Sustainable Design, Lancaster University

Co-designing Inclusive and Trustworthy Conversational Agents on Basic Services with Older Adults (CA4OA)

Using interdisciplinary approaches to study how design choices of Conversational Agents (CAs) (modality, embodiment, anthropomorphism) and Older Adults’ mental models, attributes (e.g., gender) and conditions (e.g., loneliness) relate to trust in CAs.

Lead Contact: Professor Effie Lai-Chong Law, Professor of Computer Science, Durham University

Autonomous Systems for Forest Protection (ASPEN)

Developing an integrated framework for the autonomous detection, diagnosis and treatment of tree pests and diseases, and trialling key components.

Lead Contact: Dr Norman Dandy, Director of the Sir William Roberts Centre for Sustainable Land Use, Bangor University

Harnessing trust and acceptance in Human-AI Partnerships (HANA-HAIP)

Replacing human decision-making with machine decision-making raises challenges for stakeholders’ trust in autonomous systems.

Lead contact: Dr Asieh Salehi Fathabadi, Senior Research Fellow, University of Southampton

Leap of Faith 2.0 – Implementing Artificial Intelligence (AI) in Anesthetics Practice: Exploring Stakeholder Perspectives

In Leap of Faith 2.0, we investigate what checks must be made and who makes them for whom before taking autonomous systems designed for trust into the real world.

Lead contact: Sachini Weerawardhana, Research Associate, King's College London

Responsible Employment of Generative AI for Legal Services

As Large Language Models (LLMs) become increasingly sophisticated, emerging use cases threaten professions that have so far escaped the threat of automation, including psychotherapy, social services, and legal counsel. This project will examine the legal soundness and social acceptability of embedding LLMs in the workflow of legal professionals.

Lead contact: Jeremie Clos, Assistant Professor, School of Computer Science, University of Nottingham

TAS UseCase Library

Typically, to test whether an autonomous system (AS) is trusted, researchers develop an appropriate use case, often as a one-off, never to be used again. For this project, we are bringing such experimental use-case scenarios together in an online library where researchers, regulators, policy-makers, industry professionals and interested members of the general public can browse, share, and critique them, classified in terms of the aspect of trust they test for, the way trust is measured, the type of risk involved, and a range of other factors.

Lead contact: Peta Masters, Research Associate, King's College London

TAS-GAIL: “Go Ahead I’m Listening…”

Would you trust a robot more if it exhibits more human-like behaviour? Deception and misleading communicative patterns in user-facing AI systems are some of the key problems that undermine the trustworthiness of AI and Autonomous Systems. While “dark patterns” are well understood in conventional interactive systems (e.g., eCommerce systems), they have not been given sufficient consideration in the context of anthropomorphised robots, particularly as these are becoming commonplace in our homes and workplaces.

Lead contact: Marta Romeo, Assistant Professor, Heriot-Watt University

TAS Responsible Research and Innovation (RRI) Impact & Legacy

Responsible research and innovation (RRI) foregrounds the social desirability, ethical acceptability and environmental sustainability of research and innovation. The aim of this project is to learn from the experiences of RRI across the TAS Hub and network, and to promote best practice and resources arising from this to the wider TAS and ICT research community.

Lead contact: Alan Chamberlain, Principal Research Fellow, Faculty of Science, University of Nottingham

TERPLAY (TElepresence Robots PLAYground)

Modern telepresence robots are semi-autonomous mobile devices that provide remote access into a setting, allowing users not only to video call but to move around the space, either manually or autonomously (only indicating the location they would like the robot to move to). They are being adopted in domains such as museums and workplaces; however, their uptake remains a challenge due to technical (e.g. limited view of surroundings), infrastructural (e.g. inaccessible spaces), and social (e.g. requiring assistance from others) factors. This project will develop a programme for continuous public engagement with organisations and stakeholders, with a particular interest in, but not limited to, museum contexts, to co-explore telepresence robot adoption and deployment.

Lead contact: Gisela Reyes Cruz, Transitional Assistant Professor, University of Nottingham
