TARICS

Trustworthy Accessible Robots for Inclusive Cultural experienceS. TARICS aims to create an interactive cultural experience for museum visits by provisioning a social robot to make the experience more accessible and inclusive for people with learning disabilities and/or autism.
Lead contact: Maria Jose (Marisé) Galvez Trigo, Lecturer in Computer Science, University of Lincoln

Empowering Future Care Workforces

Scoping Capabilities to Leverage Assistive Robotics through Co-Design. This project aims to understand how health and social care professionals can benefit from using assistive robotics on their own terms, and in ways that are safe, trustworthy, and meet the legal and ethical standards of their professions.
Lead contact: Dr Cian O’Donovan, Senior Research Fellow, University College London

Reimagining TAS with Disabled Young People

This project brings together Disabled Young People (DYP), social and computer science researchers, and school and industry partners. We centralise the expertise and aspirations of DYP around questions of trust, resilience and capacity in relation to autonomous systems, thus embedding inclusion, equity, and responsible research and innovation in studies of TAS.
Lead contacts: Dan Goodley, Professor, iHuman, University of Sheffield | Lauren White, Research Associate, iHuman/Education, University of Sheffield

Trustworthy Autonomous Recommender Systems on Music Streaming Platforms

Although Recommender Systems (RS) are intended to be consumer-centric, they tend to exhibit inherent biases in the recommendations they make. This project examines how these biases in the RS impact market competition between suppliers of products on the RS platform.
Lead contacts: Peter Ormosi, Professor of Competition Economics, University of East Anglia | Rahul Savani, Professor of Computer Science, University of Liverpool

DAISY

Diagnostic AI System for Robot-Assisted A&E Triage. This project aims to prototype a robot-assisted A&E triage solution for reducing patient waiting time and doctor workload.
Lead Contact: Radu Calinescu, Professor of Computer Science, TAS Resilience Node PI, University of York

Virtually There

Investigating the effect of data sonification (data display using sound) on user trust, workload, stress and task performance when a multi-robot team is teleoperated using virtual reality (VR) in a nuclear decommissioning context.
Lead contact: Verity McIntosh, Senior Lecturer, University of the West of England

AgriTrust

Trust Assurance in Autonomous Cyber-Physical Agriculture: Farms of the Future
Lead Contact: Shishir Nagaraja, Reader in Computer Security, University of Strathclyde

Intersectional Approaches to Design and Deployment of Trustworthy Autonomous Systems

Exploring how intersectional approaches can inform the design and deployment of TAS toward the creation of an inclusive, fair and just world. We focus on the health and maritime sectors to address a range of individual, technical, systemic, place-based, professional, cultural and institutional issues from an intersectional perspective.
Lead contact: Caitlin Bentley, Lecturer in AI-enabled Information Systems, University of Sheffield

Communicating liability in Autonomous Vehicles

Examining how liability is perceived and communicated between drivers of autonomous vehicles (AVs) and third-party insurers, and aiming to develop integrated, commonly agreed mental models that help each stakeholder understand their risks and responsibilities in vehicle control.
Lead Contacts: Elena Nichele, Research Fellow, University of Nottingham; Mohammad Naiseh, Assistant Professor in Data Science and AI, Bournemouth University

Co-Design of Context-Aware Trustworthy Audio Capture

Exploring people's perceptions of data protection, privacy, and security when using autonomous audio systems.
Lead Contact: Jennifer Williams, Research Fellow, University of Southampton

The Citizen Carbon Budget

Investigating the technical feasibility, public acceptability, and trustworthiness of an autonomous system-driven carbon budget.
Lead Contacts: Dr Justyna Lisinska, Research Fellow, The Policy Institute, King's College London; Joel Fischer, Professor of Human-Computer Interaction, University of Nottingham

XHS: eXplainable Human-swarm Systems

Exploring how to enable a human-swarm teaming system to assess the criticality of a given situation and decide what needs to be displayed, how, when and why.
Lead Contact: Dr Mohammad Divband Soorati, Lecturer, University of Southampton

Understanding Internet and Technology Delusions of Suspicion: Impact on Engagement with Cognitive Training

Investigating the barriers and facilitators to engaging with autonomous digital systems and online cognitive training, using a mixed-methods study co-produced with public involvement partners.
Lead Contact: Dr Emma Palmer Cooper, Lecturer and Researcher, Centre for Innovation in Mental Health, University of Southampton

Privacy Preserving Detection of Online Misinformation

Generating a typology of the misinformation strategies and associated linguistic constructs that people encounter online, as well as addressing barriers to digital information literacies to build public resilience to untrustworthy content, as recommended by The Royal Society (2022).
Lead Contact: Jeremie Clos, Assistant Professor, University of Nottingham

Safety and desirability criteria for AI-controlled aerial drones on construction sites

Investigating the control of aerial drones as they are employed on construction sites for monitoring or transportation tasks.
Lead Contact: Dr David Bossens, Research Fellow, University of Southampton

Trustworthy Autonomous Systems and Socio-Technical Innovation in Data Protection by Design and Default (DPbDD)

This project seeks to develop a socio-technical framework in collaboration with the Information Commissioner’s Office and Connected Places Catapult to enable the implementation of DPbDD principles in a real-world context.
Lead Contact: Daria Onitiu, Postdoctoral Researcher, University of Oxford

Methodological perspectives on the ethics of trust and responsibility of autonomous systems

Convening a workshop series with the Governance and Functionality Nodes, to explore theoretical and empirical approaches to understanding the ethics of trust(worthiness) in autonomous systems.
Lead Contact: Benedicte Legastelois, Research Associate, King's College London

TAS ART: Augmented Robotic Telepresence Integrator

Exploring the potential of Augmented Robotic Telepresence (ART) to improve the trustworthiness, inclusion/accessibility and independence afforded to remote users of Mobile Robotic Telepresence (MRP).
Lead Contact: Ayse Kucukyilmaz, Assistant Professor, Robotics and Autonomous Systems, University of Nottingham

Digital Twins for Human-Assistive Robot Teams

Investigating approaches for developing and using digital twins which incorporate co-dependent and co-evolved models representing patients and assistive robots.
Lead Contact: Dominic Price, TAS Hub and Horizon Digital Economy Research Fellow, University of Nottingham

Preserving Marine Life in a Shipping World: AI to the Rescue (PREVAIL)

Developing a framework focused on the deployment of verified, explainable Deep Neural Networks that will underpin an advisory system to reduce whale strikes.
Lead Contact: Dr Calum Corrie Imrie, Research Associate in Computer Science, University of York

TAS RRI: Responsible Research and Innovation

Helping the researchers in the TAS Hub and network by promoting and supporting a range of responsible research practices.
Lead Contacts: Dr Alan Chamberlain, Senior Research Fellow, University of Nottingham; Chris Greenhalgh, Professor of Computer Science, University of Nottingham

Leap of Faith

Exploring the conditions under which a human subject can be persuaded to trust an autonomous search and rescue system.
Lead Contact: Sachini Weerawardhana, Research Associate, King's College London

Verifiably Human-Centric Robot Assisted Dressing

An integrated approach to human-centric, robot-assisted dressing, with self-verifying capabilities to ensure user safety during collaborative human-robot interaction (HRI).
Lead Contact: Dr Yasmeen Rafiq, Research Associate, University of Sheffield

Verifiably Safe and Trusted Human-AI Systems (VESTAS)

VESTAS intends to provide a roadmap on challenges and technical requirements to be addressed for the design and development of verifiably safe and trusted autonomous human-AI systems. This roadmap includes guidelines that different stakeholders can utilise.
Lead Contact: Dr Asieh Salehi Fathabadi, Senior Research Fellow, University of Southampton

TAS Benchmarks Library and Critical Review

Formulating a taxonomy and developing an accompanying online library from which researchers can reference use cases by domain and by the aspect of trust each use case tests for (e.g., human willingness to comply, conformance to expectation).
Lead Contact: Peta Masters, Research Associate, King’s College London

Imaging predictors of Oesophageal Cancer MDT patient outcomes

The Upper Gastrointestinal (UGI) multidisciplinary team (MDT) makes critical treatment decisions in every oesophageal cancer (OC) patient’s journey (e.g. surgery or chemotherapy). Machine learning (ML) approaches offer the potential to standardise and produce consistent, data-driven decisions.
Lead Contact: Tim Underwood, MRC Clinician Scientist, University of Southampton

Foundations of a Trustworthiness Risk Assessment Framework for AI Systems (F-TRIADS)

A roadmap for the creation of an open trustworthiness risk assessment community as part of the TAS Hub.
Lead Contact: Michael Boniface, Director of the IT Innovation Centre, University of Southampton

Open-Source Interaction Interface for Human-Swarm Teaming

Motivated by responsible research and innovation (RRI), this project aims to provide an open-source platform suitable for experimentation with a large group of semi-autonomous systems that can be monitored and controlled by a few human operators.
Lead Contact: Mohammad D. Soorati, Lecturer, University of Southampton

Principles for Building Fair and Trustworthy Autonomous Fraud Detection in Retail

Systems that autonomously detect fraud in retail returns are increasingly being used and are controversial. In this project, we will create the principles and guidelines for ensuring that such systems are fair and transparent.
Lead Contact: Prof Enrico Gerding, Professor in Artificial Intelligence, School of Electronics and Computer Science (ECS), University of Southampton

Digital Forensics Platform

This project will design, implement and test an open-source modular digital forensics platform. Its unique quality will be a filtering service that offers proportionality in investigations.
Lead Contact: Helena Webb, Transitional Assistant Professor, University of Nottingham

Mapping Contracts and Licenses around Large Generative Models: private ordering and innovation

Pretrained models for creating AI images from text prompts, and even text-to-video, are being hailed as inaugurating a new era in AI development. Private regulation through terms and conditions of use is one important avenue for controlling these models.
Lead Contact: Prof. Lilian Edwards, Professor of Law, Innovation and Society, Newcastle University

Critically Exploring Biometric AI Futures

The project begins by mapping the state of the art and different socio-technical and legal issues raised by future uses of biometric artificial intelligence systems in law enforcement.
Lead Contact: Dr Lachlan Urquhart, Senior Lecturer in Technology Law & Human-Computer Interaction, University of Edinburgh

CHAPTERs – Cognitive HumAn in the looP TEleopeRations

CHAPTERs focuses on developing a testbed for user studies that involve humans and cobots, particularly in teleoperations scenarios.
Lead Contact: Max L Wilson, Associate Professor, University of Nottingham