Using Machine Learning to ensure that cancer treatment decisions are data-driven, improving patient outcomes and health equity.
Working with professional dancers with differing disabilities, this project explores deeply embodied trust in autonomous systems through a process of bringing expert moving bodies into harmony with robots.
Combining expertise in software development and digital forensics with Responsible Innovation activities to address a crisis of trust and practice in mobile phone extraction for criminal investigations.
Developing an Automated Analysis of Electoral Spending Disclosures in the UK
Investigating how electoral regulators and management bodies can benefit from using automated systems to better analyse and administer elections. It is widely accepted not only that democratic elections should be free and fair, but also that citizens should believe that elections exhibit these traits.
Following on from a previous project using Lego Serious Play, we now seek to move from the imaginary to the concrete by concentrating on an actual robot to help with dressing which is under development. We’ll investigate how this might integrate within the existing health-social care ecosystem in the UK.
Harnessing interdisciplinary methods from across Design, Human-Computer Interaction and Science & Technology Studies research, we will collaborate closely with a range of stakeholders to rethink current AS infrastructures and anticipate resilient and efficient digital energy transition pathways for Resource Responsible Trustworthy Autonomous Systems design.
Using interdisciplinary approaches to study how the design choices of Conversational Agents (CAs), such as modality, embodiment and anthropomorphism, and Older Adults’ mental models, attributes (e.g., gender) and conditions (e.g., loneliness) relate to trust in CAs.
Developing an integrated framework for the autonomous detection, diagnosis and treatment of tree pests and diseases, and trialling key components.
Replacing human decision-making with machine decision-making creates challenges for stakeholders’ trust in autonomous systems.
In Leap of Faith 2.0, we investigate what checks must be made and who makes them for whom before taking autonomous systems designed for trust into the real world.
As Large Language Models (LLMs) become increasingly sophisticated, emerging use cases threaten professions that have so far escaped automation, including psychotherapy, social services, and legal counsel. This project will examine the legal soundness and social acceptability of embedding LLMs in the workflow of legal professionals.
Typically, to test whether an autonomous system (AS) is trusted, researchers develop an appropriate use case, often as a one-off, never to be used again. For this project, we are bringing such experimental use-case scenarios together in an online library where researchers, regulators, policy-makers, industry professionals and interested members of the general public can browse, share, and critique them, classified in terms of the aspect of trust they test for, the way trust is measured, the type of risk involved, and a range of other factors.
Would you trust a robot more if it exhibited more human-like behaviour? Deception and misleading communicative patterns in user-facing AI systems are among the key problems that undermine the trustworthiness of AI and Autonomous Systems. While “dark patterns” are well understood in conventional interactive systems (e.g., eCommerce systems), they have not been given sufficient consideration in the context of anthropomorphised robots, particularly as these are becoming commonplace in our homes and workplaces.
Responsible research and innovation (RRI) foregrounds the social desirability, ethical acceptability and environmental sustainability of research and innovation. The aim of this project is to learn from the experiences of RRI across the TAS Hub and network, and to promote best practice and resources arising from this to the wider TAS and ICT research community.
Modern telepresence robots are semi-autonomous mobile devices that provide remote access into a setting, allowing users not only to video call but also to move around the space, either manually or autonomously (by indicating only the location they would like the robot to move to). They are being adopted in domains such as museums and workplaces; however, their uptake remains a challenge due to technical (e.g. limited view of surroundings), infrastructural (e.g. inaccessible spaces), and social (e.g. requiring assistance from others) factors. This project will develop a programme for continuous public engagement with organisations and stakeholders, with a particular interest in, but not limited to, museum contexts, to co-explore telepresence robot adoption and deployment.