Investigating the cybersecurity, human factors and trust aspects of screen failures during automated driving using threat analysis.
Replacing the existing deterministic regulatory verification activities that Data Protection Officers rely on with automated reasoning techniques.
Building trust and fostering the adoption of autonomous systems (AS) by capturing, documenting and explicating ‘Master Narratives’. The project utilizes design research, documentary methods, and ethnographic analysis to explore the gap between citizen and expert viewpoints on AS.
Investigating the ethical risks and legal implications related to the collection, access and use of data in autonomous vehicles. Testing the usefulness of datasets and evaluating public acceptance of data recorders.
Demonstrating wound healing in the laboratory, and defining an envelope of operation that balances risks and benefits of machine learning and autonomous control.
Investigating the applicability of machine learning methods to support anticipatory planning for resilience, simulation-based AI techniques for policy appraisal in view of compound risks, and AI-based coordination mechanisms for resilience-aware decision making.
Using LEGO Serious Play workshops to identify the conflicts and confluences in the imaginaries of robotic and autonomous systems (RAS) in the health and social care ecosystem.
Developing a novel approach to assurance through participatory methodology, to underwrite the responsible design, development, and deployment of autonomous and intelligent systems in digital mental healthcare.
Designing algorithms for diabetes management during life transitions using co-design, provenance and explainable AI. This project aims to increase trust and understanding by bringing together clinicians, data scientists, and people with type-1 diabetes.
Understanding the effect of Situational Awareness and take-over request procedures on trust between drivers and highly autonomous vehicles.
Identifying how causal explanation can influence trust in an educational robotic platform, the Kaspar robot, which has been used as a tool for Autism education for more than a decade.
Creating infrastructure for access via web/VR interfaces and telepresence robots to UK laboratories researching TAS.
Understanding the contextual factors and technical approaches underlying trustworthy human-swarm teams.
Exploring human-robot collaborative teams, where questions remain about trust towards the robot within the team and, more broadly, the trust of affected groups (e.g., patients) in tasks carried out by robot-assisted teams.
This project explores how trustworthy autonomous systems embedded in devices in the home can support decision-making about health and wellbeing.
Exploring the use of Socio-Technical Natural Language Processing (NLP) for classifying behavioural online harms within online forum posts (e.g. bullying; drugs & alcohol abuse; gendered harassment; self-harm), especially for young people.
Designing an exemplar, socially responsible, anthropomorphised, natural language interface for automated vehicles.
Investigating the mechanisms that can address consumers’ concerns when relinquishing human control to autonomous vehicles.