Robot manipulation combined with perception and self-verification capabilities promises to substantially improve the lives of impaired individuals by supporting their physical and mental well-being.
Here, we propose a collaborative project between researchers across the Trustworthy Autonomous Systems Hub and the TAS Nodes to bridge the gap between state-of-the-art research in Human-Robot Interaction (HRI) and formal verification, addressing the trust deficit in collaborative human-robot working environments.
We present an integrated approach to human-centric, robot-assisted dressing with self-verification capabilities that ensure user safety during collaborative HRI.
Research Associate, University of Sheffield
Research Associate, University of York
Research Associate, University of Sheffield
Research Associate, University of York
Post-Doctoral Researcher, University of Manchester
Professor of Computer Science, Durham University
Lecturer, University of Lincoln
Head of the Software Testing Group, University of Sheffield
Professor of Medical Robotics
Lecturer in Work, Interaction & Technology, King’s College London
Research Associate, University of York
Research Associate, King’s College London
Associate Professor, University of Southampton
Post-doctoral Research Associate, King’s College London
Post-doctoral Research Associate, University of Lincoln
Research Associate in the TAS Verifiability Node, Durham University
Professor of Software Engineering
Associate Professor, University of Lincoln
Associate Professor, University of Leicester
Professor of Software Verification, University of York
Professor of Computer Science, University of York
Senior Lecturer, University of Sheffield
Principal Staff – Robotics, The Johns Hopkins University Applied Physics Laboratory
Professor of Software Engineering, King’s College London
North Yorkshire County Council
Johns Hopkins University Applied Physics Laboratory
Stroke Association