As Large Language Models (LLMs) become increasingly sophisticated, emerging use cases threaten professions that have so far escaped automation, including psychotherapy, social services, and legal counsel. These concerns are compounded by the fact that the benefits and risks of this potential change remain poorly understood. This project will examine the legal soundness and social acceptability of embedding LLMs in the workflow of legal professionals. We will do so through a series of experiments that test the trustworthiness of LLM-generated legal advice, aiming to improve our understanding of how to make generative AI more responsible.

This project aims to investigate the acceptance and reliability of LLM use in low-stakes legal contexts. We aim to (1) assess the legal soundness of the advice given by the LLM; (2) run a series of experiments on laypeople's and legal experts' trust in, and reliance on, the legal advice produced by that LLM; (3) engage with the legal experts to understand their perception of the LLM and the advice it produced; and (4) develop recommendations for legal experts' use of LLMs.

Our Team

Meet the team

Jeremie Clos

Assistant Professor in the School of Computer Science at the University of Nottingham

Principal Investigator

Eike Schneiders

Postdoctoral Researcher in the Mixed Reality Laboratory at the University of Nottingham

Acting Principal Investigator

Horia Maior

Transitional Assistant Professor in Computer Science at the University of Nottingham

Co-Investigator

Liz Dowthwaite

Senior Research Fellow with the TAS Hub in Horizon DER at the University of Nottingham

Co-Investigator

Tina Seabrooke

Lecturer in Psychology at the University of Southampton

Co-Investigator

Joshua Krook

Postdoctoral Researcher, Faculty of Law, University of Nottingham

Co-Investigator

Joel Fischer

Professor of Human-Computer Interaction in the School of Computer Science at the University of Nottingham

Co-Investigator

Natalie Leesakul

Assistant Professor in Law and Autonomous Systems at the University of Nottingham

Co-Investigator

Elliott Hauser

Assistant Professor, University of Texas at Austin

Advisor

Richard Hyde

Professor of Law, Regulation and Governance, University of Nottingham

Advisor

Hongmei (Mary) He

Professor of Robotics, University of Salford

Advisor