Celebrating the achievements of the £33M UKRI Trustworthy Autonomous Systems Programme

Our Definitions

“Autonomous System”

A system involving software applications, machines, and people that is able to take actions with little or no human supervision.1

“Trust in Autonomous Systems”

Trust is defined in many ways by different research disciplines. The TAS programme focuses on those notions that concern the relationship between humans (individuals and organisations) and autonomous systems.

“Trustworthy Autonomous Systems”

Autonomous systems are trustworthy when their design, engineering, and operation ensure they generate positive outcomes and mitigate potentially harmful ones. Whether they can be trusted depends on a number of factors, including but not limited to:

• Their robustness in dynamic and uncertain environments.

• The assurance of their design and operation through verification and validation processes.

• The confidence they inspire as they evolve their functionality.

• Their explainability, accountability, and understandability to a diverse set of users.

• Their defences against attacks on the system, its users, and the environment in which it is deployed.

• Their governance and the regulation of their design and operation.

• The consideration of human values and ethics in their development and use.

1 “Autonomous system” is also used in different disciplines to mean, specifically, robots, routing protocols for the Internet (https://en.wikipedia.org/wiki/Autonomous_system_(Internet)), or AI-powered systems. Our definition includes systems involving humans and machines working together (e.g., human-agent collectives or human-machine teams) and automated decision-making processes (e.g., automated recruitment or facial recognition systems). Machine-to-machine and human-to-human trust are also important concerns and may be relevant to TAS, but they are not central to the TAS programme.