Date: 04 May 2021
Author: Mohammad Divaband Soorati, project lead contact, Alan Turing Research Fellow in Human-Machine Teaming, University of Southampton.
Technology now allows us to build affordable unmanned aerial vehicles (UAVs) that can capture high-quality data over long periods of time. We can fly a large group of UAVs for a wide range of applications in the private and public sectors. It is clear that deploying a large group of UAVs can enhance our capabilities (e.g., mapping environments, delivering products, etc.), but there are many challenges involved in controlling a swarm.
The first question is whether we can operate a large swarm of UAVs with only one or a few pilots. Granting a certain level of autonomy to the UAVs can eliminate much of the complexity of dealing with a swarm. That way the operator can issue high-level commands and the swarm can take care of the execution. Autonomous swarms can be quite helpful in simple and predictable environments, but imagine a disaster management situation. Should the pilots of the UAVs rely on the decisions made by the swarm? Obviously, we are far from trusting a swarm in a fully autonomous mode, but how much do we know about the underlying reasons for (mis/dis)trust?
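To make this division of labour concrete, here is a toy sketch (purely illustrative, not the project's actual control stack): the operator issues one high-level "survey this area" command, and each UAV autonomously derives its own waypoint, so no individual flight paths need to be planned by hand.

```python
import math

def decompose_survey_command(num_uavs, centre, radius):
    """Hypothetical high-level command: 'survey a circular area'.
    Each UAV takes an evenly spaced point on the circle's perimeter,
    so a single operator command fans out into per-UAV tasks."""
    waypoints = []
    for i in range(num_uavs):
        angle = 2 * math.pi * i / num_uavs
        waypoints.append((centre[0] + radius * math.cos(angle),
                          centre[1] + radius * math.sin(angle)))
    return waypoints

# One command, four UAVs, four automatically derived waypoints.
print(decompose_survey_command(4, (0.0, 0.0), 100.0))
```

The point of the sketch is only that the operator's mental load stays constant as the swarm grows: adding UAVs changes the fan-out, not the command.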
We first need to understand the building blocks of trustworthy interaction between human operators and a swarm of UAVs. We can then go ahead and engineer the behaviour of the swarm. How can we increase situation awareness and help the drone operator understand and predict the state of the swarm? Can we develop an AI system that recommends actions to improve the performance of the human-swarm system? What if the communication channels between UAVs, or between the UAVs and the human operator, are not reliable? Can we compress the data and thereby maintain high performance in constrained environments? Can we study and control risk factors to facilitate risk-aware behaviour in swarms?
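One of the questions above, compressing swarm data for unreliable or constrained links, can be illustrated with a toy example (an assumption-laden sketch, not the project's method): instead of transmitting every UAV's position, the swarm could send a compact statistical summary that still supports the operator's situation awareness.

```python
import statistics

def summarise_swarm_state(positions):
    """Toy state compression: replace N (x, y) positions with a
    centroid and spread, shrinking the message from 2N numbers
    to 4. A real system would have to decide what level of detail
    the operator's situation awareness actually requires."""
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    return {
        "centroid": (statistics.fmean(xs), statistics.fmean(ys)),
        "spread": (statistics.pstdev(xs), statistics.pstdev(ys)),
    }

# Four UAVs at the corners of a square compress to one summary.
positions = [(0, 0), (10, 0), (0, 10), (10, 10)]
print(summarise_swarm_state(positions))
```

The trade-off this toy makes explicit is the research question itself: the summary is cheap to transmit, but the operator loses the ability to spot a single misbehaving UAV, which is exactly the kind of detail that matters for trust.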
We are a team of around 15 experienced and early-career researchers (professors, post-docs and PhD students), and we work closely with our industrial partners, Dstl and Thales, to find answers to some of these questions. The Trustworthy Human-Swarm Partnerships in Extreme Environments project will run from February 2021 to March 2022 as an agile research project in the UKRI Trustworthy Autonomous Systems programme. We will start by drawing up use cases and outlining the requirements for trust in human-swarm partnership with our partners, and then begin the swarm engineering process. We are running two exciting workshops with several industrial partners in May 2021. Our next blog post, in summer, will focus on the outcomes of these workshops and our user-centred requirement specification for trustworthy human-swarm interaction.