The Next Big Thing in Trustworthy AI: Co-Design of Context-Aware Trustworthy Audio Capture

Authored by the Co-Design of Context-Aware Trustworthy Audio Capture project team


Do we really trust the autonomous audio systems (AAUS) that surround us? If you have never thought about this question before, giving a crisp answer may feel overwhelming. And if you find yourself leaning towards a negative answer, you might next ask, “What do state-of-the-art AAUS lack that prevents them from winning public trust, reliance and usability?” These are the fundamental research questions that researchers at the University of Southampton raised as they set out to understand public perceptions of, and beliefs about, the use of AAUS in different contexts. The project ‘Co-Design of Context-Aware Trustworthy Audio Capture’, led by Dr Jennifer Williams at the University of Southampton, is the first work of its kind: it explores people’s trust in AAUS across a broad variety of contexts and probes ways to enhance that trust by protecting their informational self-determination and privacy.


[Image: human silhouette with speech bubbles, illustrating diverse cultures and international communication.]

We are living in a digital era in which the ubiquity of technology has completely changed the way we live, feel secure and envision our future. Voice, speech and other audio data are no exception when it comes to data protection: audio biomarkers tend to contain personal identity attributes that are protected within the scope of human rights. Misuse of this data raises key issues of data protection, security and privacy. AAUS present serious trust-related issues and socio-technical challenges for people across a multitude of domains, from domestic settings to entertainment and education. Audio deepfakes can be created from only a few seconds of speech, devices are “always listening” and sometimes capture audio unintentionally, and the ethics of voice/speech ‘rights’ are particularly complex, especially in the creative industries. All of these issues demonstrate the need to explore trust in AAUS in terms of privacy/security, explainability, and governance.


Researchers at the University of Southampton and the University of Nottingham have identified a disconnect between several key areas: the machine-learning audio tools used in AAUS, social and individual perspectives on trust in different contexts, guidelines for legal and policy oversight of voice information to protect individuals from a variety of harms, and consumer understanding of audio/voice technology capabilities. The project aims to fill this gap by bringing speech scientists, legal experts, social scientists, ethicists and creative-industry partners together on one platform and by examining people’s understanding of the technology and their level of trust in AAUS in different contexts. The end goal is to identify what tools could be created to help assure trust for people concerned about their rights.


The project team will use a mixed-methods research approach, involving questionnaires, interviews, vignettes and audio experiences, to collect, analyse and interpret evidence that can inform future audio technology. The ongoing survey collects information about perceptions of trust, including responses to a questionnaire and to audio modifications in various contexts, such as audio captured by smart devices in the home or workplace, audio recordings in public spaces, or audio manipulation. These scenarios will help us understand the risks involved in posting voice recordings publicly (for example on TikTok or YouTube), in capturing voice implicitly, or in impersonating someone using deepfake technology. Exposing survey participants to these different contexts will ultimately help us understand societal viewpoints on trust, insights that can help technology companies as well as policymakers increase the protection of users and ensure their trust in technology.