Would you trust a robot more if it exhibited more human-like behaviour? Deception and misleading communicative patterns in user-facing AI systems are among the key problems that undermine the trustworthiness of AI and Autonomous Systems. While "dark patterns" are well understood in conventional interactive systems (e.g., eCommerce platforms), they have not been given sufficient consideration in the context of anthropomorphised robots, particularly as these are becoming commonplace in our homes and workplaces. On the one hand, there is an opportunity to explore the benefits of improved communicative human-likeness; on the other, there is a concern that such human-likeness may lead to problematic disclosure of personal or sensitive information to these systems. It is this fine line that the project aims to explore through practical prototyping and experimentation, in order to develop implications for future responsible and trustworthy human-like robotics.
This project aims to build a demonstrator (a human-like active listener) that provides positive non-verbal backchannels (e.g., nodding) to a human interlocutor, and to test through a user study whether this type of behaviour can lead to overtrust and oversharing. The result will be an open-source system that the TAS community can use to build robots with appropriate human-like behaviours.
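To give a flavour of the kind of behaviour the demonstrator might exhibit, the following is a minimal Python sketch of a pause-triggered backchannel policy. It is illustrative only: the pause threshold, nod probability, and function names are assumptions made for the example, not design decisions of the planned system, which will be shaped through prototyping and the user study.

```python
import random
from typing import Optional

# Illustrative parameters (assumptions for this sketch, not project specifications):
# nod after a pause of at least 0.7 seconds, with some randomness so the
# behaviour does not appear mechanical.
PAUSE_THRESHOLD_S = 0.7
NOD_PROBABILITY = 0.6


def backchannel_policy(silence_duration_s: float) -> Optional[str]:
    """Return a non-verbal backchannel cue ("nod") when the speaker pauses long enough."""
    if silence_duration_s >= PAUSE_THRESHOLD_S and random.random() < NOD_PROBABILITY:
        return "nod"
    return None


def run_demo(pause_samples):
    """Simulate a stream of detected pause lengths and print any triggered cues."""
    for silence in pause_samples:
        cue = backchannel_policy(silence)
        print(f"pause={silence:.2f}s -> {cue or 'no backchannel'}")


if __name__ == "__main__":
    # Hypothetical pause durations (seconds) detected from a speaker's audio stream.
    run_demo([0.2, 0.9, 0.4, 1.3])
```

In a deployed prototype the pause detector would be driven by live audio analysis and the "nod" cue sent to the robot's motion controller; the sketch only shows the decision logic that the study would vary and evaluate.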