Towards Understanding the Cognitive Aspects of Transparency in Human-Autonomy Teaming

This study explores transparency in a command and control (C2) context, using a low-fidelity air traffic control game that is real-time, dynamic, and time-constrained. Autonomous agent performance, anthropomorphism, and other factors have been a major focus in studying trust in human-autonomy teaming (HAT). We propose that agent predictability may be an important area of investigation. Where autonomy is imperfect, increasing its predictability may reduce the incidence of mistrust and disuse. Indeed, we suggest that predictability is a quintessential indicator of agent transparency, which we propose to encapsulate in a model of trust based on predictability. We speculate that cognitive fit, and cognitive fit theory, may have a large role to play in enabling predictability. This has implications for transparency design in self-driving cars, domestic household robots, and other industrial applications where autonomous systems and agents are used.
