This study explores transparency in a command-and-control (C2) context, using a low-fidelity air traffic control game that is real-time, dynamic, and time constrained. Autonomous agent performance, anthropomorphism, and other factors have been a major focus of research on trust in human-autonomy teaming (HAT). We propose that agent predictability may be an important area of investigation: where autonomy is imperfect, increasing its predictability may reduce the incidence of mistrust and disuse. Indeed, we suggest that predictability is a quintessential indicator of agent transparency, which we propose to encapsulate in a predictability-based model of trust. We speculate that cognitive fit theory may have a large role to play in enabling predictability. This has implications for transparency design in self-driving cars, domestic household robots, and other industrial applications where autonomous systems and agents are used.