Trust is commonly defined as "a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behaviour of another" (Rousseau et al., 1998).
This definition highlights two key elements:
- The willingness to be vulnerable to the other party.
- The belief that the other party will act in a reliable, ethical, and predictable manner.

Trust is essential in interpersonal, organisational, and systemic relationships, and it develops through consistent and transparent behaviour over time.
Roff and Danks (2018) distinguish two types of trust relevant to autonomous systems:
- Predictability-Based Trust: Relies on the consistent and reliable behaviour of the system.
- Interpersonal Trust: Involves understanding the system's underlying values, beliefs, and dispositions.
Roff and Danks argue that while predictability-based trust may be appropriate for simple autonomous tools, the unpredictable and dynamic nature of warfare demands a different kind of autonomy from AI-enabled systems, and therefore a different kind of trust between operator and system.
Predictable autonomy operates according to predefined rules and behaviours, producing consistent, reliable actions in well-defined environments. This makes the system's actions foreseeable, which can facilitate trust. Adaptive autonomy, in contrast, refers to systems that learn and evolve their behaviours in response to dynamic, unpredictable environments. This flexibility comes at a cost: the inherent unpredictability of adaptive systems complicates the establishment of trust, because operators may struggle to anticipate their actions. While predictable autonomy therefore supports a degree of trust through its consistency, adaptive autonomy poses significant challenges for fostering the deep interpersonal trust necessary for effective human-machine collaboration in military contexts.
They contend that effective use of autonomous weapons systems (AWS), or of any system exhibiting adaptive autonomy, requires interpersonal trust, which is difficult to develop because current military acquisition, training, and deployment practices do not foster a deep understanding of these systems.
To address this challenge, Roff and Danks propose three changes to then-current practice:
1. Enhanced training programs: Develop comprehensive training that allows operators to understand the decision-making processes of AWS.
2. Transparent system design: Ensure AWS are designed with transparency to make their operations understandable to human operators.
3. Ongoing evaluation: Implement continuous assessment protocols to monitor AWS behaviour and maintain trustworthiness.
The MOD's JSP 936 directive addresses each of these proposals:

1. Enhanced training programs: JSP 936 mandates that MOD personnel involved with AI systems are suitably qualified and experienced, ensuring they understand system behaviours and limitations. (Card 3:28 Training versus Education: What is the difference?)
2. Transparent system design: The directive emphasises transparency, explainability, and interpretability in AI system design, enabling operators to comprehend and predict system actions. This transparency is crucial for developing interpersonal trust, as operators can better understand the decision-making processes of AWS. (Card 3:25 How Much Understanding is enough?)
3. Ongoing evaluation: JSP 936 requires continuous assessment of AI systems throughout their lifecycle, including monitoring performance and managing risks. This ongoing evaluation ensures that AWS operate reliably and as intended, addressing concerns about unpredictability and fostering trust. (Card 3:45 How do we decide if an AI system is reliable enough?)
Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3), 393–404.
Roff, H. M., & Danks, D. (2018). "Trust but verify": The difficulty of trusting autonomous weapons systems. Journal of Military Ethics, 17(1), 2–20. https://doi.org/10.1080/15027570.2018.1481907