What is the AI system for?
From an ethical risk assessment perspective, where did you start?
What did you find out?
What did you do about it?
What did you do next to ensure the risks were appropriately managed longer term?
1. What is it for?
A Closed Loop Adaptive System incorporating AI has been developed to improve mission success for military divers working in extreme situations such as special operations. The system monitors the diver’s performance and physiology to ensure that their ability to make decisions is not compromised. The system can warn the user, or a remote observer, if the decision-making of the diver becomes compromised. In extreme cases this technology could save lives as well as increase mission success rates.
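The case study does not describe how the monitoring works internally. Purely as an illustration, the sketch below shows the kind of threshold-based check such a system might run on each physiological reading; every name, field, and threshold here is a hypothetical assumption, not clinical guidance or the actual design.

```python
from dataclasses import dataclass

# Hypothetical physiological reading; a real system would use validated
# medical sensors and clinically derived thresholds.
@dataclass
class DiverVitals:
    heart_rate_bpm: float
    blood_o2_pct: float
    reaction_time_ms: float  # crude proxy for cognitive performance

# Illustrative thresholds only.
O2_WARN_PCT = 90.0
REACTION_WARN_MS = 600.0

def assess(vitals: DiverVitals) -> str:
    """Classify the diver's state; a WARN result would be surfaced to the
    diver and, optionally, to a remote human observer."""
    if vitals.blood_o2_pct < O2_WARN_PCT or vitals.reaction_time_ms > REACTION_WARN_MS:
        return "WARN"  # decision-making may be compromised
    return "OK"

if __name__ == "__main__":
    print(assess(DiverVitals(heart_rate_bpm=88, blood_o2_pct=87.5, reaction_time_ms=710)))
```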
2. From an ethical risk assessment perspective, where did you start?
The team started with the Dstl AI Ethical Risk Assessment Toolkit (JSP 936 Part 2). This suggests starting with Dstl's Use Case Library to see what similar tools had already been developed and the challenges already identified (available via MoD or Dstl partners). Various HR-related products provided a useful starting point, as did the introduction of collision avoidance systems in fast jets. This was followed by working through the 5-Question Primer and its What, Why, Who, Where and How questions. Stakeholders likely to be affected by the system throughout its lifecycle were identified, and the team then considered the Principle-Based Question Sheets to ensure that each of the Responsible AI ethics principles had been explicitly considered.
See cards: How can the MOD AI Principles help me assess and manage ethical risk? and Who or what should be considered stakeholders for AI-enabled systems?.
3. What did you find out?
The initial groundwork raised a number of questions whose answers would, in turn, raise further challenges. For example, should the system ever take decisions by itself? If the vital signs of the diver went below a certain point, should the system initiate a surfacing protocol or warn other operators in the water? If it is decided that even an emergency response should only be initiated by a human rather than by the system, who should the humans in the decision loop be?
See card: What does having a person “in the loop” actually mean?. Could/should the monitoring party override the autonomy and consent of an individual in a life-threatening situation if they have not been specifically told not to?
See card: What is the difference between "meaningful human control" and "appropriate human control"?. If so, at what point? User trust was a key point that came out of the stakeholder engagement, along with safety.
See card: What does “trust” mean in relation to AI systems? How much trust is enough?. This was clear from the experience of collision avoidance systems in aircraft. Despite their clear safety benefits, they were slow to be introduced because pilots were loath to surrender their control over situations.
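These questions translate directly into design decisions about where authority sits in the control loop. As a hypothetical sketch only (the source specifies the questions, not any implementation), the level of authority granted to the system in an emergency could be made an explicit, pre-agreed parameter rather than an implementation accident; all names below are assumptions.

```python
from enum import Enum

class EmergencyAuthority(Enum):
    SYSTEM_MAY_ACT = "system_may_act"  # system can initiate surfacing itself
    HUMAN_ONLY = "human_only"          # system may only alert human operators

def on_vitals_breach(authority: EmergencyAuthority, notify, surface) -> None:
    """Route an emergency response according to the agreed authority level.

    notify  -- callable that alerts the diver and/or remote observers
    surface -- callable that would initiate an automated surfacing protocol
    """
    notify("Vital signs below agreed threshold")
    if authority is EmergencyAuthority.SYSTEM_MAY_ACT:
        surface()  # only permitted if explicitly agreed before the operation

# Example wiring with stand-in actions:
if __name__ == "__main__":
    on_vitals_breach(
        EmergencyAuthority.HUMAN_ONLY,
        notify=lambda msg: print(f"ALERT: {msg}"),
        surface=lambda: print("Initiating surfacing protocol"),
    )
```

Making the authority level a visible configuration choice, rather than buried behaviour, is one way such a design could keep the "who decides" question answerable after deployment.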
4. What did you do about it?
To maintain user trust in the system, levels of consent would likely need to be agreed prior to an operation and would need to remain an ongoing consideration throughout the decision-making process. Consent in this case needs to be a dynamic principle that extends beyond the initial decision to use the device and remains a factor throughout its application.
See card: What does "informed consent" mean?. Understanding what that would mean in practice meant further engagement with SMEs, including medical specialists as well as legal specialists.
To work with the proposed system, a diver would first need to understand and agree prior to an operation whether monitoring should be conducted remotely by a human, singularly by an AI, or jointly conducted by human machine teaming.
See card: How much understanding is sufficient? Who needs to know what?. Additionally, the diver would also need to consent to whether, if at any point they were deemed incapable of decision-making due to complete or partial loss of cognitive ability, the monitoring party could step in and override decision-making on behalf of the diver. Consent, autonomy, and privacy would therefore need to be agreed and understood before the start of an operation.
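The source does not prescribe how these pre-operation agreements would be captured. As an illustrative sketch under that caveat, a consent record might pin down the agreed monitoring mode and override permission explicitly, so they can be checked at run time; every field and name here is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class MonitoringMode(Enum):
    REMOTE_HUMAN = "remote_human"            # monitored remotely by a human
    AI_ONLY = "ai_only"                      # monitored singularly by an AI
    HUMAN_MACHINE_TEAM = "human_machine_team"  # joint human-machine teaming

@dataclass(frozen=True)
class ConsentRecord:
    """Pre-operation consent captured per diver, per operation."""
    diver_id: str
    operation_id: str
    monitoring_mode: MonitoringMode
    override_permitted: bool  # may the monitoring party override the diver?
    recorded_at_utc: str      # when consent was given

consent = ConsentRecord(
    diver_id="D-042",
    operation_id="OP-17",
    monitoring_mode=MonitoringMode.HUMAN_MACHINE_TEAM,
    override_permitted=True,
    recorded_at_utc="2024-05-01T06:30:00Z",
)
```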
5. What did you do next to ensure the risks were appropriately managed longer term?
Complacency was identified as one of the biggest factors that could undermine the ongoing dynamic consent process: once people are routinely using the equipment, consent can quickly turn into an assumed element of the process. Building a physical check into system initiation whenever the equipment was matched with a new user or a new session was seen as one way of ensuring that consent was considered at appropriate points. In addition, a monitoring process that randomly samples the consent confirmations could provide input to an oversight process.
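The source describes these controls but not their implementation. A minimal sketch, assuming a simple in-memory store and a caller-supplied confirmation prompt (all names and the sampling rate are hypothetical), might look like this.

```python
import random

# Hypothetical store of confirmed (diver_id, session_id) pairs.
_confirmed: set[tuple[str, str]] = set()

def initiate_session(diver_id: str, session_id: str, confirm) -> bool:
    """Refuse to start a session until the user physically re-confirms consent.

    confirm -- callable triggering the physical check (e.g. a button press);
               returns True only on an explicit, fresh confirmation.
    """
    if not confirm(diver_id, session_id):
        return False  # no fresh consent, no session
    _confirmed.add((diver_id, session_id))
    return True

def sample_for_oversight(rate: float = 0.1, rng=random.random):
    """Randomly sample confirmed sessions as input to an oversight process."""
    return [pair for pair in sorted(_confirmed) if rng() < rate]

if __name__ == "__main__":
    initiate_session("D-042", "S-001", confirm=lambda d, s: True)
    print(sample_for_oversight(rate=1.0))  # sample everything for the demo
```

Gating session start on a fresh physical confirmation, rather than a stored flag, is the point of the sketch: it forces consent to be re-expressed each time rather than assumed.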