What is the AI system for?
From an ethical risk assessment perspective, where did you start?
What did you find out?
What did you do about it?
What did you do next to ensure the risks were appropriately managed longer term?
1. What is it for?
A military capability requires a helmet to be worn that links the user to an AI system via sensors on the inside of the helmet. This allows the user to interact efficiently with the AI system to enable complex synchronisation of multiple military assets.
2. From an ethical risk assessment perspective, where did you start?
The team started by consulting the Dstl AI Ethical Risk Assessment Toolkit (JSP 936 Part 2). This suggests starting with Dstl's Use Case Library (available via MoD or Dstl partners), which has multiple examples of such Closed Loop Adaptive System (CLAS) technology being used in a range of military applications. This gave some useful insight into recurring themes pertinent to all such technologies, as well as some good pointers towards more specific ethical concerns relating to human machine teaming. The 5-Question Primer and its What, Why, Who, Where and How questions expanded those considerations, establishing that the technology made the management of a complex defence system in a combat situation possible by harnessing a trained operator's awareness and judgement with a neural control interface. Stakeholders likely to interact with, or be affected by, the system throughout its lifecycle were identified. Finally, the team considered the Principle-Based Question Sheets to ensure that each of the Responsible AI ethics principles had been explicitly considered.
3. What did you find out?
One of the early observations was that although this is a device that interacts with human physiology, it is not invasive and is designed for and intended to be used as a performance enhancement device rather than a medical device. This means that it is very unlikely to fall within the remit of the UK's Medicines and Healthcare products Regulatory Agency (MHRA).
See card: What legal considerations must I be aware of when assessing and managing ethical risk?
Although the device interacts with and responds to the user's neurological and physical responses, it is a closed system, and the data generated is not accessible to any outside party. Some anonymised data will be captured and stored for training and auditing purposes, but that data is only explicable when understood at the systemic rather than the individual level. This means that privacy concerns were limited. See card: What do we mean by "harms" and what are they?
Appropriate understanding of the user was clearly affected by the way that information was presented. See card: How much understanding is sufficient? Who needs to know what? There was clearly a risk of sensory or cognitive overload (harm). This then had an impact on the way information was responded to and interacted with, demonstrating a link to appropriate control. See card: What is the difference between "meaningful human control" and "appropriate human control"?
Much more problematic was a potential equity issue that arose almost as an afterthought – it turns out that certain skin tones/hair colours/textures are incompatible with the technology due to the sensors in the interface. See card: How can we take into account "human diversity"? The materials employed currently have no alternatives.
4. What did you do about it?
The issue of how to present information safely and appropriately in this context required engaging with SMEs. This input, combined with testing, helped determine what the optimum presentation was, and how to combine different types of sensory input (visual, neural, haptic etc.) in the most effective way without impairing the user in any way.
An appropriate training and information package was developed for operators and training development coordinators (creators) to ensure that people were fully prepared to use the system.
The equity question was harder to address. The particular skin tones and hair colours were not related to a legally protected characteristic (i.e. an ethnicity); they instead affected those with "ephelides" (skin freckles) combined with a "mutation in the MC1R gene" (red or ginger hair). Because the testers (creators) quickly determined that the system would not work with people who presented with this combination of characteristics, such people had largely been missing from the pool of data subjects. The lack of legally recognised discrimination in the early test samples meant that no-one had raised it as an issue.
While the military necessity argument was seen to trump the equity concern in terms of adopting the new technology (which was deemed to be of extreme importance to make available operationally as quickly as possible), SMEs and stakeholders in the military were consulted to ensure that those excluded from carrying out this military role due to physical characteristics were not disadvantaged in career terms.
5. What did you do next to ensure the risks were appropriately managed longer term?
In addition to organisational safeguards being introduced to ensure that promotion opportunities were not being unfairly limited and individuals were not being inadvertently disenfranchised, examiners were required to sample organisational data over time to confirm that these safeguards were working and that overall organisational health was being maintained. Operators are to be assessed at fixed points to ensure that they are managing the sensory and cognitive load safely. This information will be fed back to medical and training development coordinators to ensure that the best support package is maintained.