Case Study: Digital Dog Tags – Security and Ensuring Sufficient Trust
Filed under: Understanding, Reliability

  1.  What is the AI system for? 
  2.  From an ethical risk assessment perspective, where did you start? 
  3.  What did you find out? 
  4.  What did you do about it? 
  5.  What did you do next to ensure the risks were appropriately managed longer term? 

 

1. What is the AI system for?

To improve medical situational awareness, “digital dog tags” have been proposed for service personnel, providing an easily accessible medical record that accompanies the individual at all times. This would greatly assist emergency medics, who normally have to make rapid decisions with imperfect information, and should increase diagnostic and treatment efficacy.


2. From an ethical risk assessment perspective, where did you start? 

The Use Case Library (available via MoD or Dstl partners) provided a useful starting point for seeing where similar technologies had been employed elsewhere and which ethical challenges and mitigations had previously been proposed and found acceptable. The 5-Question Primer and its What, Why, Who, Where and How questions from JSP 936 Part 2 expanded those considerations, establishing that the technology works by monitoring each individual’s psychophysiological biometric signature, thereby determining what “normal” is for that individual in different environmental contexts, including high-stress situations. As a Closed Loop Adaptive System (CLAS), the device is intended to record the neurological and physical responses of the user locally. This information feeds an AI system that can then indicate whether the current symptoms are within normal parameters given the environmental factors. Combined with the traditional medical record, this provides a very valuable diagnostic capability, increasing treatment effectiveness even in field conditions.

Stakeholders were identified, with Defence Medical Services unsurprisingly being a key institution to engage, along with Dstl scientists and experts in medical, legal and regulatory compliance. Finally, the team worked through the Principle-Based Question Sheets to ensure that each of the Responsible AI ethics principles had been explicitly considered. These discussions focused on Security and broader Reliability issues, as well as GDPR-related aspects of data privacy relating to both Human Centricity and Responsibility.
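The description above is essentially a per-individual baseline model: the device learns what “normal” looks like for one person in a given environmental context, and the AI flags readings that fall outside that range. The sketch below illustrates the idea only; the function names, fields and z-score threshold are hypothetical assumptions and are not drawn from the actual CLAS design.

# Illustrative sketch of a per-individual baseline check, as described above.
# All names, fields and the z-score threshold are hypothetical assumptions.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Baseline:
    """Summary of one biometric signal for one individual in one context."""
    mu: float
    sigma: float

def fit_baseline(readings: list[float]) -> Baseline:
    """Learn what 'normal' is for this individual from historical readings."""
    return Baseline(mu=mean(readings), sigma=stdev(readings))

def within_normal(reading: float, baseline: Baseline, z_limit: float = 3.0) -> bool:
    """Report whether the current reading sits inside this individual's normal range."""
    if baseline.sigma == 0:
        return reading == baseline.mu
    return abs(reading - baseline.mu) / baseline.sigma <= z_limit

# Example: heart rate recorded for one individual in a high-stress context.
history = [92.0, 98.0, 104.0, 110.0, 101.0, 97.0]
print(within_normal(135.0, fit_baseline(history)))  # False: outside this person's normal range

In practice the device would maintain a separate baseline per signal and per environmental context, which is what allows it to distinguish "abnormal for this individual" from readings that merely look extreme in absolute terms.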


3. What did you find out?

The Use Case Library helped the team to quickly identify security as a key ethical concern, given the private nature of the information handled by the system. This linked strongly with the questions prompted by the Principle-Based Question Sheets, which focused on Security and broader Reliability issues, as well as GDPR-related aspects of data privacy relating to both Human Centricity and Responsibility. See cards: Measuring Security: how does one decide if an AI system is “suitably” secure? and, What does GDPR mean for my AI-enabled system?.

It was essential that the data generated not be accessible to any outside party apart from the medics dealing with the injured patient. The risk of personal or sensitive information being accessed by a malevolent third party is clearly linked to the safety of the system and those using it, and to the trust that would be required for the technology to be adopted successfully. However, Dstl scientists were able to explain that, because of the way the CLAS functions, the risk of data breaches was actually fairly limited. The actual medical record is not stored on the device; the device holds only a link to the central record. Some anonymised data will be captured and stored for training and auditing purposes, but the neurological and physiological data generated is only explicable when understood at the systemic rather than the individual level. This means that privacy concerns for many aspects of the device were in fact limited.
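Because the device holds only a reference to the central record, together with de-identified telemetry, the information exposed by a lost or compromised tag is limited. The sketch below illustrates that data-minimisation pattern under stated assumptions; the structures, field names and lookup function are hypothetical, not the real design.

# Illustrative sketch of the data-minimisation pattern described above.
# Structures, field names and the lookup call are hypothetical assumptions.
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class DeviceRecord:
    """What the dog tag itself holds: a pointer to the central record, never the record."""
    record_link: str  # opaque reference to the central medical record
    anonymised_samples: list[dict] = field(default_factory=list)  # de-identified telemetry for training/audit

    def add_sample(self, heart_rate: float, skin_temp: float, context: str) -> None:
        """Store a de-identified sample; no identity fields are ever written to the device."""
        self.anonymised_samples.append(
            {"heart_rate": heart_rate, "skin_temp": skin_temp, "context": context}
        )

def fetch_central_record(record_link: str) -> dict:
    """Stand-in for the access-controlled lookup a medic's equipment would perform."""
    return {"link": record_link, "note": "retrieved only by authorised medical equipment"}

# Example: a compromised device exposes a pointer and de-identified samples, not a medical history.
tag = DeviceRecord(record_link=f"medrec://{uuid4()}")
tag.add_sample(heart_rate=118.0, skin_temp=37.4, context="high_stress")
print(tag.record_link)                        # only an opaque pointer is stored on the tag
print(fetch_central_record(tag.record_link))  # the full history stays behind central access control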


4. What did you do about it?

The device needs to be robust in its security features, but also accessible when required by medical personnel with the appropriate equipment. This requires extensive testing in both routine and hostile environments to ensure there are no unanticipated effects and that the system can be relied upon even in unfavourable conditions. See cards: Measuring Reliability: how do we decide if an AI system is “suitably” reliable? and, Measuring Robustness: how do we decide if an AI system is “suitably” robust?. It was also realised that communication would be key to a successful introduction of the tool: explaining clearly and effectively how the device works is essential to building the trust required for routine deployment. An appropriate informed consent process needs to be developed that ensures personal autonomy is respected. See card: What does "informed consent" mean?.


5. What did you do next to ensure the risks were appropriately managed longer term?

Similar projects have demonstrated that once the novelty wears off, this type of system can quickly become entirely normalised. It is therefore important to ensure that the consent process does not simply become a tick-box exercise.

Once the system is deployed operationally, longitudinal data will need to be collected to confirm that it performs reliably under diverse and unpredictable conditions. That long-term evidence can demonstrate appropriate robustness in the face of changing operational realities and the changing character of war. See card: Why do good people do bad things? What can we do about it?.

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or other processes set out in the Dependable AI JSP 936 Part 1, the MOD's policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology and development teams within academia and industry, and demonstrates our commitment to the practical implementation of our AI ethics principles.