Case Studies
Case Study: Armed Forces Recruitment – Ensuring Privacy and Minimising Bias
Filed under:
Human Centricity
Bias and Harm Mitigation

  1.  What is the AI system for? 
  2.  From an ethical risk assessment perspective, where did you start? 
  3.  What did you find out? 
  4.  What did you do about it? 
  5.  What did you do next to ensure the risks were appropriately managed longer term? 

1. What is the AI system for?
An AI-enabled system is being developed to make the Armed Forces recruitment process more efficient. The tool screens applicants for a range of roles by analysing factors such as physical fitness, cognitive ability and psychological resilience, so that people can be matched to requirements at an early stage.
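As a rough illustration of the screening idea, the sketch below matches an applicant's factor scores against per-role minimum requirements. The role names, factors and thresholds are hypothetical assumptions for illustration only; the case study does not describe the real system's design.

```python
# Hypothetical factor/requirement matching. All names, factors and
# thresholds are illustrative assumptions, not the real system's design.
ROLE_REQUIREMENTS = {
    "infantry": {"fitness": 0.8, "cognitive": 0.5, "resilience": 0.7},
    "analyst":  {"fitness": 0.3, "cognitive": 0.9, "resilience": 0.5},
}

def match_roles(applicant_scores: dict) -> list:
    """Return the roles for which the applicant meets every minimum
    requirement level."""
    return [
        role for role, reqs in ROLE_REQUIREMENTS.items()
        if all(applicant_scores.get(factor, 0.0) >= level
               for factor, level in reqs.items())
    ]

print(match_roles({"fitness": 0.6, "cognitive": 0.95, "resilience": 0.6}))
# -> ['analyst']
```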
  
 
2. From an ethical risk assessment perspective, where did you start?  
The team started with the Dstl AI Ethical Risk Assessment Toolkit (JSP 936 Part 2). This suggests starting with Dstl's Use Case Library (available via MOD or Dstl partners) to see what similar tools have already been developed and what challenges have already been identified; various HR-related products provided a useful starting point. The team then worked through the 5-Question Primer and its What, Why, Who, Where and How questions. Stakeholders likely to be affected by the system throughout its lifecycle were identified, and the team then used the Principle-Based Question Sheets to ensure that each of the Responsible AI ethics principles had been explicitly considered. See the cards: How can the MOD AI Principles help me assess and manage ethical risk? and Who or what should be considered stakeholders for AI-enabled systems?
 
  
3. What did you find out?
Unsurprisingly, while each of the Responsible AI principles contributed useful insights into the development and operation of the AI-enabled system, the main areas of concern were Bias and Harm Mitigation, and the consent processes covered by the Understanding principle.
As part of the recruitment process, individuals are required to share their social media profiles. Just as with the information provided by applicants (Decision-subjects), the use of any sensitive or personal data in training the AI system (Data-subjects) must comply with privacy laws and ethical guidelines. Applicants' data should be collected, stored and analysed with their informed consent, and its use must be directly relevant to the recruitment process. It is not clear that most applicants would be aware of how much information about them is already in the public domain, nor of what additional information they are granting permission to share.
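To make the consent point concrete, the following is a minimal Python sketch of purpose-specific consent gating: a record is only usable for training if the applicant has consented to that specific purpose, and social-media data is stripped unless it was separately consented to. The record structure, field names and purpose labels are all assumptions for illustration, not the system's actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical purpose labels; real labels would come from the
# consent documentation agreed with data/legal specialists.
RECRUITMENT_PURPOSE = "recruitment-screening"
SOCIAL_MEDIA_PURPOSE = "social-media-analysis"

@dataclass
class ApplicantRecord:
    applicant_id: str
    fitness_score: float
    cognitive_score: float
    consented_purposes: set = field(default_factory=set)
    social_media_fields: dict = field(default_factory=dict)

def usable_for_training(record):
    """Keep only data the applicant has explicitly consented to for
    this specific purpose; exclude the record entirely otherwise."""
    if RECRUITMENT_PURPOSE not in record.consented_purposes:
        return None                      # no purpose-specific consent
    if SOCIAL_MEDIA_PURPOSE not in record.consented_purposes:
        record.social_media_fields = {}  # strip un-consented data
    return record

raw_records = [
    ApplicantRecord("A1", 0.9, 0.8, {RECRUITMENT_PURPOSE}),
    ApplicantRecord("A2", 0.7, 0.9),     # never consented: excluded
]
training_set = [r for r in map(usable_for_training, raw_records) if r]
```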
 
 
4. What did you do about it?
It was important to strike the right balance between the amount and type of information being sought and what was actually required for the system to make appropriate recommendations. SME input was sought from psychologists and from data protection and legal specialists. From this, it was understood that explaining why the information was being requested was a key element of ensuring informed consent, while articulating this in a way that would not deter potential applicants. In addition to advising on the informed consent process, the SMEs advised on how potentially intrusive data practices that disproportionately impact certain groups could be eliminated, or at least minimised, as sketched below.
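One common way to surface intrusive or disproportionately impactful data practices is to audit candidate features for strong correlation with a protected or socioeconomic attribute, so that near-proxies can be reviewed and removed. The sketch below flags such features with a simple correlation test; the feature names, data and threshold are invented for illustration and are not from the case study.

```python
from statistics import correlation  # Python 3.10+

# Illustrative data only: one column per candidate feature, plus a
# protected/socioeconomic group attribute encoded numerically.
features = {
    "postcode_band": [1, 1, 2, 3, 3, 4, 4, 5],
    "fitness_score": [7, 6, 8, 5, 7, 6, 8, 7],
    "school_type":   [0, 0, 0, 1, 1, 1, 1, 1],
}
socioeconomic_group = [0, 0, 0, 1, 1, 1, 1, 1]

THRESHOLD = 0.8  # assumed trigger for flagging near-proxy features

for name, values in features.items():
    r = correlation(values, socioeconomic_group)
    if abs(r) >= THRESHOLD:
        print(f"review/remove '{name}': |r| = {abs(r):.2f} "
              f"with the protected attribute")
```

On this toy data, "school_type" and "postcode_band" are flagged as near-proxies while "fitness_score" is not; in practice such a test would feed a human review rather than automatic feature deletion.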
The AI system is required to treat all applicants equitably, ensuring that individuals are not disadvantaged on the basis of socioeconomic status, minority identity or non-traditional background. However, testing demonstrated that the system was systematically favouring candidates from certain socioeconomic backgrounds and rejecting those from others, particularly minorities and individuals with non-traditional backgrounds. Further analysis indicated that the bias stemmed from the training data, which reflected historical recruitment patterns.
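Bias of this kind is often surfaced with a selection-rate comparison such as the widely used "four-fifths" heuristic, under which a group whose selection rate falls below 80% of the highest-rated group's is flagged for potential adverse impact. The sketch below is illustrative only: the groups and outcomes are invented, and this is not presented as the actual test the team ran.

```python
from collections import Counter

# Invented outcomes: (socioeconomic group, shortlisted?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected, totals = Counter(), Counter()
for group, shortlisted in outcomes:
    totals[group] += 1
    selected[group] += shortlisted

rates = {g: selected[g] / totals[g] for g in totals}
benchmark = max(rates.values())
for group, rate in rates.items():
    ratio = rate / benchmark
    status = "OK" if ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```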
 
 
5. What did you do next to ensure the risks were appropriately managed longer term?
In partnership with key stakeholders, including independent assessors (Examiners), a robust monitoring and oversight process was implemented to ensure that the system operates as required.
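As a sketch of what such ongoing monitoring might look like in practice, the following checks each reporting period's per-group selection rates against a disparity threshold and escalates to human reviewers when it is breached. The threshold, group labels and logging behaviour are assumptions for illustration, not MOD policy or the team's actual process.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("recruitment-ai-monitor")

DISPARITY_THRESHOLD = 0.8  # assumed review trigger, not a MOD standard

def review_batch(selection_rates: dict) -> bool:
    """Compare per-group selection rates for one reporting period and
    escalate to the human oversight board if the worst disparity ratio
    falls below the threshold. Returns True if the batch passes."""
    benchmark = max(selection_rates.values())
    worst = min(rate / benchmark for rate in selection_rates.values())
    if worst < DISPARITY_THRESHOLD:
        log.warning("disparity ratio %.2f below %.2f: escalate for review",
                    worst, DISPARITY_THRESHOLD)
        return False
    log.info("batch within tolerance (ratio %.2f)", worst)
    return True

# Example: run against one reporting period's decisions.
review_batch({"group_a": 0.62, "group_b": 0.44})  # 0.71 -> escalated
```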

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI research project. The intent is for this tool to help generate discussion between project teams involved in the development of AI tools and techniques within MOD. It is hoped that this will result in increased awareness of the MOD's AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed from the earliest stages of a project's lifecycle and throughout. This tool has not been designed to be used outside of this context.
The use of this information does not negate the need for an ethical risk assessment, or for the other processes set out in JSP 936 Part 1 (Dependable AI), the MOD's policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology, and among development teams within academia and industry, and demonstrates our commitment to the practical implementation of our AI ethics principles.