The potential harmful impacts associated with the use of AI in Defence cover a wide range of unintended (and sometimes unexpected) negative effects on people and the environment. It is important to note that in Defence there may sometimes be intentional harms; however, these are not the focus of this tool. It is also important to note that it is sometimes necessary to choose between different harms, and that there is often an opportunity cost to inaction: not doing something may well be worse than the harms that result from taking action. That is one of the reasons that decisions in the military world can be so difficult.
Identifying different types of harm. While not exhaustive, types of harm could include:
1. Risk of injury
Physical injury could come about through an overreliance on safety features leading to unnecessary treatment in a medical context, a faulty fail-safe, or a manufacturing process that exposes people to toxins.
Emotional or psychological injury could come about through the distortion of facts, gaslighting, or the manipulation of someone’s behaviour. Identity theft through impersonation, or the misattribution of views or actions, could also cause this type of injury.
2. Denial of consequential services
Opportunity loss could involve discrimination in employment or housing, or the loss of insurance or education opportunities.
Economic loss could involve the denial of access to credit, unfair differential pricing, or exploitation.
3. Infringement on human rights
Dignity loss could come about through dehumanisation, or through public shaming by the exposure of private or sensitive material.
Liberty loss could come about through the use of predictive policing tools, or through the monitoring of social conformity to generate “trustworthiness” scores.
Privacy loss could come about in many different ways. Revealing or exploiting information that an individual did not wish to share, or the inability to have childhood or minor indiscretions forgotten, are just some examples.
Environmental impact can come about through the unfair depletion of resources, wasteful practices, or pollution.
4. Erosion of social & democratic structures
Manipulation can come about through disguising fake information as genuine, or through deliberately seeking to trigger behaviour by exploiting an individual’s known characteristics.
Social detriment can come about through the entrenchment of existing structures that include or benefit some while excluding others: for example, by reinforcing stereotypes, amplifying the power of privilege, or causing the atrophy of certain skills.
This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethics principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed from the earliest stages of a project’s lifecycle onwards. This tool has not been designed to be used outside of this context.
The use of this information does not negate the need for an ethical risk assessment, or for the other processes set out in JSP 936 Part 1 (Dependable AI), the MOD’s policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology and among development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.