The MOD AI Principles provide a framework for assessing and managing ethical risks in the application of Artificial Intelligence (AI) systems. These principles are designed to ensure that AI technologies are developed and deployed in a manner that aligns with the MOD's core values of responsibility, transparency, and accountability. Fundamentally, ethical risk assessment encourages teams to consider not just what AI can do, but also what AI should do given our organisational values and ethical principles.
How can the MOD AI Principles help with ethical risk assessment and management?
Five Question Primer
Stakeholder Assessment
Principle-Based Question Sheets
What about urgent capability requirements?
Alternative approaches: what is a structured risk assessment?
1. How can the MOD AI Principles help with ethical risk assessment and management?
The MOD AI Principles form the cornerstone of ensuring AI is responsibly developed and deployed. A structured ethical risk assessment, guided by the MOD AI Principles, is essential for identifying and mitigating potential ethical risks in AI systems. By embedding human-centricity, responsibility, understanding, bias mitigation, and reliability into the risk assessment process, the MOD not only ensures compliance with its ethical commitments but also enhances the trustworthiness and effectiveness of its AI-enabled systems. These assessments, supported by strong leadership and tiered referral levels, enable the MOD to maintain operational excellence while upholding its ethical values.
Considering the ethical risks of AI may overlap with, and is complementary to, other approaches that need to be taken as part of wider risk management. Looking at the whole system (the AI, its models and data, plus the interaction between the user and the system) can help identify new risks as well as offer different perspectives on existing or known risks.
2. Where to start?
It is important not to regard this as simply looking for problems. A useful risk assessment process will be balanced: it will examine the ethical benefits and costs of deploying the AI-enabled system in a particular role or capability, as well as the benefits and costs of not doing so. It will also begin as early as possible, even at the discovery phase of the project, and should be revisited throughout the project lifecycle. First of all, one must consider the ethical risks of carrying out the activity whether or not an AI-enabled system is involved. Once the general risks of the activity are understood, one can then start to look at the specific benefits and risks associated with using AI to deliver or support the activity.
The MoD holds a Use Case Library of AI projects. This can be a very useful place to begin your research stage, as it may well be that many of the ethical considerations arising in your own particular work have been addressed before. You will be able to see the types of ethical concern that relate to particular military functions or uses of AI, as well as what mitigation strategies were deemed appropriate to put into place. This library will grow as more projects are added to it. As it may not be open to the public, talk with your MoD partner to ensure that you can access what you need.
After looking at the library, a helpful starting point for thinking about your own work is the Five Question Primer at Appendix A of JSP 936 Part 2. It is not necessary to answer all of the questions straight away – this is a starting point for you to think about areas where you will need more information or assistance in working out the appropriate way forward. The JSP also provides help and guidance on how to go about finding those answers (you don’t have to do this on your own!). One can think of this as a triage process, with the answers being recorded in an evidence log that will expand as data is collected; a minimal sketch of such a log follows the questions below. The initial questions are:
Q1: What are you trying to achieve?
Briefly describe:
a) The overall purpose of this AI-enabled capability
b) The military effect(s) and/or business function(s) you are aiming to deliver or support
c) Whether this AI-enabled capability is aiming to enhance or replace existing capabilities and/or deliver new capabilities
d) Any human decisions being replaced, supported, or informed
Q2: Why are you using AI?
Briefly describe:
a) The benefits of using AI rather than other approaches, including any existing approaches
b) Whether any alternatives have been considered, and if so what
Q3: Who might be impacted and how?
Briefly describe:
a) Who is likely to be directly and indirectly affected by this AI-enabled capability
b) Who might be impacted across the whole system lifecycle (discover, design, develop, deliver)
c) How they might be positively and negatively impacted, both intentionally and unintentionally
Q4: Where is the capability likely to be used?
Briefly describe the intended context of use:
a) In the military domain (i.e. air, land, maritime, cyber, space): physical characteristics of the environment, mission types, wider workflow that it will be integrated within, other systems that it may need to interact with, user characteristics and time pressures (especially operational urgency, if relevant); or
b) In a business services context: the overarching purpose, objectives, wider workflow that it will be integrated within, other systems that it may need to interact with, and user characteristics
Q5: How might the AI-enabled system fail and what would the likely consequences be?
Briefly describe:
a) Possible unintended consequences caused by human or technical failures
b) Who would be impacted by these consequences
c) How the AI-enabled capability might be misused
d) What the likely impact of not using the capability (disuse) might be
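As a purely illustrative aid, here is a minimal sketch of how primer answers might be recorded in a growing evidence log, assuming a simple Python record per question. The field names (question, answer, evidence, open_actions, last_reviewed) are assumptions for illustration, not an official MOD schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical entry in an evidence log for one primer question.
# Field names are illustrative assumptions, not an official MOD schema.
@dataclass
class EvidenceEntry:
    question: str                 # e.g. "Q1: What are you trying to achieve?"
    answer: str                   # current best answer; may start incomplete
    evidence: list = field(default_factory=list)      # sources gathered so far
    open_actions: list = field(default_factory=list)  # gaps needing follow-up
    last_reviewed: date = field(default_factory=date.today)

# The log starts sparse at the discovery phase and expands as data is collected.
log = [
    EvidenceEntry(
        question="Q1: What are you trying to achieve?",
        answer="Draft: automated triage of maintenance reports.",
        open_actions=["Confirm which human decisions are supported or replaced"],
    ),
]

# Triage view: questions with open actions still need more information.
for entry in log:
    if entry.open_actions:
        print(entry.question, "->", entry.open_actions)
```

The point of the structure is the triage discipline described above: an incomplete answer is recorded with its open actions rather than left blank, so the log shows where help is still needed.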
3. Stakeholder Assessment
AI systems may have multiple stakeholders, depending on what the system is required to do and where it will be operating. Once all of the five Primer Questions above have been answered, it makes sense to start identifying those stakeholders. JSP 936 Part 2 Appendix B has a Stakeholder Identification Tool that can assist in working out what is appropriate to consider for your system. See card: What legal considerations must I be aware of when assessing and managing ethical risk? Appendix B also has a number of useful suggestions for how to engage with different types of stakeholders, depending upon what outcomes you are hoping to achieve.
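As a hedged illustration of stakeholder identification (not the Appendix B tool itself), the sketch below records stakeholders against two dimensions the primer already raises: whether the impact is direct or indirect (Q3a) and the lifecycle phase in which it arises (Q3b). All names and fields are hypothetical.

```python
from dataclasses import dataclass

# Lifecycle phases as listed in Q3b: discover, design, develop, deliver.
PHASES = ("discover", "design", "develop", "deliver")

# Hypothetical stakeholder record; fields mirror Q3a-c, not the Appendix B tool.
@dataclass
class Stakeholder:
    name: str          # e.g. "System operators"
    direct: bool       # directly (True) or indirectly (False) affected
    phases: tuple      # lifecycle phases in which impact arises
    notes: str = ""    # positive/negative, intended/unintended impacts (Q3c)

stakeholders = [
    Stakeholder("System operators", direct=True, phases=("develop", "deliver")),
    Stakeholder("Civilians near area of operations", direct=False,
                phases=("deliver",),
                notes="Possible unintended negative impact; plan engagement"),
]

# Simple check: which stakeholders are touched at each lifecycle phase?
for phase in PHASES:
    touched = [s.name for s in stakeholders if phase in s.phases]
    print(phase, "->", touched)
```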
4. Principle-Based Question Sheets
These very useful one-page guides can be found at Appendix C of JSP 936 Part 2. They cover each of the five AI Ethics Principles, set out the primary requirements for each, and provide links through to useful official policy and documentation. Each sheet also gives you key questions to consider for each area.
5. What about urgent capability requirements?
One of the obvious concerns when working in an area where capability needs to be delivered as soon as possible is that ethical risk assessment can seem an unwanted burden at just the wrong time. The first observation is that there is no need to start from scratch: the Use Case Library may well contain similar work that has already done much of the heavy lifting for you. The MoD recognises that urgent capability requirements may require an amended approach to risk handling and ownership. This does not mean that ethics ceases to be important, however. Doing something stupid faster than your opponent is not a strategic advantage. The earlier you engage with ethical considerations, the easier they are to embed in the conceptual, design and implementation stages of your work. Trying to add them at the later stages of a project, after ignoring them at the earlier ones, will inevitably cause significant problems, uncertainty, delays and therefore costs.
6. Alternative Approaches: what is a structured risk assessment?
The MOD places a strong emphasis on conducting structured ethical risk assessments for AI-enabled systems to ensure responsible and effective implementation. JSP 936 Part 2 suggests a structured framework for approaching your risk assessment. This has been carefully considered, but there are obviously other ways that it could be undertaken. These assessments are a foundational step in aligning future AI deployment with the MOD AI Principles and mitigating potential ethical risks throughout the lifecycle of a project. Any well-thought-through risk assessment will include the following features:
A. Must be ethically focused: A structured risk assessment explicitly integrates ethical considerations to address potential harms, biases, and impacts. It aligns with the MOD AI Ethical Principles (Human-Centricity, Responsibility, Understanding, Bias and Harm Mitigation, and Reliability) to ensure that risks are managed in a way that prioritises fairness, transparency, and human oversight.
B. Must be conducted at key points in the project lifecycle: A structured risk assessment is not a one-off exercise. It should be carried out:
- At project initiation: To establish ethical and operational foundations. At this stage, Responsibility ensures that oversight roles are defined early, Human-Centricity evaluates the system's potential impact on users, and Bias and Harm Mitigation identifies initial risks of bias or harm in design.
- When scope or outputs change: To reassess risks as new factors emerge.
- Ongoing reviews: To address latent risks as systems evolve and adapt. See card: How often are ethical risk assessments required?
C. Risk ratings and subsequent approvals given: Each AI use case is assigned a risk rating: critical, major, moderate, or minor. How such ratings are calculated is set out in JSP 892. There is also further guidance emerging, so talk with your MoD partner about this so that they can advise directly or connect you with an SME who can assist. The ultimate assessment of the level of risk determines the level of oversight and approval required (a minimal sketch of this mapping follows the list below):
- Moderate or minor: Managed at the business or team level.
- Major: Requires Defence-level oversight, such as Joint Requirements Oversight Committee review.
- Critical: Reviewed by Ministers or senior Defence officials.
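To make the tiered referral levels concrete, here is a minimal sketch of how a team might map a risk rating to the referral level described above. The rating categories mirror the four in the text; the function name and return strings are illustrative assumptions, not an implementation of the JSP 892 calculation.

```python
from enum import Enum

class RiskRating(Enum):
    MINOR = 1
    MODERATE = 2
    MAJOR = 3
    CRITICAL = 4

# Illustrative mapping of risk rating to referral level, following the tiers
# described above; not an official implementation of the JSP 892 calculation.
def referral_level(rating: RiskRating) -> str:
    if rating in (RiskRating.MINOR, RiskRating.MODERATE):
        return "Managed at the business or team level"
    if rating is RiskRating.MAJOR:
        return "Defence-level oversight (e.g. Joint Requirements Oversight Committee)"
    return "Reviewed by Ministers or senior Defence officials"

print(referral_level(RiskRating.MAJOR))
```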
D. Must consider trade-offs and operational context: Structured assessments consider trade-offs between ethical risks and operational effectiveness. For example, an AI Ethics Senior Accountable Officer (AI ESAO) evaluates the implications of military requirements while ensuring risks are mitigated.
E. Requires mechanisms for special applications: Specific applications, such as those involving kinetic effects or novel and contentious uses, require heightened scrutiny. These are referred to top-level Defence bodies, such as the Defence AI and Autonomy Unit (DAU), for further review.
The guidance provided by JSP 936 Part 2 gives you a good framework for thinking through your structured approach to risk assessment, including where to start and what to do in response to different types of questions and answers.