The UK's AI Principles
How often are ethical risk assessments required?
Filed under:
Human Centricity
Responsibility
Understanding
Bias and Harm Mitigation
Reliability

Ethical risk assessments are not a one-off activity; they are required at project initiation, when material changes occur, and as part of ongoing review processes. By integrating these assessments throughout the AI lifecycle and aligning them with the MOD’s AI Ethical Principles, Defence ensures AI-enabled systems are safe, effective, and ethically sound in dynamic operational and regulatory environments.
1. At project initiation 
2. When something significant changes  
3. As part of ongoing review 
 
1. At project initiation: Ethical risk assessments are essential at the beginning of a project to embed ethical considerations from the outset. This ensures AI systems are developed with a robust foundation of accountability, transparency, and alignment with MOD values. Alignment with the Ethics Principles at this stage is key, as it establishes ethical, legal, and operational benchmarks before development begins. For example:
  • Responsibility: Clearly defines oversight roles and accountability mechanisms early in the project.
  • Human centricity: Evaluates potential impacts on users and ensures systems enhance, rather than undermine, human control.
  • Bias and harm mitigation: Identifies and addresses risks of bias or harm in system design and data usage.
 
 
2. When something significant changes: If the scope, operational environment, or functionality of the AI system changes significantly, the risk profile must be reassessed. This allows the MOD to address emerging risks, such as unintended system behaviour, evolving biases, or impacts on operational contexts. At this crucial point, we must adapt the ethical risk framework to address new challenges and maintain alignment with operational needs. Examples of material changes include: a shift in the AI’s application from a non-operational to an operational setting; updates that significantly alter system functionality or introduce new learning capabilities; or deployment in a new operational environment with differing legal or ethical implications. Re-alignment with the Ethics Principles at this stage is key. For example: 
  • Reliability: Confirms that the system remains robust and adaptable to changes without compromising safety or performance.
  • Understanding: Ensures that new or modified system functionalities are interpretable and transparent to relevant stakeholders.
 
 
3. As part of ongoing review: Ongoing reviews are crucial to managing latent risks, addressing changes in system behaviour, and ensuring continued compliance with MOD standards. They provide continuous oversight to detect and address ethical risks throughout the lifecycle of the system. This may include monitoring, reassessment, or risk escalation. Ongoing alignment with the Ethics Principles is key. For example: 
  • Human centricity: Keeps human operators informed and empowered to intervene if the system operates outside acceptable ethical bounds.
  • Responsibility: Ensures accountability is maintained throughout the lifecycle, with evidence-based decisions recorded and traceable.
  • Bias and harm mitigation: Proactively addresses biases or harms that may emerge over time, and considers the mechanisms by which evidence of these can be gathered, analysed, and actioned.
  • Reliability: Monitors system robustness to ensure long-term operational integrity even when exposed to unanticipated contexts.
  • Understanding: Provides transparent, ongoing communication about system behaviour and any identified risks.
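To make the three trigger points above concrete for project teams, the following is a minimal, purely illustrative Python sketch of how the reassessment triggers, and the principles emphasised at each, might be recorded. Everything in it (the Trigger enum, the ProjectEvent schema, the reassessment_required helper) is hypothetical and is not part of any MOD process or tooling; it does not replace an ethical risk assessment or the processes set out in JSP 936.

```python
# Purely illustrative sketch, not part of any MOD tooling: one way a project
# team might record the three reassessment trigger points and the principles
# emphasised at each. All names and the event schema are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto


class Trigger(Enum):
    PROJECT_INITIATION = auto()
    MATERIAL_CHANGE = auto()
    ONGOING_REVIEW = auto()


# Principles to revisit at each trigger point, per the guidance above.
PRINCIPLES_IN_FOCUS = {
    Trigger.PROJECT_INITIATION: [
        "Responsibility", "Human Centricity", "Bias and Harm Mitigation",
    ],
    Trigger.MATERIAL_CHANGE: ["Reliability", "Understanding"],
    Trigger.ONGOING_REVIEW: [
        "Human Centricity", "Responsibility", "Bias and Harm Mitigation",
        "Reliability", "Understanding",
    ],
}


@dataclass
class ProjectEvent:
    """A change logged against an AI project (hypothetical schema)."""
    description: str
    scope_changed: bool = False
    environment_changed: bool = False
    functionality_changed: bool = False


def reassessment_required(event: ProjectEvent) -> bool:
    """A material change to scope, operational environment, or
    functionality triggers a fresh ethical risk assessment."""
    return (event.scope_changed
            or event.environment_changed
            or event.functionality_changed)


# Example: a shift from a non-operational to an operational setting.
event = ProjectEvent("Moved to operational deployment", environment_changed=True)
if reassessment_required(event):
    print("Reassess against:",
          ", ".join(PRINCIPLES_IN_FOCUS[Trigger.MATERIAL_CHANGE]))
```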

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or for the other processes set out in Dependable AI (JSP 936, Part 1), the MOD’s policy on responsible AI use and development. This training tool has been published to encourage wider discussion and awareness of AI ethics across MOD science and technology and among development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.