Ethical risk assessments are not a one-off activity: they are required at project initiation, when material changes occur, and as part of ongoing review processes. By integrating these assessments throughout the AI lifecycle and aligning them with the MOD’s AI Ethical Principles, Defence ensures AI-enabled systems remain safe, effective, and ethically sound in dynamic operational and regulatory environments.
1. At project initiation
2. When something significant changes
3. As part of ongoing review
1. At project initiation: Ethical risk assessments are essential at the beginning of a project to embed ethical considerations from the outset. This ensures AI systems are developed on a robust foundation of accountability, transparency, and alignment with MOD values. Alignment with the Ethics Principles at this stage is key, as it establishes ethical, legal, and operational benchmarks before development begins (a simple checklist sketch follows this item’s examples). For example:
- Responsibility: Clearly defines oversight roles and accountability mechanisms early in the project.
- Human centricity: Evaluates potential impacts on users and ensures systems enhance, rather than undermine, human control.
- Bias and harm mitigation: Identifies and addresses risks of bias or harm in system design and data usage.
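One way to picture the initiation-stage benchmarks is as a checklist keyed to the MOD’s five AI Ethical Principles. The sketch below is purely illustrative: the names Principle and InitiationChecklist are assumptions made for this example, not an official MOD artefact or schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class Principle(Enum):
    """The MOD's five AI Ethical Principles."""
    HUMAN_CENTRICITY = "Human centricity"
    RESPONSIBILITY = "Responsibility"
    UNDERSTANDING = "Understanding"
    BIAS_AND_HARM_MITIGATION = "Bias and harm mitigation"
    RELIABILITY = "Reliability"


@dataclass
class InitiationChecklist:
    """Ethical, legal, and operational benchmarks agreed before development begins.

    Hypothetical structure for illustration only.
    """
    benchmarks: dict[Principle, str] = field(default_factory=dict)

    def record(self, principle: Principle, benchmark: str) -> None:
        """Record the agreed benchmark for one principle."""
        self.benchmarks[principle] = benchmark

    def complete(self) -> bool:
        """True only when every principle has an agreed benchmark at initiation."""
        return set(self.benchmarks) == set(Principle)


# Example: accountability mechanisms defined early, per the Responsibility bullet.
checklist = InitiationChecklist()
checklist.record(Principle.RESPONSIBILITY, "Named senior owner; oversight roles defined")
print(checklist.complete())  # False until all five principles are addressed
```

On this picture, a project would only proceed past initiation once complete() returns True, i.e. every principle has a recorded, agreed benchmark.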
2. When something significant changes: If the scope, operational environment, or functionality of the AI system changes significantly, the risk profile must be reassessed. This allows the MOD to address emerging risks, such as unintended system behaviour, evolving biases, or impacts on operational contexts. At this point the ethical risk framework must be adapted to address new challenges and maintain alignment with operational needs. Examples of material changes include: a shift in the AI’s application from a non-operational to an operational setting; updates that significantly alter system functionality or introduce new learning capabilities; or deployment in a new operational environment with differing legal or ethical implications (a sketch of such a trigger check follows this item’s examples). Re-alignment with the Ethics Principles at this stage is key. For example:
- Reliability: Confirms that the system remains robust and adaptable to changes without compromising safety or performance.
- Understanding: Ensures that new or modified system functionalities are interpretable and transparent to relevant stakeholders.
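The reassessment trigger described above can be sketched as a simple check over logged changes. Everything here (MaterialChange, ChangeRecord, requires_reassessment) is a hypothetical illustration of the logic, not a prescribed MOD mechanism.

```python
from dataclasses import dataclass
from enum import Enum, auto


class MaterialChange(Enum):
    """Hypothetical categories mirroring the material changes listed above."""
    NON_OPERATIONAL_TO_OPERATIONAL = auto()   # shift into an operational setting
    FUNCTIONALITY_SIGNIFICANTLY_ALTERED = auto()  # updates that alter what the system does
    NEW_LEARNING_CAPABILITY = auto()          # new learning capabilities introduced
    NEW_OPERATIONAL_ENVIRONMENT = auto()      # differing legal or ethical implications


@dataclass
class ChangeRecord:
    """One logged change to an AI-enabled system (illustrative structure)."""
    description: str
    categories: set[MaterialChange]


def requires_reassessment(change: ChangeRecord) -> bool:
    """Any recognised material-change category triggers a fresh ethical risk assessment."""
    return bool(change.categories)


# Example: moving a system into an operational setting must trigger reassessment.
change = ChangeRecord(
    description="Decision-support model redeployed from trials into live operations",
    categories={MaterialChange.NON_OPERATIONAL_TO_OPERATIONAL},
)
assert requires_reassessment(change)
```

The deliberately conservative rule, under which any recognised category triggers reassessment, reflects the guidance: it is the presence of a material change, not its perceived severity, that restarts the assessment.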
3. As part of ongoing review: Ongoing reviews are crucial to managing latent risks, addressing changes in system behaviour, and ensuring continued compliance with MOD standards. They provide continuous oversight to detect and address ethical risks throughout the system’s lifecycle. This may include monitoring, reassessment, or risk escalation (a sketch of a review record capturing these actions follows this item’s examples). Ongoing alignment with the Ethics Principles is key. For example:
- Human centricity: Keeps human operators informed and empowered to intervene if the system operates outside acceptable ethical bounds.
- Responsibility: Ensures accountability is maintained throughout the lifecycle, with evidence-based decisions recorded and traceable.
- Bias and harm mitigation: Proactively addresses biases or harms that may emerge over time, including the mechanisms by which evidence of these can be gathered, analysed, and actioned.
- Reliability: Monitors system robustness to ensure long-term operational integrity even when exposed to unanticipated contexts.
- Understanding: Provides transparent, ongoing communication about system behaviour and any identified risks.
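Finally, the ongoing-review actions above (monitoring, reassessment, risk escalation) and the Responsibility requirement for recorded, traceable decisions could be captured along these lines. As before, ReviewOutcome, ReviewEntry, and record_review are assumed names for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum, auto


class ReviewOutcome(Enum):
    """Possible results of an ongoing review (illustrative)."""
    CONTINUE_MONITORING = auto()  # no new ethical risk identified
    REASSESS = auto()             # risk profile has changed; rerun the assessment
    ESCALATE = auto()             # risk exceeds acceptable bounds; escalate to the owner


@dataclass
class ReviewEntry:
    """An evidence-based, traceable record of one review decision."""
    reviewer: str
    evidence: list[str]
    outcome: ReviewOutcome
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


review_log: list[ReviewEntry] = []


def record_review(reviewer: str, evidence: list[str], outcome: ReviewOutcome) -> ReviewEntry:
    """Append a review decision so accountability remains traceable over the lifecycle."""
    entry = ReviewEntry(reviewer=reviewer, evidence=evidence, outcome=outcome)
    review_log.append(entry)
    return entry


# Example: a monitoring check that finds drift in system behaviour and escalates.
record_review(
    reviewer="System ethics owner",
    evidence=["Monitoring report Q3: output drift outside agreed bounds"],
    outcome=ReviewOutcome.ESCALATE,
)
```

Because every outcome, including routine continued monitoring, is appended with its supporting evidence and a timestamp, accountability stays traceable across the lifecycle rather than only at formal review points.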