The UK's AI Principles
What legal considerations must I be aware of when assessing and managing ethical risk?
Filed under:
Human Centricity
Responsibility
Understanding
Bias and Harm Mitigation
Reliability

When assessing and managing ethical risks in AI-enabled capabilities within the MOD, legal considerations are crucial to ensuring compliance with national and international legal frameworks. The MOD upholds these laws through established processes for seeking and applying legal advice throughout the AI project lifecycle. Below are key legal considerations:
  1. How can individuals and organisations ensure adherence to national and international law?
  2. How do the MOD AI Ethical Principles help us shape our legal understanding of AI?
  3. When and why should we seek legal advice during AI projects?
  4. How are legal risks managed in novel or contentious AI applications?

1. How can individuals and organisations ensure adherence to national and international law?
The MOD ensures that AI projects comply with a comprehensive range of legal obligations, including:
National laws:
  • Employment law: Protects the rights of personnel involved in AI development and use.
  • Data Protection Act 2018 and UK GDPR: Govern lawful, transparent, and secure data handling.
  • Procurement law: Ensures the fair and lawful acquisition of AI technologies.
International laws:
  • International Humanitarian Law (IHL): A robust, internationally agreed legal framework governing the use of force. It is also technology-agnostic, and as such governs the use of AI in conflict, ensuring compliance with the principles of distinction (between combatants and civilians), proportionality, and necessity.
  • Article 36 Review: Refers to obligations under Additional Protocol I to the Geneva Conventions (1977) to ensure that any new weapon, means, or method of warfare complies with international law.
  • Cross-border data regulations: Resolve conflicts between UK data protection law and international privacy obligations, especially in multinational operations.


2. How do the MOD AI Ethical Principles help us shape our legal understanding of AI?
  • Human Centricity: Ensures controls are designed with human well-being and ethical use in mind, requiring appropriate human oversight and respect for human rights laws.
  • Responsibility: Embeds accountability and oversight at every stage of development, ensuring lawful use of personal and operational data in accordance with UK GDPR and international data protection laws.
  • Understanding: Promotes transparency to meet legal explainability requirements. Legal compliance measures must be clearly communicated to developers, operators, and end-users to ensure understanding and adherence.
  • Bias and Harm Mitigation: Aligns with anti-discrimination laws to reduce bias and ensure fairness, including the ability to align with new laws as they emerge. It also requires safeguards against misuse, particularly in international or cross-jurisdictional contexts where laws may conflict.
  • Reliability: Ensures systems are robust and meet current operational and legal standards, with a reasonable ability to adapt to and comply with future changes.

3. When and why should we seek legal advice during AI projects?
Seeking timely legal advice is essential for ensuring compliance and managing ethical risks. Individuals involved in AI projects, such as developers and team leads, should consult legal experts at project initiation to establish guidelines around intellectual property, data usage, and operational compliance. Throughout the development and deployment stages, legal reviews should be sought whenever new risks arise or significant changes occur. Developers are responsible for obtaining their own legal advice concerning product development, particularly around data handling and intellectual property rights, while Defence-related queries should be directed to MOD Legal Advisers (MODLA). Proactively seeking legal advice demonstrates responsibility, promotes understanding of legal frameworks, and helps ensure systems are robust and reliable.

Many areas of activity have their own specialist policies and legal requirements. These range from Civil Aviation Authority regulations governing the use of drones in UK airspace, covering who can fly what, where, and how, through to the extensive rules on the movement of blood plasma in the Blood Safety and Quality Regulations 2005, which cover storage and transport conditions, traceability, labelling, and tracking. The UK Armed Forces and Security Services operate within specific legal frameworks, meaning, for example, that there are limits on what kind of personal or sensitive data they are permitted to access on UK citizens. The Regulation of Investigatory Powers Act 2000 (RIPA) and its successor, the Investigatory Powers Act 2016 (IPA), set out clear limits and oversight mechanisms for the interception, collection, and use of data. These laws require that any access to personal or sensitive information must be proportionate, necessary, and authorised by appropriate legal and judicial oversight. Focusing on a technical solution without appreciating the wider legal context will inevitably get a project into trouble when those constraints are finally encountered.

4. How are legal risks managed in novel or contentious AI applications?
Some AI applications, such as those involving kinetic effects or novel operational contexts, require heightened legal scrutiny. The MOD mandates additional oversight to manage the unique risks associated with:
  • Compliance with international arms agreements: For example, ensuring adherence to treaties such as the Convention on Certain Conventional Weapons (CCW).
  • Prohibition of fully autonomous lethal systems: Ensuring that human control remains central in systems that could cause harm.
Projects involving significant ethical or legal risks must be escalated to senior oversight bodies, such as the Joint Requirements Oversight Committee (JROC) or Defence Ministers, for review and approval.
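For project teams, the escalation triggers described above can also be made explicit in project tooling. The sketch below is a minimal, hypothetical Python illustration of how such triggers might be encoded as a pre-review checklist; the class, fields, and oversight levels are illustrative assumptions for discussion, not MOD policy or a real approval workflow.

from dataclasses import dataclass

# Hypothetical illustration only: the fields and oversight levels below are
# assumptions for discussion, not MOD policy or a real approval workflow.

@dataclass
class AIProjectRiskProfile:
    involves_kinetic_effects: bool   # e.g. weapon or targeting support
    novel_operational_context: bool  # untested environment or use case
    processes_personal_data: bool    # engages UK GDPR / DPA 2018 duties
    crosses_jurisdictions: bool      # multinational data or operations

def required_oversight(profile: AIProjectRiskProfile) -> str:
    """Map a project's risk flags to an illustrative oversight level."""
    if profile.involves_kinetic_effects or profile.novel_operational_context:
        # Significant ethical/legal risk: escalate to senior oversight.
        return "Escalate to senior oversight (e.g. JROC / Defence Ministers)"
    if profile.processes_personal_data or profile.crosses_jurisdictions:
        # Legal risk present: seek legal review before proceeding.
        return "Seek legal advice (e.g. MODLA) before proceeding"
    return "Standard project-level ethical risk assessment"

# Example: a novel operational context triggers senior escalation.
profile = AIProjectRiskProfile(
    involves_kinetic_effects=False,
    novel_operational_context=True,
    processes_personal_data=True,
    crosses_jurisdictions=False,
)
print(required_oversight(profile))

In practice, any such checklist would complement, not replace, the formal ethical risk assessment and legal review processes described in this section.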

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or the other processes set out in Dependable AI (JSP 936 Part 1), the MOD's policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology and among development teams within academia and industry, and demonstrates our commitment to the practical implementation of our AI ethics principles.