Understanding for AI in UK Defence
Understanding is a foundational principle for the ethical development and use of AI in UK Defence. It requires that all relevant individuals (developers, operators, decision-makers, and other stakeholders) have a sufficient understanding of AI-enabled systems and their outputs. This principle is vital for enabling responsible, effective, and ethical decision-making throughout the lifecycle of AI systems, from development and deployment to eventual retirement. It incorporates transparency, explainability, and context-specific training while balancing operational needs with ethical and legal considerations. The official definition reads as follows:
Definition of Understanding:
AI-enabled systems, and their outputs, must be appropriately understood by relevant individuals, with mechanisms to enable this understanding made an explicit part of system design. Effective and ethical decision-making in Defence, from the frontline of combat to back-office operations, is always underpinned by appropriate understanding of context by those making decisions. Defence personnel must have an appropriate, context-specific understanding of the AI-enabled systems they operate and work alongside.
This level of understanding will naturally differ depending on the knowledge required to act ethically in a given role and with a given system. It may include an understanding of the general characteristics, benefits and limitations of AI systems. It may require knowledge of a system’s purposes and correct environment for use, including scenarios where a system should not be deployed or used. It may also demand an understanding of system performance and potential fail states. Our people must be suitably trained and competent to operate or understand these tools.
To enable this understanding, we must be able to verify that our AI-enabled systems work as intended. While the ‘black box’ nature of some machine learning systems means that they are difficult to fully explain, we must be able to audit either the systems or their outputs to a level that satisfies those who are duly and formally responsible and accountable. Mechanisms to interpret and understand our systems must be a crucial and explicit part of system design across the entire lifecycle.
This requirement for context-specific understanding based on technically understandable systems must also reach beyond the MOD, to commercial suppliers, allied forces and civilians. Whilst absolute transparency as to the workings of each AI-enabled system is neither desirable nor practicable, public consent and collaboration depend on context-specific shared understanding. What our systems do, how we intend to use them, and our processes for ensuring beneficial outcomes result from their use should be as transparent as possible, within the necessary constraints of the national security context.
From: Ambitious, safe, responsible: our approach to the delivery of AI-enabled capability in Defence, published 15 June 2022.
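The definition above requires that systems or their outputs can be audited "to a level that satisfies those who are duly and formally responsible and accountable". The source document prescribes no implementation, but as a minimal illustrative sketch, an output-level audit record for a single AI recommendation might capture enough context for post-hoc review. All field names here are assumptions, not a prescribed MOD schema:

```python
import json
import hashlib
from datetime import datetime, timezone

def audit_record(model_id: str, model_version: str,
                 inputs: dict, output: object, operator_id: str) -> dict:
    """Build a reviewable record of a single model decision.

    Illustrative only: the fields (model_id, operator_id, etc.)
    are assumed for this sketch, not a prescribed MOD schema.
    """
    payload = json.dumps(inputs, sort_keys=True).encode("utf-8")
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash rather than store raw inputs where data is sensitive.
        "input_digest": hashlib.sha256(payload).hexdigest(),
        "output": output,
        "operator_id": operator_id,
    }

# Example: log one recommendation for later review by those accountable.
record = audit_record("route-planner", "1.4.2",
                      {"region": "north", "threat_level": 2},
                      "recommend route B", "op-0042")
print(json.dumps(record, indent=2))
```

Recording a digest of the inputs, rather than the inputs themselves, is one way to keep an auditable trail while respecting the national security constraints the definition acknowledges.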
1. Who needs to understand what?
Understanding is in large part driven by a combination of appropriate transparency and explainability. AI-enabled systems must provide clarity in their functionality, purpose, and outputs. This requires transparency about how systems operate, tailored to the audience's level of technical expertise. Transparency does not necessitate revealing all details, especially where national security, privacy, or intellectual property constraints apply, but rather ensuring stakeholders can meaningfully engage with the system's outputs and risks. Explainable AI (XAI) techniques should be employed to highlight the reasoning behind system behaviours and decisions. This, in turn, helps to build trust with stakeholders and the wider public.
Explainability should focus on: clarifying why a system behaves in a certain way or makes specific recommendations; communicating the key influences on system decisions (e.g. data inputs, algorithms); and simplifying complex concepts so that stakeholders are informed without being overwhelmed. Where full transparency is impractical, for example because raw system detail would be far too much information for an operator to make sense of, alternative strategies such as summarised risk assessments should be developed; one such summarisation technique is sketched below.
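As a minimal, hedged illustration of one XAI technique (not an endorsed MOD toolchain), permutation importance measures how much each input feature influences a model's predictions, producing the kind of short, operator-readable summary described above. The dataset and model here are placeholders for illustration only:

```python
# Sketch of one explainability technique: permutation importance,
# which ranks how strongly each input feature drives predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative public dataset standing in for operational data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Present a short, digestible summary rather than raw model internals.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```

A ranked list of the handful of most influential inputs is one way to give an operator a meaningful view of system behaviour without exposing every internal detail.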
These considerations raise related questions about how such understanding is built and maintained in practice, which the training and education programmes described below address.
2. Training and Education Programmes
Training and education programmes should support role-specific needs in ethical decision-making, incorporating scenario-based learning that helps individuals navigate ethical dilemmas and understand potential fail states. Ongoing programmes also need consideration: continuous education is vital if personnel are to adapt to evolving AI capabilities and shifting operational contexts. Without appropriate education and training, individuals cannot identify and respond effectively to system malfunctions, nor address issues proactively while maintaining ethical standards. A considered, targeted approach fosters competence and confidence in managing the unique challenges associated with specific roles.
During the design phase, interpretability and explainability should be embedded from the outset to create systems that are transparent and comprehensible. In the deployment stage, operators must be trained to manage, monitor, and address system behaviours effectively. Finally, during decommissioning, it is crucial to ensure the safe retirement of systems and to transfer knowledge so that institutional understanding is preserved. This holistic approach aligns understanding and training with every stage of the AI lifecycle.
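One widely used mechanism for carrying understanding across this lifecycle is a model card: a structured summary of a system's purpose, limits, and fail states, written at design time, consulted in deployment, and completed at retirement. A minimal sketch follows; the field names are assumptions for illustration, not a prescribed MOD format:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A lightweight, human-readable record that travels with a system
    from design to decommissioning. Field names are illustrative."""
    name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    known_fail_states: list[str] = field(default_factory=list)
    operator_training: str = ""
    decommission_notes: str = ""  # completed at retirement to transfer knowledge

# Hypothetical example system, for illustration only.
card = ModelCard(
    name="image-triage",
    version="2.1.0",
    intended_use="Prioritise imagery for human analyst review.",
    out_of_scope_uses=["Autonomous targeting", "Use without analyst review"],
    known_fail_states=["Degrades sharply in low-light imagery"],
    operator_training="Module IT-3: interpreting triage confidence scores",
)
print(card)
```

Because the card records out-of-scope uses and known fail states alongside the intended use, it directly supports the context-specific understanding the definition calls for, including the scenarios where a system should not be deployed.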
Public engagement is a form of education and is essential for fostering trust in the use of AI within Defence. Wherever possible, transparent communication should emphasise the intended purposes and benefits of the systems, demonstrating their value and alignment with societal interests (see Human Centricity). Additionally, clear explanations of the safeguards in place to ensure ethical use are crucial in reassuring the public. Defence should also outline the processes established to address potential risks and unintended outcomes, highlighting accountability and a commitment to responsible AI deployment. This openness helps build confidence and promotes a shared understanding of the ethical frameworks guiding AI use.