The UK's AI Principles
What does Responsibility mean in the context of AI development for UK Defence?
Filed under:
Responsibility

Responsibility for AI in UK Defence

Responsibility, the second principle of the Ambitious, Safe and Responsible policy paper, centres on ensuring clear accountability for the outcomes of AI-enabled systems while maintaining human control throughout their lifecycle. The key point is that humans, as moral agents, must always remain ultimately accountable for the ethical and lawful use of AI in Defence. This is achieved by establishing clearly defined lines of human control, accountability and risk ownership, and by ensuring effective management and governance throughout the system's use.

Definition of Responsibility: 
Human responsibility for AI-enabled systems must be clearly established, ensuring accountability for their outcomes, with clearly defined means by which human control is exercised throughout their lifecycles.

The increased speed, complexity and automation of AI-enabled systems may complicate our understanding of pre-existing concepts of human control, responsibility and accountability. This may occur through the sorting and filtering of information presented to decision-makers, the automation of previously human-led processes, or processes by which AI-enabled systems learn and evolve after their initial deployment. Nevertheless, as unique moral agents, humans must always be responsible for the ethical use of AI in Defence.

Human responsibility for the use of AI-enabled systems in Defence must be underpinned by a clear and consistent articulation of the means by which human control is exercised, and the nature and limitations of that control. While the level of human control will vary according to the context and capabilities of each AI-enabled system, the ability to exercise human judgement over their outcomes is essential.

Irrespective of the use case, responsibility for each element of an AI-enabled system, and an articulation of risk ownership, must be clearly defined from development, through deployment – including redeployment in new contexts – to decommissioning. This includes cases where systems are complex amalgamations of AI and non-AI components from multiple suppliers. Certain aspects of responsibility may therefore reach beyond the team deploying a particular system, to other functions within the MOD, or beyond, to the third parties that build or integrate AI-enabled systems for Defence.

Collectively, these articulations of human control, responsibility and risk ownership must enable clear accountability for the outcomes of any AI-enabled system in Defence. There must be no deployment or use without clear lines of responsibility and accountability, and the designated duty holder should not accept that responsibility unless satisfied that they can exercise control commensurate with the risks involved.

Key areas for assigning responsibility for AI-enabled systems (See card: Responsibility vs Accountability, what is the difference?)

1. Avoiding any gaps in responsibility 
2. Clearly defining human control 

1. Avoiding any gaps in responsibility

First and foremost, unambiguous lines of accountability must be established. This involves creating a clear chain of responsibility that extends across all stages of the product’s lifecycle, from development through deployment and operation. Without such clarity, gaps in decision-making processes can arise, leading to failures in oversight. Every stakeholder’s role should be explicitly articulated to avoid ambiguity.
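One practical way for a project team to make this chain explicit is to record a named owner against every lifecycle stage and check that none is left blank. The sketch below is purely illustrative; the stages and roles are hypothetical examples, not roles prescribed by MOD policy:

```python
# Illustrative sketch only: recording a named owner for every lifecycle stage
# so that no stage is left without accountability. Stages and roles are
# hypothetical examples, not prescribed designations.

lifecycle_owners = {
    "development":        "Delivery team lead",
    "test_and_assurance": "Independent assurance team",
    "deployment":         "Designated duty holder",
    "operation":          "Operating unit commander",
    "redeployment":       "Designated duty holder (re-approval required)",
    "decommissioning":    "Capability manager",
}

# A simple completeness check: flag any stage without a named owner.
unowned = [stage for stage, owner in lifecycle_owners.items() if not owner]
assert not unowned, f"Responsibility gap at stages: {unowned}"
```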

Consent is another crucial area requiring attention. It is essential to determine whose consent is needed for specific activities, what questions need to be asked, and who is responsible for collecting and managing this information. Systems must respect individual autonomy where applicable, ensuring that consent is sought in a transparent and structured manner. Furthermore, consent should not always be treated as static; in dynamic relationships, such as those involving human-machine teaming, it may need to be periodically revisited and updated. A clear process for managing and updating consent is necessary, with designated accountability for this task. (See card: What does "informed consent" mean?)

The intended purpose and use of AI-enabled systems must also be clearly defined and communicated to all stakeholders. This helps avoid confusion about the system’s capabilities and prevents gaps in responsibility by ensuring everyone understands the system’s scope and limitations.

Trackability and traceability are fundamental features of AI systems that enable accountability. Mechanisms should be in place to trace decisions and outcomes back to responsible individuals or teams. This not only supports accountability but also ensures transparency in the event of disputes or failures.
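In practice, this usually means writing every AI-assisted output to an audit log that ties it to the exact system version used and to a named risk owner. The following is a minimal, illustrative sketch; all field names and values are hypothetical rather than drawn from any MOD standard:

```python
# Illustrative sketch only: a minimal decision audit record linking each
# AI-assisted output to a named risk owner. All names are hypothetical and
# not prescribed by MOD policy.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid


@dataclass(frozen=True)
class DecisionRecord:
    """One traceable entry in an AI-assisted decision log."""
    system_id: str          # which AI-enabled system produced the output
    model_version: str      # exact model/configuration used
    input_summary: str      # what the system was asked to do
    output_summary: str     # what it recommended or did
    responsible_owner: str  # named individual or team accountable for this use
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


# Example: appending a record so the outcome can later be traced to its owner.
audit_log: list[DecisionRecord] = []
audit_log.append(DecisionRecord(
    system_id="route-planner-demo",
    model_version="0.3.1",
    input_summary="Requested supply route options for an exercise scenario",
    output_summary="Recommended route B; routes A and C flagged as higher risk",
    responsible_owner="Planning cell duty officer",
))
```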

Continuous risk assessment is another vital component. Systems must be designed to capture and analyse data from their operation, enabling the identification of changes in assumptions, regulatory requirements, or policy environments. A dedicated person or team should be tasked with managing this process, ensuring that evolving risks are recognised and addressed.
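A lightweight way to support this is to record the assumptions under which the system was approved and routinely compare them with observed operating conditions, escalating any breach to the designated owner. The sketch below assumes hypothetical thresholds, field names and an escalation step:

```python
# Illustrative sketch only: checking live operating conditions against the
# assumptions recorded at deployment, and flagging breaches to a named owner.
# Thresholds, field names and the notification step are hypothetical.

deployment_assumptions = {
    "max_input_rate_per_min": 500,        # load the system was assessed for
    "expected_data_source": "sensor-feed-v2",
    "min_model_confidence": 0.7,          # below this, outputs were not validated
}


def review_operating_conditions(observed: dict, owner: str) -> list[str]:
    """Compare observed conditions to recorded assumptions; return any breaches."""
    breaches = []
    if observed["input_rate_per_min"] > deployment_assumptions["max_input_rate_per_min"]:
        breaches.append("Input rate exceeds the load assessed at deployment")
    if observed["data_source"] != deployment_assumptions["expected_data_source"]:
        breaches.append("Data source differs from the one the system was assessed on")
    if observed["mean_confidence"] < deployment_assumptions["min_model_confidence"]:
        breaches.append("Model confidence has dropped below the validated range")
    for breach in breaches:
        # In practice this would raise an entry in the risk register owned by `owner`.
        print(f"RISK REVIEW ({owner}): {breach}")
    return breaches


review_operating_conditions(
    {"input_rate_per_min": 620, "data_source": "sensor-feed-v2", "mean_confidence": 0.64},
    owner="System risk owner",
)
```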

Finally, the articulation of risk ownership is essential. This includes determining and assigning accountability not only within the organisation but also to third-party suppliers, who must comply with ethical and legal standards. Each element of the system, from data inputs to operational deployment, must have clearly defined risk ownership. Moreover, questions about responsibility for actions performed under the guidance or influence of AI must be addressed. It is important to clarify the extent to which individuals remain responsible for their actions when AI plays a role in decision-making.

By addressing these considerations, organisations can minimise gaps in responsibility, ensuring that accountability is robust, transparent, and consistent across all aspects of AI system development and operation.

(See card: What is meant by “an accountability gap”?)

2. Clearly defining human control

Determining the scope and limits of human control over AI systems is a challenging task, particularly in high-stakes or automated contexts. Ensuring that humans can maintain meaningful oversight is critical to the safe and effective operation of AI systems. However, this requires a clear articulation of control mechanisms and consistent application across different operational scenarios.

The scope of human control must be clearly defined to establish the extent and limits of human involvement. In high-stakes environments, operators need the ability to exercise meaningful judgement over AI-driven outcomes. This involves implementing well-defined control measures, such as activation and deactivation protocols, conditions for allowing autonomy in specific scenarios, override capabilities, and mechanisms for human intervention in critical situations. These measures ensure that humans can maintain oversight, even in situations where the AI operates autonomously.
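As a rough illustration of what such control measures can look like in software, the sketch below gates autonomous action on a master activation switch, an agreed list of permitted actions, and a minimum confidence level; anything outside those bounds is referred to a human. All names, thresholds and actions are hypothetical:

```python
# Illustrative sketch only: a simple gate that permits autonomous action only
# within pre-agreed conditions and otherwise requires explicit human
# authorisation. Conditions and actions are hypothetical examples.
from dataclasses import dataclass


@dataclass
class ControlPolicy:
    autonomy_enabled: bool                 # master activation/deactivation switch
    allowed_autonomous_actions: set[str]   # actions cleared for autonomous execution
    min_confidence_for_autonomy: float     # below this, a human must decide


def decide_control_mode(policy: ControlPolicy, action: str, confidence: float) -> str:
    """Return who is allowed to act: the system autonomously, or a human operator."""
    if not policy.autonomy_enabled:
        return "human_required"        # system has been deactivated or paused
    if action not in policy.allowed_autonomous_actions:
        return "human_required"        # action is outside the agreed autonomy scope
    if confidence < policy.min_confidence_for_autonomy:
        return "human_required"        # uncertainty too high for unsupervised action
    return "autonomous_permitted"      # within the bounds the duty holder accepted


policy = ControlPolicy(
    autonomy_enabled=True,
    allowed_autonomous_actions={"flag_for_review", "prioritise_queue"},
    min_confidence_for_autonomy=0.85,
)
print(decide_control_mode(policy, action="flag_for_review", confidence=0.9))  # autonomous_permitted
print(decide_control_mode(policy, action="issue_alert", confidence=0.9))      # human_required
```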

Intervention mechanisms play a crucial role in maintaining meaningful oversight. AI systems must be designed to allow human operators to intervene or override decisions when necessary. This raises important questions about how these mechanisms function and what might trigger their activation. Effective intervention requires that operators have both the authority and the ability to act swiftly when needed.

Preserving human judgement is equally important, even in highly automated processes. Systems must be designed to ensure that operators can understand what the AI is doing and why it is doing it. (See card: What does Understanding mean in the context of AI development for UK Defence?) Without this understanding, it becomes impossible for humans to exercise informed judgement. Transparency and explainability are therefore essential features of any AI system intended for critical applications.

Finally, training AI systems to incorporate human feedback is vital. AI should be designed to recognise when it needs to consult a human operator, particularly in situations of uncertainty. Training methodologies that teach AI to respond appropriately to human feedback can help ensure that the system complements, rather than replaces, human decision-making. This approach fosters a collaborative relationship between humans and AI, enhancing both safety and effectiveness.
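A simple illustration of this pattern is an uncertainty threshold below which the system consults an operator and retains the operator's decision as a labelled example for later evaluation or retraining. The sketch below uses hypothetical names and a stand-in operator prompt:

```python
# Illustrative sketch only: when the model is uncertain it defers to a human
# operator, and the operator's decision is kept as a labelled example so the
# system can later be evaluated or improved against real human feedback.
# All names and thresholds are hypothetical.

REVIEW_THRESHOLD = 0.75
human_feedback_examples = []  # later usable for retraining or evaluation


def resolve_case(case_id: str, model_label: str, model_confidence: float,
                 ask_operator) -> str:
    """Use the model's answer when confident; otherwise consult the operator."""
    if model_confidence >= REVIEW_THRESHOLD:
        return model_label
    # Uncertain: consult a human and record their decision as feedback.
    operator_label = ask_operator(case_id, model_label, model_confidence)
    human_feedback_examples.append(
        {"case_id": case_id, "model_label": model_label,
         "model_confidence": model_confidence, "operator_label": operator_label}
    )
    return operator_label


# Example with a stand-in operator response (a real system would route this to a UI).
decision = resolve_case(
    "case-0042", model_label="possible match", model_confidence=0.55,
    ask_operator=lambda cid, label, conf: "no match",
)
print(decision, human_feedback_examples)
```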

(See card: What is the difference between "meaningful human control" and "appropriate human control"?)
(See card: What does having a person “in the loop” actually mean?)

Responsibility in the context of AI development for UK Defence requires a robust framework that ensures accountability, preserves human control, and clearly defines risk ownership throughout the lifecycle of AI-enabled systems. By embedding transparency, traceability, and human oversight into every stage, developers and stakeholders can ensure that Defence AI systems operate lawfully, ethically, and in alignment with human values. While the challenges posed by AI’s complexity are significant, maintaining clear accountability and governance safeguards trust and operational integrity in Defence applications.

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or other processes set out in JSP 936 Part 1 (Dependable AI), the MOD's policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology and among development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.