Principles into Practice
What is meant by “an accountability gap”?
Filed under:
Responsibility

“The UK recognises that some states and civil society are calling for new legally binding rules on the basis that weapons with autonomous functions will introduce new elements to the battlefield not covered by existing IHL. However, the UK believes that there is no gap in the application of IHL in respect to autonomy in weapons systems. Existing IHL already regulates states in their development and procurement of weapons, and methods and means of warfare – including those with advanced technologies. It is a technologically-agnostic, robust and flexible legal regime for the regulation of armed conflict.”
Input to UN Secretary-General’s Report on Lethal Autonomous Weapons Systems (LAWS)

  1. The dog handler analogy: maintaining responsibility when direct control is not possible 
  2. How does the analogy apply in practice?
  3. How can I mitigate the responsibility gap?  
 
 
1. The dog handler analogy
  • The "dog owner analogy" in Deane Baker's Should We Ban Killer Robots? is a useful way of clarifying questions of control and accountability when AI-enabled systems in general, or lethal autonomous weapons systems (LAWS) in particular, make decisions independently. 
  • Baker likens the relationship between a human and an autonomous weapon to that of a dog owner and their dog. A dog owner is responsible for training and controlling the dog, but if the dog unexpectedly attacks someone, the owner is still held accountable for the harm caused. Similarly, an operator or developer of an autonomous weapon system should remain responsible for its actions, even if the system operates independently within its programmed parameters. 
The analogy illustrates how responsibility can be maintained even when direct control is not possible. Just as dog owners are expected to ensure their pets are well-trained and restrained, developers and users of LAWS should ensure these systems are designed and deployed responsibly. While a dog might occasionally behave unpredictably, the owner is expected to mitigate risks through proper training and care. Similarly, future LAWS may act autonomously, but they must be designed with safeguards to minimize unpredictable outcomes. The analogy highlights the importance of defining who is accountable for an autonomous agent's actions, reinforcing that creators and operators must take ultimate responsibility for their tools, regardless of any autonomy. 
  
 
2. How does the analogy apply in practice?
  • Imagine you are asked to dog-sit for a new neighbour whom you don’t know well. They leave a note about what Fido eats and when Fido likes to be fed, along with its favourite toy. Is that enough?
  • What do you need to know to be able to look after the animal? 
  • Even if you didn’t know much more about the animal, you would still be accountable if the dog did something harmful, because you are the one in charge of it. However, that accountability would be shared with your neighbour if they failed to explain something important, such as the dog’s aversion to postmen, or that it must never be fed after midnight.
  • What are the appropriate training requirements? What does the operator/handler need to know in order to deploy the capability responsibly? How comprehensive does the training manual need to be, or will only hands-on, in-depth train-the-trainer instruction suffice?
Whether we are talking about a neighbour’s dog, a police animal or a combat assault dog working with a Special Operations Forces team, there are no circumstances in which no one can be held accountable. For example, even if an animal “goes rogue” and causes undue harm, the owner or the handler will be held accountable. There may well be questions to put to the trainer, or even the breeder, to determine whether they contributed in some way to the situation. While the animal may end up being destroyed as a consequence, there is ‘no sense in which this is a punishment for the dog. Rather it is the relevant human who is held accountable, while the dog is killed as a matter of public safety’.
This analogy can help us think about many other areas of an AI system’s operation and capabilities in terms of responsibility and accountability. For example, how much freedom is it appropriate to grant in different situations? Bounded autonomy for a system could be geographic, like a walled exercise area, or the operating area might not be specified at all, with the handler instead maintaining control through a short leash. In some cases a longer leash can be permitted, but the lead can still be shortened subsequently if the situation changes or other factors need to be taken into account. The analogy also emphasises the role of trust: how long a leash the handler grants should reflect how far the system has earned it.
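The “leash” idea can be made concrete in software. The following is a minimal, purely illustrative sketch of a bounded-autonomy wrapper: the handler sets a geofence and a maximum delegated authority, and can tighten both mid-operation. The class names, authority levels and thresholds are invented for discussion only and do not represent any MOD system or policy.

```python
# Purely illustrative sketch of "bounded autonomy": a handler-set geofence and
# a delegated authority level that can both be tightened mid-operation.
# All names and levels are hypothetical, invented for discussion only.
from dataclasses import dataclass

AUTHORITY_LEVELS = ["observe", "track", "recommend"]  # low -> high (illustrative)


@dataclass
class Leash:
    geofence_km: float   # radius of the permitted operating area ("walled exercise area")
    authority: str       # highest action the system may take without referring back


class BoundedAgent:
    def __init__(self, leash: Leash):
        self.leash = leash

    def shorten_leash(self, geofence_km: float, authority: str) -> None:
        """Handler tightens constraints when the situation changes."""
        self.leash.geofence_km = min(self.leash.geofence_km, geofence_km)
        self.leash.authority = authority

    def may_act(self, action: str, distance_km: float) -> bool:
        """Every proposed action is checked against the handler's current bounds."""
        within_area = distance_km <= self.leash.geofence_km
        within_authority = (AUTHORITY_LEVELS.index(action)
                            <= AUTHORITY_LEVELS.index(self.leash.authority))
        return within_area and within_authority


# A long leash is granted, then shortened as circumstances change.
agent = BoundedAgent(Leash(geofence_km=10.0, authority="track"))
print(agent.may_act("track", distance_km=8.0))   # True: inside the permitted bounds
agent.shorten_leash(geofence_km=2.0, authority="observe")
print(agent.may_act("track", distance_km=8.0))   # False: the leash has been shortened
```

The point of the sketch is not the mechanism itself but that the handler, not the system, sets and adjusts the bounds, and so remains the natural locus of accountability.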
 
 
3. How can I mitigate the responsibility gap?
Questions that you may need to consider: 
    • How are responsibility gaps avoided when AI systems are composed of multiple components or involve multiple stakeholders? 
    • What processes ensure that every aspect of the system has a clearly defined risk and accountability owner? (An illustrative accountability register is sketched after this list.) 
    • Are contingency plans in place to address scenarios where accountability is unclear (e.g., system failures, emergent behaviours)? 
    • How is responsibility managed in situations where AI decisions are based on probabilistic or uncertain outputs? 
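One way to make the question of clearly defined owners concrete is to keep an explicit register alongside the system design. The sketch below is a hypothetical example only; the component names, roles and escalation routes are invented for discussion and are not drawn from any MOD process.

```python
# Hypothetical "accountability register" for a multi-component AI system.
# Component names, roles and escalation routes are invented for illustration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class AccountabilityEntry:
    component: str         # part of the system (e.g. a sensor model, a planner)
    risk_owner: str        # named role accountable for the risks this component introduces
    escalation_route: str  # who decides when behaviour is unclear or emergent


REGISTER = [
    AccountabilityEntry("perception_model", "Project Technical Lead", "Ethics Review Panel"),
    AccountabilityEntry("target_recommendation", "Operational Commander", "Legal Adviser"),
    AccountabilityEntry("training_data_pipeline", "Data Supplier (under contract)", "Project Technical Lead"),
]


def find_gaps(register):
    """Return components whose ownership or escalation route is missing."""
    return [e.component for e in register
            if not e.risk_owner.strip() or not e.escalation_route.strip()]


print(find_gaps(REGISTER))  # [] means every component has a named owner and escalation route
```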
 
Potential Tensions: 
    • Managing overlapping responsibilities in complex, multi-stakeholder systems. 
    • Ensuring clear accountability for AI systems with emergent or probabilistic behaviours. 
    • Balancing rapid deployment needs with robust accountability structures. 

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or the other processes set out in JSP 936 Part 1 (Dependable AI), the MOD’s policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology, and among development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.