Principles into Practice 11
What does “Meaningful Human Control” mean?
Filed under:
Responsibility

The prerequisites for control: “A controls B if and only if the relation between A and B is such that A can drive B into whichever of B's normal range of states A wants B to be in” (Dennett 1984, 52).
 
Which means an agent has: 
a)     intentions about the state or behaviour of that thing 
and 
b)    the ability to cause that thing to go into the intended state or behave in the intended way 
 
In Should We Ban Killer Robots?, Deane Baker (2022) points out that epistemology is key to understanding what “meaningful” looks like in the context of CLAS, although the approach is helpful in wider contexts as well. To be able to exert control, one must know how to do so. If a user is unaware that a device has an on/off switch because it is not labelled as such, or it is not clear how to activate it, they cannot be said to have control, even though a different user who knows how to use that switch could absolutely be in control of the same system.
 
1. The degree of control need not be absolute to be meaningful.
There are always factors that can impinge upon our ability to exert our will (sudden illness, etc.), but as long as these are not routine or expected, complete control over every potential variable need not be present. It is also possible to have something approaching control that nonetheless falls short of meaningful control. For example, an unmarked dial that allows you to steer left or right, but not to control how far, might give you “bare control”, but this would be unlikely to count as sufficient for meaningful control.
As for accountability, while knowledge is still key, one does not need to have seen or even be aware of the outcome of an action to be responsible for it. For example, throwing a stone at a window but running away before it strikes does not remove responsibility for the outcome, so long as that outcome could have been easily predicted or even expected.
 
2. Sufficient foreknowledge of potential outcomes is required for meaningful control. 
If one knows, or should have known (through effective due diligence, etc.), that something was likely to occur, then one cannot claim that it was not the intended outcome and thereby escape accountability. If a well-trained and cared-for police dog acts out of character on a deployment due to an unexpected environmental change, that does not mean that the handler has done anything wrong. However, if the environmental change was entirely predictable, or if the training had not been done appropriately, or if the dog had been mistreated, then the handler might be entirely responsible and accountable for the outcome.
The presence of an intermediate device/technology or system does not change the understanding above, or create an accountability gap. 
 
See card: What is the difference between "meaningful human control" and "appropriate human control"? 

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or other processes set out in Dependable AI (JSP 936 Part 1), the MOD's policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology and development teams within academia and industry, and demonstrates our commitment to the practical implementation of our AI ethics principles.