Principles into Practice 10
What does having a person “in the loop” actually mean?
Filed under:
Responsibility

Although it does not appear in formal UK (or US) doctrine, the language of being “in the loop” is often invoked when considering matters of human control (clearly stated in the Responsibility Principle) over an AI-enabled system. Describing a system as “human in-the-loop” suggests that the degree of human control over the AI-enabled system is clear and unambiguous. For example:

Human in-the-loop: Humans are actively involved in the decision-making at every stage. The system can operate semi-autonomously but requires human input at each stage. For example, the pilot uses the AI system to identify a target and then decides to engage, explicitly authorising each step of the process.
On-the-loop: A human monitors the system and can intervene if necessary but is not directly controlling it in real time. This is a hierarchical control structure: the person occupies a supervisory role with the capability to monitor and to intervene or override.
Over-the-loop: The human operator has strategic oversight but lacks granular control. Closely related to on-the-loop, but the operator may have less ability to intervene, or less information on which to base an intervention decision.
Beside-the-loop: A human works in tandem or partnership with an AI-enabled system, with inputs from both combining to achieve a goal collaboratively. For example, human-AI teaming in a command-and-control scenario where strategic guidance from the human is interpreted and executed autonomously.
Out-of-the-loop: The system operates fully autonomously without real-time human input, involvement or oversight. This may be highly efficient, but it limits the degree of ethical oversight possible.
After-the-loop: Human input is focused on post-operation review and feedback.
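
One way to make these distinctions concrete is to treat them as a simple data structure. The following Python sketch is purely illustrative: the names and the intervention flags are our own assumptions, not drawn from any doctrine or real system. It records each mode alongside whether it permits real-time human intervention:

```python
from enum import Enum

class HumanControlMode(Enum):
    """Illustrative encoding of the 'loop' taxonomy above (hypothetical names)."""
    IN_THE_LOOP = "human authorises every stage"
    ON_THE_LOOP = "human supervises and can override in real time"
    OVER_THE_LOOP = "human has strategic oversight, limited intervention"
    BESIDE_THE_LOOP = "human and AI collaborate on a shared goal"
    OUT_OF_THE_LOOP = "no real-time human input or oversight"
    AFTER_THE_LOOP = "human reviews and gives feedback post-operation"

# Assumed mapping: whether each mode allows a human to intervene
# while the system is running.
REAL_TIME_INTERVENTION = {
    HumanControlMode.IN_THE_LOOP: True,
    HumanControlMode.ON_THE_LOOP: True,
    HumanControlMode.OVER_THE_LOOP: True,   # possible, but degraded
    HumanControlMode.BESIDE_THE_LOOP: True,
    HumanControlMode.OUT_OF_THE_LOOP: False,
    HumanControlMode.AFTER_THE_LOOP: False,
}
```

Even this toy encoding forces a design decision (is intervention under over-the-loop genuinely possible?), which previews the ambiguity discussed next.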

Note that some of these descriptions may need to be combined if they are to be useful. More importantly, however, what these definitions actually mean in practice depends on how narrowly or broadly the loop is defined. The “in the loop” language is ambiguous because many who use it fail to define precisely which loop is under consideration, so there is a critical need for clarity:

• The term "loop" can refer to different stages or levels of control, ranging from high-level strategic decisions to granular, real-time operational tasks.
• Without explicitly defining the loop under discussion, any claims about human involvement or autonomy risk being ambiguous or misleading.
• In practice, the level of human involvement can change dynamically during an operation, making rigid classifications less useful.

It is a matter of UK policy that ‘The UK does not possess fully autonomous weapon systems and is not developing them. Operation of our weapons will always be under human control as an absolute guarantee of human oversight and authority, and of accountability for weapons usage.’ Therefore, understanding exactly where and when that human control starts and finishes is essential.

------------------------------------------------------------------------

Consider a “legacy” weapons technology: the Advanced Medium-Range Air-to-Air Missile (AMRAAM), which entered service in 1991. When a pilot employs this missile, they first use their aircraft’s radar to identify an airborne “track” and determine through various means that the track is “hostile.” The pilot releases the missile, and then the missile takes over. It flies to a pre-determined volume of space and emits energy with its own onboard radar. It picks up the “track” and then calculates the best intercept geometry to target and destroy the hostile track.

Is the AMRAAM a lethal autonomous weapons system? Is there a human in the loop? Well, it depends entirely upon where we draw the loop, which can refer either to the pilot’s decision to launch the missile (human in-the-loop) or to the missile’s autonomous guidance to the target (human out-of-the-loop).
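
This boundary-dependence can be made explicit in code. The sketch below is a hypothetical illustration, not a model of the actual weapon: the stage names and the classify helper are our own assumptions. It shows how the same engagement sequence yields different classifications depending on which stages are counted as “the loop”:

```python
# Hypothetical stages of the AMRAAM engagement described above,
# tagged with whether a human performs or authorises that stage.
ENGAGEMENT_SEQUENCE = [
    ("identify track on aircraft radar", True),   # pilot
    ("declare track hostile",            True),   # pilot
    ("release missile",                  True),   # pilot
    ("fly to predetermined volume",      False),  # missile
    ("acquire track with onboard radar", False),  # missile
    ("compute intercept and engage",     False),  # missile
]

def classify(loop):
    """Classify human involvement for a chosen loop boundary,
    given a list of (stage, human_involved) pairs."""
    if all(human for _, human in loop):
        return "human in-the-loop"
    if not any(human for _, human in loop):
        return "human out-of-the-loop"
    return "mixed: the answer depends on the boundary chosen"

# Draw the loop around the launch decision only:
print(classify(ENGAGEMENT_SEQUENCE[:3]))  # human in-the-loop
# Draw it around the terminal guidance only:
print(classify(ENGAGEMENT_SEQUENCE[3:]))  # human out-of-the-loop
# Draw it around the whole engagement:
print(classify(ENGAGEMENT_SEQUENCE))      # mixed
```

Nothing in the data changes between the three calls; only where the boundary is drawn does.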

------------------------------------------------------------------------

In general, unless we are careful to define which loop we have in mind, saying that some system is “human in-the-loop”, “on-the-loop”, “beside-the-loop” or “out-of-the-loop” is deeply ambiguous. Where the loop is drawn has implications for oversight and responsibility.

Given the challenges of defining what we mean by “in-the-loop”, it is perhaps not surprising that, at least as far as US policy goes, the DoD has not committed itself in this language to always having a human in-the-loop in autonomous weapon systems. There is, however, one exception: across all of DoD strategy and policy, there is exactly one mention of a human in-the-loop. Whether or not that is reassuring, the US will always have a human in the loop when it comes to nuclear weapons. The 2022 Nuclear Posture Review states: “In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.” This highlights some of the challenges of AI when thinking at the jus ad bellum level of war.

It is also essential to consider human control throughout the lifecycle of any weapon system: from the political control that precedes and underpins the whole process, through research and development, testing and evaluation, and deployment, to actual use. One needs to look at the practical activities that occur throughout the lifecycle of a weapon system and how they collectively contribute to human control over weapon systems and compliance with International Humanitarian Law.

See card: What is the difference between "meaningful human control" and "appropriate human control"? 

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or for the other processes set out in JSP 936 Part 1 (Dependable AI), the MOD’s policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology, and among development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.