Principles into Practice 12
What is the difference between "meaningful human control" and "appropriate human control"?
Filed under:
Responsibility

In the context of artificial intelligence (AI), "meaningful human control" (MHC) and "appropriate human control" (AHC) both describe the need for human oversight of AI systems, but they differ in scope and application:
  1. Meaningful human control concerns the quality and depth of control, ensuring that oversight is substantial and effective across all applications,
  2. Whereas appropriate human control tailors the degree of oversight to the specific context and use case.
Understanding this distinction is critical for developing AI systems that are both effective and ethically sound, ensuring human oversight is applied appropriately across diverse scenarios.
  
 
1. Meaningful human control 
Meaningful human control refers to a framework ensuring that AI systems operate under human oversight in a way that allows for accountability and ethical responsibility. It emphasises that humans should have the ability to understand, monitor, and, if necessary, intervene in the operations of AI systems. Key aspects of MHC include predictability, ensuring the AI behaves in a foreseeable manner; transparency, making AI decision-making accessible and comprehensible; and intervention capability, allowing humans to override or alter AI actions when needed. This concept is particularly significant in high-stakes domains such as autonomous weapons and healthcare, where the consequences of AI actions can be profound. For example, MHC in autonomous weapons ensures that human operators can supervise and control deployment and engagement decisions, maintaining ethical and legal accountability (Article 36, 2016; Crootof, 2016).
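To make the "intervention capability" aspect concrete for development teams, the sketch below shows a minimal human-in-the-loop gate in which an AI proposal is always surfaced with its rationale and requires explicit human approval before execution. This is an illustrative sketch only, not MOD guidance or a standard API; all names (`Decision`, `ai_recommendation`, `human_review`) and the example values are hypothetical.

```python
# Minimal, illustrative human-in-the-loop gate (hypothetical names throughout).
# Demonstrates the three MHC aspects named above: predictability (confidence),
# transparency (rationale), and intervention capability (human veto).
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # what the AI proposes to do
    rationale: str     # transparency: a human-readable explanation
    confidence: float  # predictability: how sure the model claims to be

def ai_recommendation() -> Decision:
    # Stand-in for a real model; returns a proposed action plus rationale.
    return Decision(action="flag item for inspection",
                    rationale="anomaly score 0.92 exceeds threshold 0.80",
                    confidence=0.92)

def human_review(decision: Decision) -> bool:
    # Intervention capability: a human sees the rationale and can veto.
    print(f"Proposed: {decision.action}")
    print(f"Why: {decision.rationale} (confidence {decision.confidence:.2f})")
    return input("Approve? [y/n] ").strip().lower() == "y"

decision = ai_recommendation()
if human_review(decision):   # the AI never acts without human sign-off
    print(f"Executing: {decision.action}")
else:
    print("Overridden by human operator; no action taken.")
```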
  
 
2. Appropriate human control 
Appropriate human control, on the other hand, tailors the level of human oversight to the specific context and function of the AI system. It recognises that different applications of AI may require varying degrees of human involvement. The appropriateness of control is determined by factors such as the potential risks, the criticality of the decisions the AI makes, and the system's reliability. Key considerations include contextual relevance, aligning control with the specific use case; risk assessment, increasing oversight for higher-risk applications; and system reliability, allowing systems with proven reliability to operate with less direct human control, as sketched below. For instance, in medical diagnostics, AHC involves ensuring that AI tools assist clinicians without replacing their judgment, maintaining a balance that prioritises patient safety (Roff, 2021).
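The core idea of AHC, that the oversight mode should follow from assessed risk and demonstrated reliability, can be illustrated with a simple selection function. This is a sketch under invented assumptions: the oversight tiers, the numeric thresholds, and the `select_oversight` function are placeholders for illustration, not drawn from JSP 936 or any MOD policy.

```python
# Illustrative sketch: tailoring oversight to context, the core idea of AHC.
# Tiers and thresholds are arbitrary placeholders, not policy.
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human approves every action"
    HUMAN_ON_THE_LOOP = "human monitors and can intervene"
    HUMAN_OUT_OF_LOOP = "periodic audit only"

def select_oversight(risk: float, reliability: float) -> Oversight:
    """Pick an oversight mode from assessed risk (0-1) and demonstrated
    system reliability (0-1). Thresholds here are invented examples."""
    if risk >= 0.7:                       # high-stakes decisions: always gated
        return Oversight.HUMAN_IN_THE_LOOP
    if risk >= 0.3 or reliability < 0.9:  # moderate risk or unproven system
        return Oversight.HUMAN_ON_THE_LOOP
    return Oversight.HUMAN_OUT_OF_LOOP    # low risk, well-proven system

# Example: a diagnostic-support tool is moderate risk even when reliable,
# so the clinician stays on the loop rather than being replaced.
print(select_oversight(risk=0.5, reliability=0.95).value)
```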
 
Article 36. (2016). "Meaningful Human Control in the Use of Autonomous Weapons Systems." Retrieved from https://article36.org.
Crootof, R. (2016). "The Killer Robots Are Here: Legal and Policy Implications." Cardozo Law Review, 36(1), 1837–1916. 
Roff, H. M. (2021). Advancing Human Control over Autonomous Systems: Trust, Context, and Responsibility. Oxford University Press.

See card: What is meant by “an accountability gap”? 

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or other processes set out in JSP 936 Part 1 (Dependable AI), the MOD's policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology and development teams within academia and industry, and demonstrates our commitment to the practical implementation of our AI ethics principles.