In the context of artificial intelligence (AI), "meaningful human control" (MHC) and "appropriate human control" (AHC) both describe the necessity of human oversight over AI systems, but they differ in scope and application. While both concepts advocate for human oversight, they place the emphasis differently:
- Meaningful human control focuses on the quality and depth of control, ensuring substantial and effective oversight across all applications.
- Appropriate human control adjusts the degree of oversight based on the specific use case and context.
Understanding this distinction is critical for developing AI systems that are both effective and ethically sound, and for ensuring that human oversight is applied appropriately across diverse scenarios.
1. Meaningful human control
Meaningful human control refers to a framework ensuring that AI systems operate under human oversight in a way that allows for accountability and ethical responsibility. It emphasises that humans should have the ability to understand, monitor, and, if necessary, intervene in the operations of AI systems. Key aspects of MHC include predictability, ensuring the AI behaves in a foreseeable manner; transparency, making AI decision-making accessible and comprehensible; and intervention capability, allowing humans to override or alter AI actions when needed. This concept is particularly significant in high-stakes domains such as autonomous weapons and healthcare, where the consequences of AI actions can be profound. For example, MHC in autonomous weapons ensures that human operators can supervise and control deployment and engagement decisions, maintaining ethical and legal accountability (Article 36, 2016; Crootof, 2016).
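To make the intervention-capability aspect concrete, the sketch below is a minimal, hypothetical Python illustration: the names (`Recommendation`, `human_in_the_loop`, `cautious_operator`) are assumptions introduced here, not part of any cited framework. An AI recommendation becomes an action only after a human operator approves or overrides it, and each step is logged so that decisions remain auditable.

```python
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mhc_gate")

@dataclass
class Recommendation:
    """An AI system's proposed action, with an explanation for transparency."""
    action: str
    rationale: str     # human-readable explanation of the proposal
    confidence: float  # the AI's self-reported confidence, 0..1

def human_in_the_loop(rec: Recommendation,
                      operator_review: Callable[[Recommendation], str]) -> str:
    """Execute an AI recommendation only after explicit human review.

    The operator can approve the action, substitute another, or abort.
    Every step is logged so decisions stay auditable (accountability).
    """
    log.info("AI proposed %r (confidence %.2f): %s",
             rec.action, rec.confidence, rec.rationale)
    decision = operator_review(rec)  # human override point
    if decision == "approve":
        log.info("Operator approved action %r", rec.action)
        return rec.action
    log.info("Operator overrode AI; executing %r instead", decision)
    return decision

# Example operator policy: reject low-confidence recommendations.
def cautious_operator(rec: Recommendation) -> str:
    return "approve" if rec.confidence >= 0.9 else "abort"

if __name__ == "__main__":
    rec = Recommendation(action="flag_case_for_review",
                         rationale="matched known pattern", confidence=0.72)
    print("Final action:", human_in_the_loop(rec, cautious_operator))
```

The logging mirrors the transparency requirement: the AI's rationale is recorded alongside the operator's decision, so the chain of responsibility can be reconstructed afterwards.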
2. Appropriate human control
On the other hand, appropriate human control focuses on tailoring the level of human oversight to the specific context and function of the AI system. It recognises that different applications of AI may require varying degrees of human involvement. The appropriateness of control is determined by factors such as potential risks, the criticality of decisions made by the AI, and the system's reliability. Key considerations include contextual relevance, aligning control with the specific use case; risk assessment, increasing oversight for higher-risk applications; and system reliability, permitting systems with a proven track record to operate under less direct human control. For instance, in medical diagnostics, AHC involves ensuring that AI tools assist clinicians without replacing their judgement, maintaining a balance that prioritises patient safety (Roff, 2021).
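As a rough illustration of how risk assessment and system reliability could jointly determine the degree of oversight, the following Python sketch maps a risk tier and a measured reliability score to an oversight mode. The tiers, thresholds, and names (`Oversight`, `select_oversight`) are illustrative assumptions, not taken from Roff (2021) or any regulatory framework.

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "every decision requires human approval"
    HUMAN_ON_THE_LOOP = "human monitors and may intervene"
    AUTONOMOUS = "system acts alone; periodic audits only"

def select_oversight(risk: str, reliability: float) -> Oversight:
    """Map contextual risk and measured reliability to an oversight mode.

    risk: "low", "medium", or "high", from a domain risk assessment.
    reliability: fraction of past decisions validated as correct (0..1).
    The thresholds below are illustrative, not normative.
    """
    if risk == "high":
        # High-stakes contexts (e.g. diagnosis, targeting) always keep
        # a human in the loop, regardless of measured reliability.
        return Oversight.HUMAN_IN_THE_LOOP
    if risk == "medium":
        return (Oversight.HUMAN_ON_THE_LOOP if reliability >= 0.99
                else Oversight.HUMAN_IN_THE_LOOP)
    # Low-risk tasks may run autonomously once reliability is proven.
    return (Oversight.AUTONOMOUS if reliability >= 0.95
            else Oversight.HUMAN_ON_THE_LOOP)

print(select_oversight("high", 0.999))  # Oversight.HUMAN_IN_THE_LOOP
print(select_oversight("low", 0.97))    # Oversight.AUTONOMOUS
```

The design point is that oversight is a function of both context and evidence: a highly reliable system in a high-risk domain still keeps a human in the loop, whereas the same reliability in a low-risk domain can justify autonomy.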
References

Article 36. (2016). *Meaningful Human Control in the Use of Autonomous Weapons Systems*. Retrieved from [Article 36 website](https://article36.org).

Crootof, R. (2016). The Killer Robots Are Here: Legal and Policy Implications. *Cardozo Law Review*, 36(1), 1837–1916.

Roff, H. M. (2021). *Advancing Human Control over Autonomous Systems: Trust, Context, and Responsibility*. University of Oxford Press.