When harm seems inevitable in a given situation, the Doctrine of Double Effect offers a framework for considering moral responsibility in complex ethical dilemmas.
The Doctrine of Double Effect was originally formulated by the philosopher Thomas Aquinas. It applies to scenarios where actions aimed at achieving a good outcome may also lead to harmful side effects. The doctrine provides a means to judge when it is morally acceptable to proceed with such actions.
This concept has contemporary relevance, particularly in fields like artificial intelligence, where systems may need to balance competing interests. Examples include autonomous vehicles, military technologies, and content moderation algorithms.
2. Key principles of the Doctrine
Intent matters: For an action to be morally acceptable, harm must not be the intended goal but rather an unintended side effect of pursuing a legitimate objective. For example, in military operations, targeting an enemy's resources may unintentionally result in civilian casualties. While the harm may be foreseeable, it is not the intended outcome, so the action might still be acceptable (provided the other principles below are also satisfied). In AI: harmful consequences must not be the intended outcome of a system's decisions; they should be unintended byproducts of fulfilling the system's primary purpose.
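To make this principle more concrete for development teams, the sketch below shows one way "intent" can be reflected in how a system's objective is specified: harm only ever reduces the objective and is never rewarded, so any harm that occurs is a side effect rather than a goal. The function name, objective terms, weights, and values are illustrative assumptions for discussion, not part of any MOD or Dstl guidance.

```python
# Illustrative sketch only: the terms and weights are assumptions for discussion,
# not a real system's objective specification.

def mission_objective(outcome: dict) -> float:
    """Score an outcome so that benefit is rewarded and harm is only ever penalised.

    Because the harm term carries a strictly non-positive weight, the system is
    never optimised towards causing harm; any harm that arises is a side effect
    of pursuing the mission benefit, not an intended outcome.
    """
    BENEFIT_WEIGHT = 1.0
    HARM_WEIGHT = -2.0  # strictly non-positive: harm can never increase the score
    return (BENEFIT_WEIGHT * outcome["mission_benefit"]
            + HARM_WEIGHT * outcome["collateral_harm"])

print(mission_objective({"mission_benefit": 1.0, "collateral_harm": 0.0}))  # 1.0
print(mission_objective({"mission_benefit": 1.0, "collateral_harm": 0.3}))  # 0.4
```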
Proportionality: The harm caused must be proportionate to the good achieved. For example, an AI system designed to reduce misinformation must carefully weigh the harm of false positives (the suppression of truthful content) against the societal benefit of limiting misinformation.
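In the simplest terms, a proportionality judgement can be framed as an explicit comparison of expected harm against expected benefit before an action is taken. The sketch below illustrates this framing; the class and function names, estimates, and harm weighting are illustrative assumptions for discussion, not a prescribed method of assessing proportionality.

```python
# Illustrative sketch only: names, numbers, and the weighting are assumptions
# for discussion, not a prescribed proportionality assessment.
from dataclasses import dataclass


@dataclass
class ModerationEstimate:
    expected_benefit: float  # e.g. estimated reduction in exposure to misinformation
    expected_harm: float     # e.g. estimated suppression of truthful content (false positives)


def is_proportionate(estimate: ModerationEstimate, harm_weight: float = 1.0) -> bool:
    """Return True only if the expected good outweighs the expected harm.

    harm_weight scales how heavily harm counts relative to benefit; values
    above 1.0 make the check more conservative.
    """
    return estimate.expected_benefit > harm_weight * estimate.expected_harm


# A takedown expected to prevent more harm than it causes passes the check.
print(is_proportionate(ModerationEstimate(expected_benefit=0.8, expected_harm=0.3)))  # True
print(is_proportionate(ModerationEstimate(expected_benefit=0.2, expected_harm=0.5)))  # False
```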
Distinction: There must be a clear distinction between legitimate and illegitimate targets. For example, in war, combatants are legitimate targets, whereas civilians are not. Actions failing to respect this distinction are unethical and illegal. In AI: developers must ensure systems can reliably differentiate between legitimate and illegitimate targets.
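In system terms, distinction implies that a system should act only when it can clearly separate legitimate from illegitimate targets, and should abstain or refer to a human when it cannot. The sketch below illustrates one such decision rule; the labels, confidence threshold, and escalation behaviour are illustrative assumptions, not a prescribed targeting or moderation policy.

```python
# Illustrative sketch only: labels, threshold, and escalation behaviour are
# assumptions for discussion, not a prescribed policy.
from typing import Literal

Decision = Literal["act", "do_not_act", "refer_to_human"]


def apply_distinction(predicted_label: str, confidence: float,
                      confidence_threshold: float = 0.95) -> Decision:
    """Act only when the system can clearly identify a legitimate target.

    If the classification is uncertain, the decision is referred to a human
    rather than risking action against an illegitimate target.
    """
    if confidence < confidence_threshold:
        return "refer_to_human"
    if predicted_label == "legitimate_target":
        return "act"
    return "do_not_act"


# Low confidence never results in action, whatever the predicted label.
print(apply_distinction("legitimate_target", confidence=0.70))  # refer_to_human
print(apply_distinction("legitimate_target", confidence=0.99))  # act
print(apply_distinction("protected_entity", confidence=0.99))   # do_not_act
```

Referring uncertain cases to a human is only one possible design choice; the point for discussion is that uncertainty about the distinction should never default to action.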
The Doctrine of Double Effect can help us navigate moral grey areas where both action and inaction can have serious consequences. It serves as a reminder that ethical reasoning is not only about what we achieve, but also about how we achieve it and at what cost to others.
This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion among project teams involved in the development of AI tools and techniques within the MOD. It is hoped that this will result in increased awareness of the MOD's AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed from the earliest stages of a project's lifecycle and throughout it. This tool has not been designed to be used outside of this context.
The use of this information does not negate the need for an ethical risk assessment or the other processes set out in the Dependable AI JSP 936 Part 1, the MOD's policy on responsible AI use and development. This training tool has been published to encourage wider discussion and awareness of AI ethics across MOD science and technology and among development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.