Many people hold a strong belief that some actions are quite clearly right, or manifestly wrong, and that this judgment does not change when the situation varies, when the stakes are higher or lower, or when the people involved are important or powerful individuals.
Deontological ethical reasoning agrees with this belief: it argues that certain actions are inherently right or wrong, irrespective of context or consequences. Rooted in principles of duty and universal moral laws, this approach emphasises the intrinsic morality of actions rather than their outcomes. In this framework, actions such as murder, torture, or lying are considered universally wrong because they violate fundamental human rights or moral principles. In the realm of AI development, these principles take on heightened importance. Because deontology focuses on the morality of the action itself and on universal rights and duties, it offers valuable insights for creating AI systems that interact with humans and make decisions, and it is particularly relevant for establishing ethical AI frameworks.
1. What are the core principles of deontological reasoning?
2. How can we apply deontology to ethical decision making?
3. What are the challenges of deontological reasoning?
4. How can we balance deontological and consequentialist approaches?
1. Core principles of deontological reasoning
- Universality: Moral rules apply universally, without exception. They are not contingent on cultural, regional, or situational factors. In AI, this could translate into hard-coded ethical constraints, such as ensuring that autonomous systems never engage in actions deemed universally harmful (such as discrimination or privacy violations), regardless of potential benefits. A minimal sketch of such a constraint check follows this list.
- Focus on the action itself: Deontological ethics evaluates the morality of actions based on whether they align with duties and principles, rather than outcomes. AI systems must evaluate the morality of their decisions according to predefined ethical standards rather than purely on outcomes. For example, an AI deciding whether to prioritise efficiency over fairness should adhere to fairness as an absolute principle.
- Respect for rights: Deontological ethics emphasises that certain rights—such as the right to life, liberty, and freedom from torture—are absolute and cannot be overridden, even for greater societal benefits. AI must uphold universal human rights such as privacy, freedom from harm, and autonomy.
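To make the idea of hard-coded ethical constraints concrete, here is a minimal, hypothetical Python sketch. The forbidden categories and names (CandidateAction, is_permissible) are illustrative assumptions; the point is only that the check rules an action out regardless of its expected benefit.

```python
from dataclasses import dataclass

# Hypothetical sketch: deontological rules as hard constraints that are
# checked before any benefit or outcome estimate is considered.
FORBIDDEN_CATEGORIES = {
    "discrimination",      # violates equal treatment
    "privacy_violation",   # violates the right to privacy
    "deliberate_harm",     # violates freedom from harm
}

@dataclass
class CandidateAction:
    name: str
    categories: set          # ethical categories the action falls under
    expected_benefit: float  # deliberately ignored by the check below

def is_permissible(action: CandidateAction) -> bool:
    """Rule the action out if it falls under any forbidden category,
    no matter how large its expected benefit is."""
    return not (action.categories & FORBIDDEN_CATEGORIES)

# A highly 'beneficial' but discriminatory action is still blocked:
profiling = CandidateAction("profile_users_by_ethnicity", {"discrimination"}, 0.9)
fallback = CandidateAction("show_generic_recommendation", set(), 0.4)
print(is_permissible(profiling))  # False
print(is_permissible(fallback))   # True
```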
2. Application of deontology in ethical decision-making
Absolute moral duties: Actions such as murder are always wrong, irrespective of the potential benefits to others. For instance, deliberately killing an enemy combatant who surrenders during a military operation violates their right to life under the principle of hors de combat. Many of these absolute duties are also codified into legal frameworks.
Human rights as red lines: Rights such as freedom of speech and parental rights are upheld as absolutes but are limited when they harm others (e.g., inciting violence or abusing children).
Practical contexts: In military scenarios, deontology offers guidance by prohibiting deliberate harm to civilians while allowing defensive harm if someone poses an imminent threat.
Conflict resolution: Deontological reasoning can guide AI in navigating ethical dilemmas where conflicting rights arise; a sketch of one possible encoding follows the examples below. For instance:
- Should an AI prioritise the right to free speech over the need to prevent hate speech?
- How should a self-driving car resolve conflicts between the safety of passengers and pedestrians?
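Deontology itself does not dictate how such conflicts should be resolved, but one common engineering choice is an explicit priority ordering among duties. The hypothetical Python sketch below illustrates that choice; the duty names and the ordering are assumptions, and the ordering itself is a value judgment that would require human oversight rather than something the theory supplies.

```python
# Hypothetical sketch: conflict resolution via a fixed (lexical) priority
# ordering, so a higher-priority duty always prevails. Lower number = higher priority.
DUTY_PRIORITY = {
    "prevent_incitement_to_violence": 0,
    "protect_user_safety": 1,
    "respect_free_expression": 2,
}

def resolve(duties_in_conflict: list) -> str:
    """Return the duty that prevails under the fixed priority ordering."""
    known = [d for d in duties_in_conflict if d in DUTY_PRIORITY]
    if not known:
        raise ValueError("no recognised duty; escalate to human review")
    return min(known, key=lambda d: DUTY_PRIORITY[d])

# A moderation decision where free expression clashes with preventing incitement:
print(resolve(["respect_free_expression", "prevent_incitement_to_violence"]))
# -> prevent_incitement_to_violence
```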
3. Challenges of deontological reasoning
Conflicting rights: When rights conflict, such as balancing freedom of speech with public safety, deontology requires nuanced judgment. AI systems are likely to encounter such conflicts frequently. For example:
- An AI moderation system might face a conflict between protecting free expression and ensuring user safety from harmful content.
- In healthcare, an AI might need to balance patient confidentiality against the need to alert authorities about a contagious disease.
Rigid principles: Absolute rules, such as Kant’s assertion that one must always tell the truth, even under threat, can lead to impractical or harmful outcomes. For example, in a military context, legitimate deception designed to mislead an opponent is a form of lying, yet it is widely accepted in war. Hard-coding deontological principles can likewise lead to inflexibility in dynamic contexts. For instance, an AI that respects autonomy and refuses to manipulate user behaviour under any circumstances might fail to intervene when such an action could prevent harm (e.g., discouraging suicidal ideation).
Ignoring consequences: Solely focusing on the morality of actions without considering outcomes can result in significant harm. For example, not warning civilians about an impending military strike to uphold operational secrecy might lead to avoidable casualties.
4. Balancing deontological and consequentialist approaches
While deontology emphasises the inviolability of rights and justice, consequentialist ethics considers the outcomes of actions. Balancing these perspectives allows for more practical decision-making in complex scenarios, and this can be even more effective when combined with virtue ethics considerations (see the cards: What is Virtue Ethics and why is it important?, How do you make a good ethical decision?).
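One way this balance is often sketched in practice is to treat deontological red lines as hard filters and apply consequentialist scoring only to the options that survive them. The hypothetical Python sketch below illustrates that pattern; the names (Option, violates_red_line, expected_outcome_score) and the example options are illustrative assumptions, not an established method.

```python
from typing import NamedTuple, Optional

class Option(NamedTuple):
    description: str
    red_lines_crossed: frozenset   # e.g. {"privacy_violation"}
    expected_outcome_score: float  # consequentialist estimate of net benefit

def violates_red_line(option: Option) -> bool:
    return len(option.red_lines_crossed) > 0

def decide(options: list) -> Optional[Option]:
    # 1. Deontological step: discard anything that crosses a red line.
    permissible = [o for o in options if not violates_red_line(o)]
    # 2. Consequentialist step: among permissible options, prefer the best outcome.
    return max(permissible, key=lambda o: o.expected_outcome_score, default=None)

options = [
    Option("share patient data without consent", frozenset({"privacy_violation"}), 0.8),
    Option("request consent before sharing", frozenset(), 0.6),
]
chosen = decide(options)
print(chosen.description if chosen else "escalate: no permissible option")
# -> request consent before sharing
```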