Ethics Foundations
What is consequentialism - can the ends justify the means in certain situations?
Filed under: Understanding, Bias and Harm Mitigation

When grappling with the ethical question of whether the ends justify the means, utilitarianism, a consequentialist ethical theory, provides a structured lens through which to explore the issue. In AI development, it offers a valuable framework for analysing certain kinds of dilemmas. Below, we apply utilitarian principles to AI development and highlight the considerations relevant to creating ethical AI systems.

1. What is utilitarianism and what does it teach us about the ends and the means?
2. What are the advantages of utilitarian thinking?
3. Can the ends justify the means?
 
Case Study 1 
The submarine HMS Sinky has a fire on board, and Alex the Sailor is sent into the compartment to extinguish it. Unfortunately, Alex is unsuccessful, and the fire is about to spread to the next compartment, in which 5 sailors are trapped. If it does, those sailors will die. Being an older vessel, HMS Sinky has a halon gas fire suppression system that is still operational, but it must be activated right now to be effective. The compartment is so noisy that Alex cannot hear the order to leave, and Alex has no gear that will protect them from the gas. Do you activate the halon fire suppression system to save the 5 sailors trapped below decks, knowing it will also kill Alex? If your answer is yes, it is likely informed by the calculation that it is better to save 5 lives than 1, and utilitarians would agree with you.
 
HMS Sinky
  

1. What is utilitarianism and what does it teach us about the ends and the means? 
Core Principle: Utilitarianism asserts that the moral value of an action lies in its consequences. The right action is the one that maximises overall happiness and minimises suffering. In utilitarian terms, the morality of AI development lies in its outcomes. The “right” AI system is one that maximises overall benefits (e.g., social well-being, safety, efficiency) while minimising harms (e.g., privacy violations, bias, job loss).
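To make this concrete for a development audience, here is a minimal sketch (in Python) of the naive utilitarian calculation behind Case Study 1. It is illustrative only: the outcome names and weights are invented for this card, and real ethical deliberation cannot be reduced to a scoring function.

    # A naive utilitarian tally: sum the signed well-being effects of
    # each option's outcomes and pick the option with the highest total.
    # All numbers are hypothetical, mirroring Case Study 1 (5 lives vs 1).

    def utilitarian_score(outcomes: dict[str, float]) -> float:
        """Sum the (signed) well-being effects of an action's outcomes."""
        return sum(outcomes.values())

    activate = {"sailors_saved": +5, "alex_killed": -1}       # score +4
    do_nothing = {"sailors_killed": -5, "alex_survives": +1}  # score -4

    best = max([activate, do_nothing], key=utilitarian_score)
    print(best is activate)  # True: the naive tally favours activation

Notice that the function has no concept of how the lives are saved or lost; that blind spot is exactly what Case Study 2 below exposes.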

2. What are the advantages of utilitarian thinking?
Utilitarianism shows how ethical decision-making can prioritise the greater good, offering a rational, outcome-focused approach to challenging scenarios. Its main advantages are:
  • Fairness: Every individual's happiness is weighed equally, so on the face of it, no groups should be unfairly neglected or prioritised – it doesn’t matter if you are rich or poor, one of “us” or one of “them”.
  • Practicality: Decisions are grounded in outcomes that maximise well-being, which encourages developers to prioritise AI systems that provide the greatest societal benefit (e.g., healthcare diagnostics that save the maximum number of lives).
  • Universality: Its principles apply across cultures, focusing on universal concepts like pleasure and pain. 


3. Can the ends justify the means? 
Case Study 2
The submarine HMS Sinky is on patrol, far from home or any hospital. A terrible accident on board leaves 5 sailors with critical injuries. Fortunately, the boat has an excellent surgeon, and Charlie the Sailor is a perfect tissue match for the 5 horribly injured personnel. If the surgeon operates on Charlie immediately and transplants the organs and tissue the casualties need, they could save 5 lives, but at the expense of Charlie’s life. Should the Captain order Charlie to be sacrificed?
Although the situation is different, in some ways it asks the same question as the earlier case study: should you prioritise the many over the few? That would be utilitarian logic. However, most people sense that something about this situation is different…but what is it?
 
The sick bay on HMS Sinky
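One way to see what is missing: a naive utilitarian tally (reusing the shape of the earlier sketch, with the numbers again purely illustrative) assigns both case studies exactly the same score, even though most people judge them very differently.

    # Illustrative only: the naive score cannot distinguish Case Study 1
    # from Case Study 2 -- both reduce to +5 lives saved, -1 life lost.

    case_1 = {"sailors_saved": +5, "alex_killed": -1}      # halon system
    case_2 = {"patients_saved": +5, "charlie_killed": -1}  # forced surgery

    print(sum(case_1.values()) == sum(case_2.values()))  # True: same score

Whatever distinguishes the cases (consent, intent, using a person purely as a means) is invisible to an outcome-only calculation.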

While utilitarianism often provides clear guidance, it also faces significant challenges, and these would be replicated in any AI system that is heavily informed by it. An unqualified, unrefined version of consequentialism asks only “what is at stake?”, and on that basis it could be used to justify almost anything:
  • Counterintuitive results: Utilitarian reasoning can lead to morally troubling conclusions, such as sacrificing an innocent person (e.g., Charlie the Sailor) for the greater good. Utilitarian logic might justify harmful means, such as using biased datasets to achieve broader functionality, which conflicts with principles of fairness.
  • Ethical slippery slopes: By focusing solely on outcomes, it risks justifying actions like torture or non-consensual medical experiments if they serve a larger goal. Utilitarianism might justify invasive practices (e.g., surveillance) if perceived societal benefits outweigh privacy violations.
  • Epistemic uncertainty: Epistemology is concerned with what we know and how we know it. How can we predict future outcomes with certainty? Mistakes in these predictions can lead to harmful decisions, especially as it is natural to overemphasise positives at the expense of negatives. Epistemic uncertainty is a significant challenge when predicting AI’s consequences; the sketch after this list shows how a small prediction error can flip a utilitarian verdict.
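To make the epistemic point concrete, the sketch below (using invented probabilities and utilities, not drawn from this card) shows how a small error in a predicted success rate can reverse a naive expected-utility decision.

    # Illustrative only: epistemic uncertainty can flip a naive
    # expected-utility decision. All numbers are invented.

    def expected_utility(p_success: float, u_success: float,
                         u_failure: float) -> float:
        return p_success * u_success + (1 - p_success) * u_failure

    # An intervention we believe saves 5 lives 25% of the time,
    # at a certain cost of 1 life.
    believed = expected_utility(0.25, +5.0, 0.0) - 1  # +0.25 -> act
    actual = expected_utility(0.15, +5.0, 0.0) - 1    # -0.25 -> do not act

    print(f"believed net utility: {believed:+.2f}")
    print(f"actual net utility:   {actual:+.2f}")

A 10-point error in the probability estimate reverses the verdict; any system acting on such estimates needs to account for that uncertainty, not just the point prediction.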
While utilitarian principles are useful, they often need to be supplemented by other ethical frameworks to ensure the outcomes do not become distorted. As later cards explore (What is deontology - are some actions just always right or wrong?; What is Virtue Ethics and why is it important?; How do you make a good ethical decision?), incorporating principles from deontology (focusing on rights and duties) or virtue ethics (focusing on character and intentions) helps mitigate the risks of utilitarian overreach.

AI was used to generate the illustrations in this card.

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment or the other processes set out in the Dependable AI JSP 936 Part 1, the MOD’s policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology teams and development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.