Principles into Practice
What does “human flourishing” have to do with AI?
Filed under:
Human Centricity

Human flourishing refers to a state in which individuals experience optimal well-being, functioning, and fulfilment across the various aspects of life. It resonates strongly with the Human Centricity principle in the UK MoD's AI Ethics framework. The concept is rooted in Aristotelian philosophy, where the term "eudaimonia", often translated as "human flourishing", encompasses living in accordance with virtue and realising one's potential.

  1.  What does “human flourishing” cover? 
  2.  What does this mean for AI in Defence? 
  3.  How can I incorporate such an idea into my AI-enabled system? 

What does “human flourishing” cover?
In contemporary discussions, human flourishing is viewed as a multifaceted concept that includes:
• Happiness and Life Satisfaction: Experiencing positive emotions and a sense of contentment with life.
• Health: Maintaining both mental and physical well-being.
• Meaning and Purpose: Engaging in activities that provide a sense of direction and significance.
• Character and Virtue: Cultivating personal strengths and moral excellence.
• Close Social Relationships: Building and sustaining deep, meaningful connections with others.

In positive psychology, flourishing is defined as "when people experience positive emotions, positive psychological functioning and positive social functioning, most of the time," living "within an optimal range of human functioning." Overall, human flourishing denotes a life in which suffering is minimised and individuals fully express their essential human capacities, leading to a state of optimal functioning and well-being.

What does this mean for AI in Defence?
The adoption of artificial intelligence (AI) presents both opportunities and challenges for human flourishing. AI has the potential to enhance well-being by improving healthcare outcomes through advanced diagnostics, personalised treatments, and mental health support systems (Topol, 2019). It can also boost efficiency and productivity by automating repetitive tasks, allowing individuals to focus on more meaningful and creative work (Brynjolfsson & McAfee, 2014). AI-driven educational platforms can personalise learning experiences, fostering intellectual growth and improving access to knowledge (Luckin et al., 2016). Furthermore, AI can aid decision-making by providing data-driven insights and foster social connections through translation tools and recommendation systems (Floridi & Cowls, 2019).

However, AI also poses significant risks to human flourishing, particularly when considered in a defence setting. The increasing reliance on AI can lead to privacy erosion, as data-driven algorithms collect and utilise personal information, raising concerns about surveillance and data security (Zuboff, 2019). Economic disruption is another concern, with AI-driven automation threatening to displace jobs, potentially exacerbating economic inequality and social unrest (Frey & Osborne, 2017). Additionally, AI systems can perpetuate societal biases if they are trained on biased data, reinforcing discrimination in hiring, lending, and law enforcement (O’Neil, 2016). The loss of human agency is another potential downside, as over-reliance on AI may diminish critical thinking and decision-making skills (Carr, 2014).

How can I incorporate such an idea into my AI-enabled system?
To ensure AI supports rather than hinders human flourishing, it is essential to develop ethical guidelines, promote inclusive policies, enhance accountability, and encourage public engagement (Floridi et al., 2018). By implementing responsible AI governance and embedding ethical considerations such as those set out in the UK MoD's AI Ethics Principles, societies can harness AI's potential for positive transformation while mitigating its risks.
A capability approach, drawn from the work of Martha Nussbaum, can provide useful ethical guidance for the responsible use of AI in contexts such as the military (Jecker & Ko, 2022). Two tests, taken together, help determine whether a particular application is justified:

• Threshold Test: Does the technology reasonably protect people’s minimum capabilities? This covers a range of considerations, from health, sensory and emotional expression, through to the ability to reason and make one’s own choices.

• Flourishing Test: Does the technology increase people's capabilities and enable them to lead better lives as a result? This test relates to the abilities, hopes and desires of the individual, rather than treating them simply as a tool to be modified for the benefit of an organisation or third party.

While such a combination of tests clearly needs to be viewed through the lens of the military environment in which the system will be deployed, these questions provide a useful starting point when developing your AI-enabled system; a minimal sketch of how they might be recorded in practice follows below.
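
To make these tests concrete in a development workflow, the following Python sketch shows one way a project team might record per-capability verdicts for the Threshold and Flourishing Tests during an ethical review. It is purely illustrative: the class names, fields, and aggregation rule are our own assumptions and do not correspond to any MOD standard, library, or mandated process.

    # Illustrative sketch only. All names (Verdict, Finding,
    # CapabilityAssessment) are hypothetical and not part of any MOD
    # process, standard, or library.
    from dataclasses import dataclass, field
    from enum import Enum

    class Verdict(Enum):
        PASS = "pass"
        FAIL = "fail"
        NEEDS_REVIEW = "needs review"

    @dataclass
    class Finding:
        capability: str      # e.g. "health", "practical reason", "senses"
        threshold: Verdict   # Threshold Test: is this minimum capability protected?
        flourishing: Verdict # Flourishing Test: does the system expand this capability?
        rationale: str       # evidence or reasoning recorded by the review team

    @dataclass
    class CapabilityAssessment:
        system_name: str
        findings: list[Finding] = field(default_factory=list)

        def justified(self) -> bool:
            # One possible reading of "taken together": every capability
            # examined must clear the threshold, and the system should
            # expand at least one capability overall.
            clears_threshold = all(f.threshold is Verdict.PASS for f in self.findings)
            expands_some = any(f.flourishing is Verdict.PASS for f in self.findings)
            return bool(self.findings) and clears_threshold and expands_some

    # Example: recording one finding for a hypothetical decision-support aid.
    assessment = CapabilityAssessment(system_name="hypothetical planning aid")
    assessment.findings.append(Finding(
        capability="practical reason",
        threshold=Verdict.PASS,
        flourishing=Verdict.PASS,
        rationale="Operators retain final decision authority; the tool surfaces options.",
    ))
    print(assessment.justified())  # True

The aggregation rule here (every capability examined must clear the threshold, and at least one must be genuinely expanded) is only one possible reading of "taken together"; a real review would weigh such findings in the context of the deployment environment, as noted above.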


- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.
- Carr, N. (2014). The Glass Cage: How Our Computers Are Changing Us. W. W. Norton & Company.
- Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review.
- Floridi, L., et al. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707.
- Frey, C. B., & Osborne, M. A. (2017). The Future of Employment: How Susceptible Are Jobs to Computerisation? Technological Forecasting and Social Change, 114, 254–280.
- Jecker, N. S., & Ko, A. (2022). The Unique and Practical Advantages of Applying a Capability Approach to Brain Computer Interface. Philosophy & Technology, 35, 101.
- Luckin, R., et al. (2016). Intelligence Unleashed: An Argument for AI in Education. Pearson Education.
- Nussbaum, M. C. (1992). Human functioning and social justice. Political Theory, 20(2), 202–246.
- O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing.
- Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.
- Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or the other processes set out in the Dependable AI JSP 936 Part 1, the MOD's policy on responsible AI use and development. This training tool has been published to encourage greater discussion and awareness of AI ethics across MOD science and technology and among development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.