Human flourishing refers to a state where individuals experience optimal well-being, functioning, and fulfilment across various aspects of life. It clearly resonates with the Human Centricity principle in the UK’s AI Ethics framework. Rooted in Aristotelian philosophy, the term "eudaimonia" is often translated as "human flourishing" and encompasses living in accordance with virtue and realising one's potential.
What does “human flourishing” cover?
In contemporary discussions, human flourishing is viewed as a multifaceted concept that includes:
• Happiness and Life Satisfaction: Experiencing positive emotions and a sense of contentment with life.
• Health: Maintaining both mental and physical well-being.
• Meaning and Purpose: Engaging in activities that provide a sense of direction and significance.
• Character and Virtue: Cultivating personal strengths and moral excellence.
• Close Social Relationships: Building and sustaining deep, meaningful connections with others.
In positive psychology, flourishing is defined as "when people experience positive emotions, positive psychological functioning and positive social functioning, most of the time," living "within an optimal range of human functioning." Overall, human flourishing denotes a life in which suffering is minimised, and individuals fully express their essential human capacities, leading to a state of optimal functioning and well-being.
What does this mean for AI in Defence?
The adoption of artificial intelligence (AI) presents both opportunities and challenges for human flourishing. AI has the potential to enhance well-being by improving healthcare outcomes through advanced diagnostics, personalised treatments, and mental health support systems (Topol, 2019). It also boosts efficiency and productivity by automating repetitive tasks, allowing individuals to focus on more meaningful and creative work (Brynjolfsson & McAfee, 2014). AI-driven educational platforms can personalise learning experiences, fostering intellectual growth and improving access to knowledge (Luckin et al., 2016). Furthermore, AI can aid decision-making by providing data-driven insights and fostering social connections through translation tools and recommendation systems (Floridi & Cowls, 2019).
However, AI also poses significant risks to human flourishing, particularly when considered in a defence setting. The increasing reliance on AI can lead to privacy erosion, as data-driven algorithms collect and utilise personal information, raising concerns about surveillance and data security (Zuboff, 2019). Economic disruption is another concern, with AI-driven automation threatening to displace jobs, potentially exacerbating economic inequality and social unrest (Frey & Osborne, 2017). Additionally, AI systems can perpetuate societal biases if they are trained on biased data, reinforcing discrimination in hiring, lending, and law enforcement (O’Neil, 2016). The loss of human agency is another potential downside, as over-reliance on AI may diminish critical thinking and decision-making skills (Carr, 2014).
How can I incorporate such an idea into my AI-enabled system?
To ensure AI supports rather than hinders human flourishing, it is essential to develop ethical guidelines, promote inclusive policies, enhance accountability, and encourage public engagement (Floridi et al., 2018). By implementing responsible AI governance and ethical considerations such as those found in the UK MoD’s AI Ethics Principles, societies can harness AI’s potential for positive transformation while mitigating its risks.
A capability approach, drawn from the work of Martha Nussbaum, can be applied to provide useful ethical guidance for the responsible use of AI in contexts such as the military. Two tests, taken together, establish what is required to justify a particular application:
• Threshold Test: Does the technology reasonably protect people’s minimum capabilities? This covers a range of considerations, from health, sensory and emotional expression, through to the ability to reason and make one’s own choices.
• Flourishing Test: Does the technology increase people’s capabilities and enable them to lead better lives as a result? The test relates to the abilities, hopes and desires of the individual, rather than treating them simply as a tool that can be modified for the benefit of an organisation or third party.
While such a combination of tests clearly needs to be viewed through the lens of the military environment in which the system will be deployed, these questions provide a useful starting point when developing your AI-enabled system.
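As one illustration of how this starting point might be operationalised, the minimal Python sketch below records the two tests as a structured checklist for a single use case. The capability names, the CapabilityAssessment class, and its methods are hypothetical examples, not a prescribed implementation; in practice the capability list and evidence criteria would be defined with your own ethics and assurance process.

```python
from dataclasses import dataclass, field

# Hypothetical minimum capabilities, drawn from the threshold considerations above.
# A real assessment would agree this list with the project's ethics review board.
THRESHOLD_CAPABILITIES = [
    "health",
    "sensory and emotional expression",
    "practical reason and own choices",
]


@dataclass
class CapabilityAssessment:
    """Records the outcome of the two capability tests for one AI use case."""

    use_case: str
    # Threshold Test: does the system reasonably protect each minimum capability?
    threshold_results: dict[str, bool] = field(default_factory=dict)
    # Flourishing Test: evidence that the system expands users' capabilities.
    flourishing_evidence: list[str] = field(default_factory=list)

    def passes_threshold_test(self) -> bool:
        # Every minimum capability must be protected for the threshold test to pass.
        return all(self.threshold_results.get(c, False) for c in THRESHOLD_CAPABILITIES)

    def passes_flourishing_test(self) -> bool:
        # The flourishing test asks for positive evidence of benefit to the individual,
        # not merely the absence of harm.
        return len(self.flourishing_evidence) > 0

    def is_justified(self) -> bool:
        # Both tests, taken together, are needed to justify the application.
        return self.passes_threshold_test() and self.passes_flourishing_test()


# Example usage with illustrative (not real) assessment inputs.
assessment = CapabilityAssessment(
    use_case="AI-assisted logistics planning",
    threshold_results={c: True for c in THRESHOLD_CAPABILITIES},
    flourishing_evidence=["frees planners to focus on higher-level judgement"],
)
print(assessment.is_justified())
```

Keeping the assessment as a simple, reviewable record like this makes it easier to revisit the two tests whenever the system, its operating environment, or its users change.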