Journey tree

UK MOD AI Ethics Principles: Getting Started

This toolkit has been developed as a flexible resource for understanding and applying AI ethics in UK Defence environments. You can jump in anywhere you like, but we suggest starting with card 1 from the Trunk: What are the UK MOD AI Ethics Principles and why do we have them?

You can easily move around the Knowledge Tree, which is organised to let you explore the subject as you wish while keeping a clear sense of where you are within the broader topic. Just click on Home to come back to the whole tree at any time. The Trunk of the tree introduces the UK's AI Ethics Principles. The Roots allow you to explore the foundations of broader ethical thinking and traditions, and how these have contributed to the AI Ethics Principles we have today. The Leaves demonstrate how to apply the top-level principles in practice, operationalising concepts in the real world. Finally, the Fruits hanging from the tree provide case studies of how the ethical risks of different AI systems might be discovered, assessed and mitigated.

About

Different people will want to start in different places and may require different journeys through this tool. The Knowledge Tree can be navigated as a sequential process, from foundational learning to applied case studies, but each part of the tree, and the individual elements within it, can also be explored independently. This flexibility allows you to tailor your learning experience to your immediate needs or areas of interest, whether you are seeking a broad understanding of the MOD AI Ethics Principles by starting at the Trunk, or wish to delve into the Leaves to explore some of the specific challenges in putting those Principles into practice.

The research that contributed to the development of this tool demonstrated that many of the STEM experts who develop AI systems for the military have not formally studied ethics before. They may also have little understanding of military organisational values and the impact these have on shaping the behaviour of those in the military. This is why we have included the Ethics Foundations as the Roots, showing how wider ethical thinking and professional norms have influenced the AI Ethics Principles we have today. The Fruits of the Tree are case studies demonstrating how JSP 936 can contribute to structured risk assessment, so that risks can be properly identified, considered and mitigated.

This tool builds on the Military Ethics Education Cards and Apps that were developed by Prof David Whetham, Director of the Centre for Military Ethics at King’s College London. These tools have been translated into 12 languages and have been used successfully by professional militaries around the world to promote ethics discussions and the sharing of best practice. We applied the same methodology here, conducting hundreds of interviews with stakeholders to develop the structure and content for AI developers working with Defence.

We hope that this resource will continue to expand with new material, links to useful resources, and appropriately anonymised real-world ethical risk assessments in the Fruits section. If you have helpful suggestions or feedback, please contact us via the feedback link below.

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI research project. It is intended to help generate discussion between project teams involved in the development of AI tools and techniques within MOD. It is hoped that this will result in increased awareness of the MOD's AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that they are considered and discussed from the earliest stages of a project's lifecycle and throughout. This tool has not been designed to be used outside of this context.
The use of this information does not negate the need for an ethical risk assessment, or the other processes set out in JSP 936 Part 1 (Dependable AI), the MOD's policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology teams and development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.

Produced in Partnership with

King's College London
Compass Ethics

Other Partners

Aleph Insights

Contact

Please send feedback, comments or suggestions to:
feedback@militaryethics.uk