Principles into Practice
How do you identify ethical risk?
Filed under:
Human Centricity
Responsibility
Understanding
Bias and Harm Mitigation
Reliability

Self-assessment. This starts with a five-question primer in JSP 936 Part 2, which provides a solid grounding for any AI-related project. It also includes a use case library, so you can draw on work already done in this area; there is no need to reinvent the wheel.
 
Targeted stakeholder engagement. You don’t have to do this alone – MoD partners can assist in identifying or connecting you with stakeholders, to make sure you talk with and consider the right people. JSP 936 Part 2 also provides a collection of short, easy-to-use, one-page method cards to help you work out who best to approach, and how, for each kind of challenge.
 
Consult the experts. Again, just as with stakeholder engagement, this is not something you need to do in isolation. MoD partners may be able to connect you with appropriate SMEs, or help you identify them if they are not already known. Once you have taken advice, for certain kinds of system development it may be appropriate to establish an independent ethics panel that can advise or provide oversight in a structured way.
 
See card: How can the MOD AI Principles help me assess and manage ethical risk? 

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI research project. It is intended to help generate discussion between project teams involved in the development of AI tools and techniques within MOD. It is hoped that this will increase awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed from the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context.
The use of this information does not negate the need for an ethical risk assessment, or for the other processes set out in JSP 936 Part 1 (Dependable AI), the MOD’s policy on responsible AI use and development. This training tool has been published to encourage discussion and awareness of AI ethics across MOD science and technology, and among development teams in academia and industry, and demonstrates our commitment to the practical implementation of our AI ethics principles.