Principles into Practice
How much understanding is sufficient? Who needs to know what?
Filed under: Understanding

Different stakeholders require different levels of knowledge and understanding, depending on their roles. How do we decide the appropriate depth of understanding for each role (e.g., developer, operator, commander, the public, or policymaker)?

“Appropriate understanding” therefore means defining the minimum knowledge each stakeholder group requires for safe and ethical engagement. Levels of understanding must be tailored to the operational environment and should be sufficient for each stakeholder to trust that their decisions are based on adequate information. Appropriate understanding should equip each group with the depth necessary to fulfil its responsibilities: 
  • Operators require simple, actionable insights for real-time decision-making: practical, procedural knowledge to deploy, monitor, and manage systems effectively, and a working understanding of system behaviours, fail states, and operational limits. 
  • Decision-makers, who may sit above the operators, require a broader understanding of system capabilities, ethical implications, and risk management, including awareness of strategic risks, governance frameworks, and ethical considerations. 
  • Supervisors need a broader view of system performance and risk. 
  • Developers and technologists require in-depth technical expertise, including system design, data provenance, fail state identification, and potential vulnerabilities. This calls for full technical documentation, including model architecture and data lineage. 

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or the other processes set out in JSP 936 Part 1 (Dependable AI), the MOD’s policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology and development teams within academia and industry, and demonstrates our commitment to the practical implementation of our AI ethics principles.