Principles into Practice
What do we mean by “across the entire lifecycle”?
Filed under:
Human Centricity
Responsibility
Understanding
Bias and Harm Mitigation
Reliability

AI systems must be evaluated throughout their lifecycle, from conception to decommissioning, to address evolving risks and ensure ongoing compliance. At the very least, this lifecycle will include: 
 
  • Planning: Establish objectives to align algorithms, data management, and tools with operational requirements.
  • Development: Incorporate good practice standards (coding, data, testing) and control transitions between phases.
  • Deployment: Manage controlled integration into wider systems, allowing for adaptability in real-time use (e.g., online learning capabilities).
  • Monitoring: Regularly assess system performance, report incidents and malfunctions, and act on that information to ensure reliability (a sketch of such a check follows this list).
  • Decommissioning: Manage obsolescence and environmental impacts to prevent residual risks.
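
To make the Monitoring stage more concrete, below is a minimal sketch in Python of a periodic performance check. It assumes a classification model exposing a predict method; the accuracy metric, the 0.95 threshold, and the report_incident hook are illustrative placeholders, not anything prescribed by JSP 936 or MOD policy.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class MonitoringReport:
        timestamp: str
        accuracy: float
        threshold: float
        within_tolerance: bool

    def report_incident(report: MonitoringReport) -> None:
        # Stand-in for the project's agreed incident-reporting route
        # (safety log, ticketing system, etc.).
        print(f"INCIDENT {report.timestamp}: accuracy {report.accuracy:.3f} "
              f"below threshold {report.threshold:.3f}")

    def evaluate(model, inputs, labels) -> float:
        # Score the deployed model on a fixed reference set; accuracy
        # here, but any agreed performance metric would do.
        predictions = [model.predict(x) for x in inputs]
        correct = sum(int(p == y) for p, y in zip(predictions, labels))
        return correct / len(labels)

    def monitoring_check(model, inputs, labels,
                         threshold: float = 0.95) -> MonitoringReport:
        accuracy = evaluate(model, inputs, labels)
        report = MonitoringReport(
            timestamp=datetime.now(timezone.utc).isoformat(),
            accuracy=accuracy,
            threshold=threshold,
            within_tolerance=accuracy >= threshold,
        )
        if not report.within_tolerance:
            # "Acting on that information": escalate through the
            # project's incident-reporting process.
            report_incident(report)
        return report

In practice, the metric, the check cadence, and the reporting route would be agreed during the Planning stage and recorded alongside the project's ethical risk assessment.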

Joint Service Publication (JSP) 936, "Dependable Artificial Intelligence (AI) in Defence", covers AI lifecycles in Section 6, noting that different digital systems may require or involve different lifecycles. That section provides detailed notes on how to think about the individual stages, expanding on those listed above.
 

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for the tool to help generate discussion among project teams involved in the development of AI tools and techniques within MOD. It is hoped that this will result in increased awareness of the MOD's AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed from the earliest stages of a project's lifecycle and throughout. This tool has not been designed to be used outside of this context.
The use of this information does not negate the need for an ethical risk assessment or the other processes set out in JSP 936 Part 1, Dependable AI, the MOD's policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology and among development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.