Principles into Practice
Measuring Security: How does one decide if an AI system is “suitably” secure?
Filed under:
Reliability

Security in the context of AI development for defence applications refers to the protection of AI systems, data, and operational frameworks from potential threats and vulnerabilities that could compromise their functionality, integrity, or trustworthiness. This involves safeguarding against cyberattacks, espionage, misuse, and other risks that may arise during the development, deployment, or operation of AI systems. Given the strategic importance of defence applications, security encompasses both technical measures—such as encryption, secure coding practices, and robust system architecture—and procedural safeguards, including rigorous testing, access controls, and adherence to international standards (Brundage et al., 2018). For definitions of Reliable, Robust, and Secure, see card: What does Reliability mean in the context of AI development for UK Defence? 
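To make one of these technical measures concrete, the sketch below is a minimal, hypothetical illustration of encrypting a sensitive record at rest using the third-party Python cryptography package (not an MOD-endorsed tool or configuration); key management, access control, and the surrounding secure architecture are out of scope here and would dominate any real deployment.

```python
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# Minimal, illustrative sketch only: symmetric encryption of a sensitive record
# at rest. In a real system the key would live in a hardware security module or
# managed key store, never alongside the data or in source code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"hypothetical sensitive training record"
token = fernet.encrypt(record)    # ciphertext that is safe to store on disk
restored = fernet.decrypt(token)  # recovering the record requires the key

assert restored == record
print(f"ciphertext length: {len(token)} bytes")
```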
AI systems in defence are particularly susceptible to a range of risks. These include adversarial attacks, where malicious actors manipulate input data to deceive AI algorithms, and data poisoning, where training datasets are deliberately corrupted to skew outcomes. Additionally, AI systems may face exploitation through reverse engineering or unauthorised access to sensitive algorithms and data. Ensuring security also means protecting against unintended vulnerabilities that adversaries could exploit, such as biases or errors in the system. For example, autonomous systems in military contexts must be designed to avoid unintended escalation due to system errors or adversarial manipulation (Royal United Services Institute, 2021).
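As a minimal illustration of the first of these risks, the sketch below (a toy NumPy linear classifier, not any deployed system; all values are hypothetical) applies a fast-gradient-sign style perturbation: a small, bounded change to each input feature, chosen using the model's own gradient, that sharply lowers the classifier's confidence in the correct class and will often flip its decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": logistic regression with fixed random weights.
w = rng.normal(size=10)
b = 0.0

def positive_score(x):
    """Probability the toy model assigns to the positive class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input that the model classifies as positive.
x = rng.normal(size=10)
if positive_score(x) < 0.5:
    x = -x  # ensure the clean input starts on the positive side of the boundary

# For a linear model the gradient of the score with respect to the input is
# proportional to w, so an attacker can push every feature in the direction
# that lowers the score, bounded by a small epsilon per feature.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print(f"clean score:      {positive_score(x):.3f}")
print(f"perturbed score:  {positive_score(x_adv):.3f}")
print(f"max perturbation: {np.max(np.abs(x_adv - x)):.3f}")
```

Defences such as adversarial training and input validation aim to blunt exactly this kind of manipulation, which is one reason security testing for AI must go beyond conventional penetration testing.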
 
How much security is enough? 
Determining how much security is enough for AI in defence applications depends on the system’s intended use, the potential consequences of failure, and the adversarial environment in which it operates. High-stakes applications, such as autonomous weapon systems or AI-driven surveillance platforms, require stringent security measures to mitigate risks that could lead to catastrophic consequences, including loss of life, operational failure, or escalation of conflicts. Security must be proportionate to the level of risk, ensuring robust protections without impeding the system’s functionality or efficiency (Goodman, 2020). 
An adequate security framework should address not only immediate threats but also long-term vulnerabilities, as adversaries may exploit weaknesses over time. This requires a layered security approach, combining technical defences with organisational measures such as regular audits, ongoing threat assessments, and dynamic updates to AI systems. The concept of “resilience” is also crucial—AI systems must be capable of maintaining functionality or quickly recovering in the face of security breaches. International cooperation and the establishment of norms for the secure and ethical use of AI in defence can further enhance security by creating shared standards and reducing the risk of arms races or misuse (Scharre, 2018). 
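As one concrete example of a technical layer within such an approach, the following sketch (the file name and digest are placeholders) verifies a model artifact's SHA-256 digest against a known-good value before loading it, so that tampering with the deployed file is detected rather than silently executed; in practice the reference digest would come from a signed, access-controlled release process rather than a constant in source code.

```python
import hashlib
import hmac
from pathlib import Path

# Placeholder only: in practice this would be a known-good digest taken from a
# signed, access-controlled release record, not a constant in source code.
EXPECTED_SHA256 = "replace-with-known-good-digest"

def artifact_is_trusted(path: Path, expected_digest: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the expected value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    # Constant-time comparison avoids leaking how much of the digest matched.
    return hmac.compare_digest(digest, expected_digest)

def load_model(path: Path) -> bytes:
    """Refuse to load a model whose on-disk artifact fails the integrity check."""
    if not artifact_is_trusted(path, EXPECTED_SHA256):
        raise RuntimeError(f"integrity check failed for {path}; refusing to load")
    return path.read_bytes()  # stand-in for the real deserialisation step

if __name__ == "__main__":
    try:
        load_model(Path("model.bin"))  # hypothetical artifact name
    except (FileNotFoundError, RuntimeError) as exc:
        print(f"model not loaded: {exc}")
```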
Ultimately, while it may not be possible to achieve absolute security, the goal is to implement measures that minimise risks to an acceptable level while enabling the system to perform its intended mission effectively. Striking this balance requires a comprehensive, adaptive approach that integrates security considerations into every stage of AI development and deployment. 
 
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute.
Goodman, M. (2020). Securing AI in Defense Applications: A Practical Guide. Journal of Defense & AI Security Studies, 3(1), 12–28.
Royal United Services Institute (RUSI). (2021). The Impact of Artificial Intelligence on National Security: Opportunities and Risks.
Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W.W. Norton & Company.

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or for the other processes set out in Dependable AI (JSP 936 Part 1), the MOD's policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology and development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.