Military necessity is not a justification that can be used to ignore other concerns. An act can be justified only if it is of a kind that is physically necessary, indispensable, or unavoidable in pursuit of the goal of the war. Military necessity does not mean that one may do whatever it takes to win. On the contrary, both the Lieber and the British formulations make it perfectly clear that necessity is itself constrained by the Law of Armed Conflict. If an act is prohibited by the Law of Armed Conflict, it may not be done. If one could attain one’s end in the fighting only by doing what the Law of Armed Conflict prohibits, one would have to accept failure. Fortunately, by conscious design over the centuries, the Law of Armed Conflict does not, in fact, prohibit all actions sufficient to attain a ‘legitimate purpose’. This means that ‘otherwise, we will not win’ is rarely an applicable – and never a good – reason for violating the law of war.
In sum, military necessity is not a positive authorization saying that ‘one may do whatever it takes (legal or illegal)’. Military necessity is fundamentally a prohibition – a prohibition on wasteful and pointless destruction – and may thus best be captured in the British manual’s negative formulation: ‘The use of force which is not necessary is unlawful, since it involves wanton killing or destruction.’
Disclaimer
This tool has been created in collaboration with Dstl as part of an AI research project. The intent is for this tool to help generate discussion between project teams involved in the development of AI tools and techniques within the MOD. It is hoped that this will result in increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed from the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside this context.
The use of this information does not negate the need for an ethical risk assessment or the other processes set out in Dependable AI (JSP 936) Part 1, the MOD’s policy on responsible AI use and development. This training tool has been published to encourage wider discussion and awareness of AI ethics across MOD science and technology teams and across development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.