Principles into Practice
Who or what should be considered stakeholders for AI-enabled systems?
Filed under: Human Centricity, Responsibility, Understanding, Bias and Harm Mitigation, Reliability

AI systems may have multiple stakeholders, depending on what the system is required to do and where it will operate. JSP 936 Part 2 Appendix B provides a Stakeholder Identification Tool that can help you work out which stakeholders to consider for your system, under the following headings: 

  1.  Creators 
  2.  Operators  
  3.  Executors  
  4.  Decision subjects 
  5.  Data subjects 
  6.  Examiners 
  
 
Creators 
Creators are the agents who create the system. This category is broader than just coders: it covers a wide variety of aspects of the system, including documentation, training and maintenance, and also includes the owners of the system and those responsible for procurement. 
  •  Example creators: Data scientists, data engineers, procurement managers, intellectual property (IP) owners, data architects, training development coordinators. 
  
 
Operators 
Operators are the agents who interact directly with the system by providing inputs and receiving outputs. They may also be able to interact with creators. These are the core users of the system. 
  •  Example operators: Intelligence analysts, UXV operators, logisticians. 
  
 
Executors 
Executors are agents who make decisions informed by the AI system. They are therefore not always distinct from operators: the same person may both operate the system and act on its outputs. 
  •  Example executors: Intelligence analysts, operational commanders, tactical operators. 
 
 
Decision subjects 
Decision subjects are agents who are affected by decisions made by executors. The ability to engage with these stakeholders will vary with the specific system. For instance, decision subjects for an AI-enhanced human resources (HR) system used to allocate career postings would be (relatively) straightforward to engage, given that they are MOD employees. However, it may be impossible to directly engage with decision subjects of an AI-enabled intelligence, surveillance and reconnaissance (ISR) system, where they are part of a military adversary. 
  •  Example decision subjects: Adversary combatants, civilians living in the area of operations, MOD personnel. 
 
 
Data subjects 
Data subjects are agents whose data has been used to train the AI system (in the case of machine learning). Where personal data is collected, stored and processed, more stringent ethical and legal requirements apply to protect individuals; in particular, compliance with the UK General Data Protection Regulation (UK GDPR). 
  • Example data subjects: Adversary combatants, individuals posting on social media, individuals whose images are included within open source databases. 
  
 
Examiners 
Examiners are agents who audit, test or investigate an AI system. They may also hold other roles in relation to the system: it is not unusual, for instance, for a creator to also undertake (or support) the testing, evaluation, validation and verification of a system. 
  • Example examiners: Data scientists, lawyers, policy advisors. 

JSP 936 Part 2 Appendix B also offers a number of useful suggestions for how to engage with these different stakeholders, depending on the outcomes you hope to achieve and the answers you hope to generate.

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. It is intended to help generate discussion between project teams involved in the development of AI tools and techniques within MOD. The hope is that this will increase awareness of the MOD's AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed from the earliest stages of a project's lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment or the other processes set out in JSP 936 Part 1 (Dependable AI), the MOD's policy on responsible AI use and development. This training tool has been published to encourage wider discussion and awareness of AI ethics across MOD science and technology, and among development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.