Principles into Practice 14
What does GDPR mean for my AI-enabled system?
Filed under:
Human Centricity
Responsibility

The General Data Protection Regulation (GDPR) was established to safeguard individuals' privacy and to regulate the processing of personal data. See Human Centricity.
 
In technological development, GDPR is crucial because it sets stringent guidelines for collecting, processing, and storing personal information. Compliance with GDPR is not just a legal obligation but a strategic imperative, aligning innovation with the principles of privacy and human rights.
 
Developing AI systems has significant implications under GDPR, particularly concerning the processing of personal data. One critical area is data minimisation and purpose limitation: AI systems often require large datasets for training, which can lead to over-collection or the use of personal data beyond its original purpose. To comply, developers must ensure data is collected only for specific, explicit, and legitimate purposes, minimising the amount of personal data used. Open-ended or broad consent for whatever the future may bring is no longer permitted under GDPR, so consent may need to be treated as a dynamic rather than a static concept. (See card: What does "informed consent" mean?)
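One practical way to enforce purpose limitation in code is to gate every data extraction through a per-purpose field allow-list, so that nothing outside the documented purpose ever reaches a training pipeline. The sketch below is illustrative only; the purpose names and field names are hypothetical, not drawn from any MOD system.

```python
# Illustrative sketch: data minimisation via per-purpose field allow-lists.
# All purpose names and field names here are hypothetical examples.

ALLOWED_FIELDS = {
    "model_training": {"age_band", "region", "usage_stats"},
    "support_ticket": {"email", "ticket_text"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Return only the fields permitted for the stated, documented purpose."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        raise ValueError(f"No documented purpose: {purpose}")
    return {k: v for k, v in record.items() if k in allowed}

record = {"email": "a@example.com", "age_band": "30-39",
          "region": "UK", "usage_stats": 42}
# Fields outside the allow-list (e.g. email) are dropped before use.
training_view = minimise(record, "model_training")
```

A design like this also makes the purpose explicit at every call site, which helps when documenting processing activities later.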

Another key consideration is establishing a lawful basis for processing. AI systems must operate under a valid legal basis, such as consent, contractual necessity, or legitimate interests. Developers are responsible for documenting and justifying this basis for all data processing activities. Additionally, GDPR requires transparency and explainability. Organisations must provide individuals with clear and accessible information about how their data is processed. Given the complexity of AI systems, developers must ensure models are interpretable and provide meaningful explanations about the logic, significance, and consequences of automated decisions. 
 
Automated decision-making and profiling are also governed by GDPR, specifically Article 22, which restricts decisions based solely on automated processing that significantly affect individuals. Organisations must obtain explicit consent or demonstrate necessity for contract performance, ensuring such systems offer meaningful human intervention. To align with GDPR's principles of data protection by design and by default, privacy-preserving techniques such as anonymisation, pseudonymisation, and encryption should be embedded into AI systems from the outset.
 
AI systems must also uphold data subject rights. This includes accommodating requests for access, rectification, erasure, restriction of processing, and data portability. Developers must design systems to facilitate these rights and ensure efficient mechanisms for processing such requests. Maintaining data accuracy is another essential requirement, as inaccuracies in personal data can lead to biased or harmful outcomes. Organisations must take steps to ensure personal data is accurate and up to date. 
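The rights above imply concrete interfaces in any system that holds personal data. The sketch below shows access and erasure against a single in-memory store; it is deliberately simplified, since real systems must also propagate erasure to backups, caches, and any derived artefacts such as training sets, and the class and method names here are invented for illustration.

```python
# Minimal sketch of servicing subject access (Art. 15) and erasure
# (Art. 17) requests against one data store. Illustrative only: real
# erasure must also reach backups, caches, and derived training data.

class SubjectDataStore:
    def __init__(self) -> None:
        self._records: dict = {}

    def add(self, subject_id: str, data: dict) -> None:
        self._records[subject_id] = data

    def access(self, subject_id: str) -> dict:
        """Subject access request: return a copy of all data held."""
        return dict(self._records.get(subject_id, {}))

    def erase(self, subject_id: str) -> bool:
        """Right to erasure: delete the record and confirm whether data existed."""
        return self._records.pop(subject_id, None) is not None

store = SubjectDataStore()
store.add("subject-1", {"email": "a@example.com"})
```

Designing these mechanisms in from the start is far cheaper than retrofitting them, and the same interfaces support rectification and restriction of processing with little extra work.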
 
The GDPR further emphasises accountability and record-keeping, requiring organisations to demonstrate compliance. This involves maintaining records of processing activities, conducting Data Protection Impact Assessments (DPIAs), and ensuring audit trails for AI development. Additionally, data breaches pose a risk, especially given the vulnerabilities AI systems may introduce, such as re-identification risks or exposure to malicious attacks. Developers must implement robust security measures and establish breach response protocols. 
 
Lastly, cross-border data transfers can present challenges if AI development involves transferring data outside the European Economic Area (EEA). GDPR requires appropriate safeguards, such as Standard Contractual Clauses (SCCs) or adequacy decisions, to protect personal data during such transfers. Addressing these implications requires close collaboration between legal, technical, and ethical teams to ensure compliance while enabling innovation in AI development. 

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or for the other processes set out in Dependable AI (JSP 936, Part 1), the MOD's policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology and among development teams within academia and industry, and it demonstrates our commitment to the practical implementation of our AI ethics principles.