Meta to Use Facebook and Instagram Personal Data for AI Training – GDPR rights and considerations
22 April 2025
Background
From the end of May 2025, Meta (the parent company of Facebook and Instagram) plans to use data shared by adult users in Europe—including posts, photos, and comments, past and future—to train its artificial intelligence (AI) systems. As this will invariably involve the personal data of Facebook and Instagram users, this development raises serious data protection and privacy concerns under the EU General Data Protection Regulation (GDPR).
Regulatory Context
Meta aims to incorporate all publicly visible user data into the development of its AI tools, such as the Meta AI chatbot and its planned large language models (LLMs). While this move was previously halted following regulatory intervention, it now appears to be going ahead, albeit with an ‘opt-out’ mechanism for users, apparently intended to bring the processing into compliance with the GDPR.
This change could represent a significant repurposing of personal data, triggering concerns about compliance with several core GDPR principles.
Primary data protection concerns identified
Lawful Basis & Transparency
Under article 6 of the GDPR, every instance of personal data processing must have a valid legal basis. Meta appears to be relying on ‘legitimate interests’ to process user personal data. It should be borne in mind that, per recital 47 of the GDPR, such interests must not override users’ fundamental rights and freedoms, especially where personal data is reused in ways users would not reasonably expect, such as AI training.
Moreover, the principle of transparency under articles 12 to 14 and article 5(1)(a) of the GDPR requires that individuals be clearly and fully informed about how their personal data is or will be used. The data protection and privacy concern lies in users not realizing that even old posts or images may be swept into AI development, without their active consent.
Purpose Limitation and Data Minimization
The GDPR’s purpose limitation principle (article 5(1)(b) of the GDPR) restricts data processing to specific, clearly stated purposes. Repurposing data originally shared for social interaction to train AI models represents a substantial shift that may not ordinarily be regarded as compatible with the original purposes of collection.
Additionally, article 5(1)(c) of the GDPR requires that only data necessary for the stated purpose be collected and processed. The broad, unrestricted use of all public content, regardless of relevance or sensitivity, calls compliance with this principle into question.
Right to Object
Since Meta’s legal basis hinges on ‘legitimate interests’, users have the right to object to this type of processing pursuant to article 21 of the GDPR. This is why Meta has now introduced an opt-out process. It is, however, imperative to stress that this objection must be exercised before the end of May 2025 to prevent one’s past personal data on the Facebook and Instagram platforms from being used for AI training.
Irreversibility and the Right to Erasure
Once data is used to train AI models, it is, on the current state of the technology, irrevocably embedded in those models. This may conflict with the right to erasure, or ‘the right to be forgotten’, under article 17 of the GDPR, as data used for training cannot, at present, ordinarily be “unlearned” or extracted from an AI system, a technical limitation that fundamentally undermines users’ ability to exercise this data protection right.
Exercise of rights by Data Subjects
If you are a user of Facebook or Instagram and you do not want your personal data used to train AI models, you must explicitly object using Meta’s provided links:
Facebook: https://www.facebook.com/help/contact/712876720715583
Instagram: https://help.instagram.com/contact/767264225370182
Objections submitted after the end of May 2025 will only prevent future data use. In such a case, any personal data already used for training up to that point could not be withdrawn from the AI model, owing to the technical limitations of AI model development referred to above.
