LinkedIn’s AI Data Training Plans Raise Global Privacy Questions: Lessons for Nigerian Users

The recent warning from the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) has once again brought the growing tension between innovation and individual privacy to the forefront. The regulator is urging all LinkedIn users to review their privacy settings and opt out of having their personal data used to train artificial intelligence (AI) models before November 3. This warning is not only a wake-up call for LinkedIn’s European users but also raises important questions for professionals in Nigeria about how our own personal data may be handled on global digital platforms.
What LinkedIn Plans to Do
LinkedIn, a company owned by Microsoft, announced plans to use various categories of user data, including names, profile photos, work experience, and even public posts, comments, and polls to train its AI systems. The company says this will help improve user experience and enhance AI-driven features. However, unless users explicitly opt out before November 3, their data will automatically be included in this training process.
While this may sound like a technical adjustment, the implications are significant. Once personal information is absorbed into an AI model, it cannot be easily traced or removed. As the Dutch DPA’s Vice-Chair, Monique Verdier, rightly pointed out, many users shared their data years ago without any expectation that it would one day be used for AI development.
The Risks of Using Personal Data for AI Training
The use of personal data in AI systems presents a fundamental challenge: loss of control. When our data is fed into a model, it can be reproduced, used to draw inferences, or applied in ways that are hard to predict or regulate. Moreover, if the data includes sensitive information such as a person's health status, ethnicity, religion, or political beliefs, the risks increase substantially. Even if such details were shared unintentionally or indirectly, AI systems could still pick up on them through patterns and correlations.
The Dutch regulator’s concerns are well-founded. Once personal data has been integrated into AI models, removing that data becomes practically impossible. This means users could permanently lose control over how their personal information is used.
Key Data Protection Principles at Stake
From a regulatory standpoint, this case touches on several foundational principles of data protection under the EU's General Data Protection Regulation (GDPR), notably consent, purpose limitation, and transparency.
Consent: Users must provide clear and informed consent before their data is repurposed for a new objective, especially one as broad as AI training.
Purpose Limitation: Data collected for professional networking should not automatically be reused for unrelated functions without proper legal justification.
Transparency: Platforms have a duty to clearly explain how user data will be processed and to offer real, accessible choices to users.
These principles are not unique to Europe. They resonate strongly with Nigeria’s own data protection framework.
The Nigerian Context
Under the Nigeria Data Protection Act (NDPA) 2023, organizations must ensure that personal data is processed lawfully, fairly, and transparently. Consent remains a key requirement, and individuals have the right to withdraw it at any time. The Nigeria Data Protection Commission (NDPC), which oversees enforcement, also emphasizes the principle of data minimization to ensure that only data necessary for a defined purpose is collected and processed.
If a platform operating in Nigeria attempted to repurpose users’ personal data for AI training without obtaining specific consent, it could face scrutiny under the NDPA. The Commission could demand evidence of user consent, issue compliance directives, or even impose sanctions for violations.
This underscores the need for Nigerian professionals and organizations to remain proactive about data governance. The era of “click and forget” privacy settings is over. Every digital decision now has long-term implications.
Practical Steps for Nigerian LinkedIn Users
For Nigerian LinkedIn users, and indeed all professionals active on social media, here are a few steps to stay in control of your personal data:

  1. Review Your Privacy Settings: Go to your LinkedIn account settings and check how your data is being used. Look specifically for the AI data training or personalization sections.
  2. Opt Out if Unsure: If you are not comfortable with your data being used for AI purposes, exercise your right to opt out before November 3.
  3. Be Mindful of What You Share: Remember that anything posted publicly could be repurposed in ways you may not anticipate.
  4. Stay Informed: Follow updates from both LinkedIn and the NDPC to understand how global and local data protection developments may affect you.
Conclusion
The Dutch DPA’s warning to LinkedIn users is not just a European story. It is a global reminder of how quickly the line between innovation and privacy can blur. As AI becomes deeply integrated into our professional and personal lives, data protection is no longer a compliance box to tick but a digital survival skill.
For Nigerian users, this is the time to stay alert, exercise your data rights, and demand accountability from the platforms that host your professional identity. Privacy is about control, consent, and choice in a digital age that often moves faster than regulation.
