In a bold move, the U.K.’s data privacy watchdog, the Information Commissioner’s Office (ICO), has stepped in to halt LinkedIn’s practice of using British users’ content to train artificial intelligence models. LinkedIn recently began a program that uses its members’ data and content to develop AI models, automatically enrolling users without formal notification; only users in the U.K. and Europe are now excluded from the program.
On Friday, the ICO publicly acknowledged its intervention, with Stephen Almond, the regulator’s executive director of regulatory risk, confirming that it had succeeded in stopping LinkedIn from deploying the feature in Britain.
“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users,” Almond stated. According to the ICO, LinkedIn agreed to suspend the training program in the U.K. and will engage further with the regulator.
LinkedIn’s Controversial Move
The social networking giant had quietly begun using user-generated content to train AI models before formally updating its terms of service and privacy policy, a change first reported by 404 Media. Once the story broke, LinkedIn quickly updated its privacy policy with details about how the platform would use personal data to develop and train AI models. The update explains that the data is used to “gain insights with the help of AI, automated systems, and inferences, so that our services can be more relevant and useful to you and others.”
While LinkedIn’s new AI feature is being used in regions outside the U.K. and Europe, users in the U.K. and Europe remain shielded by stringent data privacy laws that prevent such automatic opt-ins.
Opting Out of AI Model Training
For users concerned about privacy, LinkedIn has added an option to opt out of the AI training program. In account settings, users can find the “Data for Generative AI Improvement” setting and toggle off automatic participation in the program.
The company also posted a frequently asked questions (FAQ) page explaining how personal data is being used for AI model training. This page includes a warning that personal data “may be used (or processed) for certain generative AI features on LinkedIn.” It further clarifies that when users engage with LinkedIn’s AI-powered tools, personal data such as inputs, usage information, and feedback may be processed.
Concerns About Privacy
Following the initial news, LinkedIn’s general counsel posted a notice clarifying the updated user agreement and privacy policy. The company explained that the updates were intended to provide transparency on how user information is used to develop LinkedIn’s AI tools. Despite this, the sudden rollout of the changes, without formal notice to users, has raised significant privacy concerns.
As AI technology continues to advance, more platforms are integrating it into their services. However, this also introduces growing concerns over the protection of personal data, especially when users are automatically opted in without explicit consent. The ICO’s intervention is a reminder that, in regions like the U.K. and Europe, strong data privacy laws offer a critical layer of protection against such practices.
LinkedIn users outside of these regions will need to stay vigilant, as the platform continues to refine and expand its AI tools, which may rely heavily on the content and interactions provided by its vast user base.


At least that’s one thing the UK powers are doing right. I’ve exercised my right to opt out of having my data used in this way – if only I could be sure that someone, somewhere wasn’t doing so. 😐
Thank you for your comment! It’s great to see the UK taking steps in the right direction by restricting companies like LinkedIn from using user data to train AI models without explicit consent. The ability to opt out is a crucial tool for data privacy, but as you mentioned, it’s tough to shake the feeling that your data might still be used in ways you’re not aware of.
One of the challenges with data privacy is the lack of transparency in how platforms store and use information after we’ve opted out. Even when regulators step in, users can often feel unsure about whether these protections are fully enforced behind the scenes. As tech companies and regulators wrestle over the boundaries of AI, it’s vital that users stay informed and continue exercising their rights.
Hopefully, this marks a positive shift toward stronger protections globally and encourages more platforms to follow suit in prioritizing user privacy over profit-driven AI training practices. 😎
That positive shift has to come, or it’s the rise of the machines as per the Arnie film, and I’ll be looking for a bunker to wait it out. Technology is such a huge part of our lives now that legislation to protect us from the dark side has to be enabled. Thanks for your own vigilance over this and other such issues. 🙂
You’re welcome! I completely agree: while it’s easy to joke about the Terminator scenario, AI has the potential to become quite concerning if it continues to evolve unchecked. We’re not there yet, but it’s very possible that in the years to come, AI could evolve to the point where it can replicate itself and, if coupled with robotics, things could definitely get ugly.
That said, we’re still quite a ways off from that level of advancement, but the fact that we can envision it means that legislation and ethical guidelines need to be developed now to safeguard against that future. By the time AI reaches those levels of autonomy, it will be much harder to control, and that’s why we need proactive steps now. 😎
Thank you, and I hear you. My joking is always ironic, because it’s no laughing matter. As an author I deplore the incursion of AI into the realm of the creative, which can potentially stifle and overcome the individual voices pointing up the social issues which need attention. How easy then for those controlling societies to use the creative arts as a medium to pass those messages they wish us to swallow? It’s been done before and is still going on, I’m sure. It has to be held in check, otherwise hello ‘1984’. 🤨
Thank you for your thoughtful response, Laura! You’re absolutely right—this is no laughing matter, and your concerns as an author are completely valid. The incursion of AI into creative spaces can indeed threaten individual voices, especially those that seek to highlight critical social issues. When AI becomes a dominant force in creativity, it could lead to a homogenization of artistic expression, allowing the controlling powers to push messages that align with their interests while silencing or diluting those that challenge the status quo.
Your point about how the arts have been used as a medium to control narratives is particularly important. We’ve seen throughout history how regimes and institutions have manipulated art and literature to propagate certain ideologies—whether subtly or overtly. If AI continues to infiltrate the creative realm unchecked, it becomes easier for those in control to shape public perception and stifle dissenting voices, much like the dystopian scenario Orwell envisioned in 1984.
This is why we need to remain vigilant and continue advocating for the protection of authentic creative voices. AI may bring technological advances, but creativity needs to remain a human domain where individual expression can flourish without undue influence.
Thank you again for raising this important issue—I share your concerns, and it’s discussions like these that keep us grounded in the reality of what’s at stake.
And thank you for highlighting them in your posts and assisting me in thinking them through, helping to focus my thoughts. These discussions are so important. Have a good day. 🙂
You’re welcome! I hope you have a good day as well. 😎
“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users.” Read: LI was threatened with massive fines and regulatory micromanagement and saw the light 😂
Thank you very much for your comment, Darryl! It’s always interesting to see how companies tend to “reflect” when there are significant consequences looming, especially in the regulatory space. Sometimes it takes the threat of fines and stricter oversight for them to “see the light,” as you put it! It’s a reminder of the power that regulations and accountability can have when it comes to protecting users. I hope you have a great day! 😎