Professional networking platform LinkedIn said it would begin using its users’ data to train its artificial intelligence (AI) models starting November 20, 2024.
The company disclosed this in an update sent to its users over the weekend.
It is, however, excluding some countries with tight data protection laws from its AI training program.
“At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom,” the company stated.
For users outside the excluded regions, LinkedIn said it has provided an opt-out setting for anyone who does not want their information used for this purpose.
“As our product evolves to leverage generative AI, we have given you more information in our Privacy Policy by adding language on how we use the information you share with us to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation (“generative AI”) and through security and safety measures,” the company stated.
To opt out, users should go to “Settings & Privacy,” select the “Data Privacy” tab in the left-hand column, click “Data for Generative AI Improvement,” and toggle the button off.
The platform noted, however, that “opting out means that LinkedIn and its affiliates won’t use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place.”
That means there’s no going back and undoing the training of earlier LinkedIn AI systems with user posts.
The updated User Agreement released by the company includes more details on how LinkedIn will handle content recommendations and moderation on its platform.
Additionally, the agreement now includes provisions related to the use of generative AI tools, which are being integrated to help creators and professionals expand their reach and build their personal brands more effectively.
A notable change involves updates to the licensing terms that allow creators to distribute their content more broadly, leveraging LinkedIn’s expanding AI capabilities.
According to LinkedIn, this is in line with its continued focus on supporting its users in growing their brands and enhancing visibility within and beyond the platform.
LinkedIn is not the only social media platform harvesting users’ data to train AI models.
Elon Musk’s X, in its latest policy update, also requires users to opt out if they do not want their posts used to train its AI chatbot, Grok, which has come under fire for spreading false information about the 2024 election and generating violent, graphic fake images of prominent politicians.
The platform says it and Musk’s xAI startup use people’s posts, as well as their conversations with Grok, to do things like improve its “ability to provide accurate, relevant, and engaging responses” and develop its “sense of humour and wit.”
Facebook parent company Meta also recently acknowledged that it has already used public (but not private) posts from Facebook and Instagram to train its AI chatbot.
In its privacy policy, Meta says it may train its AI systems with users’ public Facebook and Instagram content, including posts, comments, audio, and profile pictures.