Privacy concerns are at the forefront of many users’ minds as more tech companies train AI models on user data without explicit consent. LinkedIn, owned by Microsoft, has recently come under scrutiny for opting users into AI model training without informing them. The move follows similar practices at Meta and at X, whose Grok chatbot is trained on user posts. LinkedIn has clarified that user data will not be used to train base OpenAI models but will be shared with Microsoft for its own AI software.
According to LinkedIn spokesperson Greg Snapper, the AI models used by the platform may be trained by LinkedIn or by another provider. Some models come from Microsoft’s Azure OpenAI service, but data is not sent back to OpenAI for training purposes. LinkedIn says it minimizes personal data in the datasets used to train its AI models and does not train content-generating models on data from users in the EU, EEA, or Switzerland. The platform also lets users opt out of having their data used for AI training.
Despite these assurances, privacy activists remain concerned about the lack of explicit consent for training AI models on user data. Mariano delli Santi of the Open Rights Group criticized LinkedIn’s opt-out model as insufficient to protect users’ rights, arguing that opt-in consent is both a legal and a common-sense requirement, and called for regulatory action against companies that fail to comply with privacy laws.
Users who are uncomfortable with LinkedIn using their data for AI training can disable the feature in the platform’s data privacy settings: toggling off the “Use my data for training content creation AI models” option prevents their information from being used to train AI models. Controls like this are crucial for giving users a say in how their information is used by tech companies, particularly for AI development.
LinkedIn is not alone in drawing criticism for these practices. Meta, formerly known as Facebook, and X’s Grok have both faced backlash for training AI models on user data without clear consent. As AI adoption grows across industries, privacy regulation and transparency about data usage become ever more important for protecting users’ rights.
In response to these concerns, companies like LinkedIn must prioritize transparency and consent when training AI models on user data. By ensuring that users are fully informed and can opt in or out of data training, tech companies can build trust with their user base and demonstrate a commitment to respecting privacy rights. As the debate over data privacy and AI development continues, regulators face growing calls to enforce compliance and hold companies accountable for violating users’ rights.