A new lawsuit accuses Microsoft-owned app LinkedIn of collecting and sharing private user data, including direct messages, for AI training without proper consent.
Job seekers and recruiters alike face data exposure risks as their information is allegedly shared with third-party firms.
LinkedIn Faces Lawsuit After Users Accuse It of Unauthorized Data Sharing
Data is the most valuable asset for training artificial intelligence (AI) models. Now a recent lawsuit over improper data use has dragged one of the biggest players in the AI space, Microsoft, into the courtroom, Android Headlines reports.
A complaint recently filed against the Microsoft-owned professional networking site LinkedIn claims that the company has been collecting and sharing private user information without full consent.
What's the Problem with LinkedIn Data Sharing?
At the core of the lawsuit is the allegation that LinkedIn quietly shared user data with third parties to help train AI models.
Much of the information in question is sensitive in nature, including direct messages (DMs), data that most users would expect to remain private.
Typically, companies offer users the option to consent to sharing their data to improve services, and users can opt out if they choose. The lawsuit against LinkedIn, however, centers on the claim that the platform introduced a new default setting that collected data without informing users.
The lawsuit argues that many individuals may be unknowingly sharing their private conversations and other sensitive data as part of the AI training process.
Accusations of Uninformed Data Collection
Like many other platforms, the job-hunting site lets users choose to share usage data with the company. These opt-in setups usually come with promises to keep the information private and confidential and not to track individuals personally.
In LinkedIn's case, however, the complaint states that the firm did not notify users when it quietly switched on a new setting that collected private messages and similar data for its AI purposes.
The lawsuit alleges that while users are theoretically free to opt out of sharing their data, in practice most do not know what is happening to it and have unwittingly agreed to the sharing. It also claims that LinkedIn amended its FAQs to acknowledge the new setting but did not specify that any data already shared would still be used for AI training purposes.
Plaintiffs Seeking Damages From LinkedIn
The plaintiffs, who are suing on behalf of affected LinkedIn users, are seeking significant damages—$1,000 for each individual whose data was allegedly shared without proper consent. This lawsuit is indicative of a growing concern regarding privacy in the digital age, particularly as companies continue to rely on massive datasets to develop and improve AI systems.
LinkedIn Denying the Allegations
According to a BBC report, the company is firmly denying the allegations. A LinkedIn spokesperson described the lawsuit as a "false claim with no merit."
The company added that it remains committed to user privacy and transparency. While it acknowledges the importance of data in improving services, it maintains that all of its data collection and usage practices comply with legal and ethical standards.
What Does it All Mean For AI and User Privacy?
As artificial intelligence becomes a bigger part of daily life, the ethical implications of data usage grow more critical, sharpening the tension between the right to privacy and innovation.
While companies argue that large datasets are necessary to train powerful AI models, users worry about how their private information is being used without their knowledge or consent.