Makers of ChatGPT Facing Lawsuit Over Alleged Theft of Consumer Data, ‘Reckless’ Deployment of AI Tech [DISMISSED]
Last Updated on July 11, 2024
October 4, 2023 – ChatGPT Consumer Data Theft Class Action Dropped by Plaintiffs
The proposed class action lawsuit detailed on this page was voluntarily dismissed without prejudice by the plaintiffs on September 15, 2023.
Court records show that the plaintiffs notified United States District Judge Trina L. Thompson of the dismissal in a two-page notice, dropping the case just months after their initial complaint was filed in late June.
No information is provided as to why the plaintiffs dropped the case.
Want to stay in the loop on class actions that matter to you? Sign up for ClassAction.org’s free weekly newsletter here.
A proposed class action claims the makers of ChatGPT have unlawfully collected, stored and shared without consent the personal data of hundreds of millions of internet users, including children, in order to “train” what the case calls powerful and dangerous artificial intelligence (AI) technology.
The 157-page lawsuit, filed against AI research laboratory OpenAI and Microsoft Corporation, its largest investor, says that to develop AI products such as ChatGPT (powered by the GPT-3.5 and GPT-4 language models), digital image generator DALL-E and voice/speech generator VALL-E, the defendants have illegally “scraped” consumers’ personal data from the internet, storing and using the stolen information in reckless disregard for privacy rights and a “potentially catastrophic risk to humanity.”
In addition to a slew of alleged privacy violations, the defendants are accused of rushing the AI products to market without considering the “existential threat” that their unchecked use may pose, the suit shares. As the case tells it, the companies have “pursue[d] profits at the expense of privacy, security, and ethics” and unleashed the powerful AI technology onto the commercial scene without proper safeguards in place to ensure that the tools would not produce harmful content or facilitate illegal conduct.
“Using stolen and misappropriated personal information at scale, Defendants have created powerful and wildly profitable AI and released it into the world without regard for the risks. In doing so, Defendants have created an AI arms race in which Defendants and other Big Tech companies are onboarding society into a plane that over half of the surveyed AI experts believe has at least a 10% chance of crashing and killing everyone on board.”
Lawsuit says OpenAI stole massive amounts of consumer data without permission
According to the complaint, OpenAI has allegedly used stolen information—scraped in secret from the internet—to “train” the large language models on which its products are based. Per the suit, AI tools such as the popular chatbot ChatGPT “learn” to generate text, human-like language, art, digital images and more by consuming and analyzing large amounts of data. The more information the technology can integrate, the more sophisticated its results will be, the filing summarizes.
According to the suit, ChatGPT “has effectively scraped the entire internet” without filtering out consumers’ sensitive personal information or seeking their consent to use the data.
Relatedly, another case also filed on June 28 against OpenAI alleges that ChatGPT stole authors’ copyrighted works as part of its training without permission. Both lawsuits claim OpenAI has profited from the unauthorized use of consumers’ information.
“Without this unprecedented theft of private and copyrighted information belonging to real people, communicated to unique communities, for specific purposes, targeting specific audiences, the Products would not be the multi-billion-dollar business they are today,” the instant lawsuit charges.
Even further, the suit alleges that OpenAI has used its products to unlawfully capture, store and share in real time the personal data of hundreds of millions of users, including their names, contact details, login credentials, account and payment details, transaction records, IP addresses, location and social media information. Per the case, the defendants have also collected data as detailed as a user’s individual keystrokes, search inquiries and chat entries, and essentially any information they enter into the platform they’re using.
This information is intercepted through the use of applications that have integrated ChatGPT, such as Snapchat, Stripe, Spotify, Slack, Microsoft Teams and even healthcare patient portals like MyChart, the complaint says.
As the filing tells it, OpenAI offers no method for users to request that their data be deleted, nor does the company clearly or conspicuously disclose that all conversations are “wiretapped” and shared with numerous third parties. To the contrary, the lawsuit argues that OpenAI’s terms and conditions are, in fact, “convoluted, inconspicuous, and consist of numerous documents, impossible to decipher by reasonable consumers.”
At the end of the day, there is “zero adequate consent for wiretapping,” the suit contends.
“Together with Defendants’ scraping of our digital footprints—comments, conversations we had online yesterday, as well as 15 years ago—Defendants now have enough information to create our digital clones, including the ability to replicate our voice and likeness and predict and manipulate our next move using the technology on which the Products were built. They can also misappropriate our skill sets and encourage our own professional obsolescence. This would obliterate privacy as we know it and highlights the importance of the privacy, property, and other legal rights this lawsuit seeks to vindicate.”
The dangers are no mystery, case charges
Per the complaint, the defendants’ “massive, unparalleled” capture of personal information puts consumers at enormous risk, as the data, if compromised, can be used for financial fraud, extortion, identity theft and other illegal purposes.
What’s more, ChatGPT lacks any effective restrictions that would prevent children under 13 from accessing the platform and inputting their private information, the filing claims. The “indiscriminate” capture and disclosure of children’s data violates their privacy and “puts them at risk of abuse, exploitation, and discrimination,” the lawsuit stresses.
Moreover, the suit alleges that OpenAI’s technology supports the proliferation of “deepfakes,” audiovisual “digital clones” of real people that can be used to perpetrate crimes, spread misinformation and harm public trust.
More disturbingly, OpenAI’s DALL-E product has reportedly been used to generate realistic child pornography, with thousands of images already being discovered on the dark web, the case says.
“Armed with artificial intelligence tools like the ones developed by Defendants, malicious actors can weaponize even the most innocuous publicly available personal information, such as names and photographs, against private individuals,” the complaint relays.
Deployment of AI tech is “reckless” without proper controls, suit says
Though originally founded as a nonprofit organization with a goal to create AI technology that would aid in scientific research, OpenAI suddenly adopted a for-profit business model in 2019 that purportedly “[prioritized] short-term financial gains over long-term safety and ethical considerations,” the case contends.
The suit argues that by incorporating its AI tools into “nearly every possible product and industry,” OpenAI has built the technology into society’s infrastructure “as quickly as possible” and created an economic dependency on its products. Other major tech companies, rushing to keep up, have “recklessly raced” to deploy their own AI technology without regard for the risks, creating a dangerous “AI arms race,” the case claims.
Without “immediate legal intervention,” the widespread and “lawless” proliferation of AI technology may have consequences that threaten human interests and values—even mankind’s existence as a species, the complaint stresses.
“As is clear, OpenAI has exploded outwards in every direction within the past few months and is swiftly morphing into something intimately connected with people in nearly every aspect of their day-to-day lives,” the filing reads. “There is no check or boundary on this expansion, which seems to progress rapidly every single day.”
The world responds
According to the filing, the United States has felt the impact of the hasty development of AI technology, as the spread of “unaccountable and untrustworthy” products has flourished in the absence of regulations.
In a 2021 report, the National Security Commission on Artificial Intelligence urged the establishment of regulations that would “strike a balance between protecting individuals’ privacy rights and enabling AI advancements,” the lawsuit shares.
Further, in March of this year, a complaint filed by the Center for Artificial Intelligence and Digital Policy requested that the Federal Trade Commission investigate OpenAI and halt the deployment of its latest version of ChatGPT until further notice, the suit describes.
Outside the U.S., Italy became the first Western European country to take regulatory action against ChatGPT, temporarily banning the chatbot over privacy concerns, the case says. European Union lawmakers followed suit and now require AI companies to “disclose any copyrighted material used to develop their systems,” the complaint explains.
The filing relays that tech authorities and experts worldwide remain worried by the defendants’ alleged capture of personal data without consent and the lack of legal guidelines to regulate the technology’s growth.
“In short, the message is consistent from informed business, nonprofit, and technology thought leaders; industrialists; scientists; world leaders; regulators; and governments around the globe: The proliferation of AI—including Defendants’ products—pose [sic] an existential threat if not constrained by the reasonable guardrails of our laws and societal mores. Defendants’ business and scraping practices raise fundamentally important legal and ethical questions that must also be addressed. Enforcing the law will not amount to stifling AI innovation, but rather a safe and just AI future for all.”
Who’s covered by the lawsuit?
The suit looks to represent anyone in the United States, including minors, whose personal information was accessed, collected or used by the defendants without consent. The case also covers individuals who used the ChatGPT website or mobile app and those who used other platforms, programs or applications that integrated ChatGPT technology, including those of Microsoft.
How do I join the lawsuit?
Typically, there’s nothing you need to do to join or be included in a proposed class action lawsuit when it’s first filed. Individuals usually need to act only if the case reaches a settlement. If that happens, class members—that is, people covered by the settlement—may receive direct notice of the deal via email and/or regular mail with instructions on their legal rights and next steps, which normally involve filing a claim form online or by mail.
Be patient—class action lawsuits often take months or even years to be resolved.
In the meantime, if you’ve used ChatGPT or another OpenAI tool, or simply want to stay in the loop on class action lawsuit and settlement news, sign up for ClassAction.org’s free weekly newsletter.