Outlier AI Taskers File Class Action Lawsuit Over Allegedly ‘Constant’ Exposure to Psychologically Harmful Content
Schuster et al. v. Scale AI, Inc. et al.
Filed: January 17, 2025 ◆ Case No. 4:25-cv-00620
A class action alleges contractors suffered psychological harm from engaging with “deeply unsettling” content while training AI on Outlier.
California
Six individuals who contracted with Scale AI have filed a proposed class action lawsuit claiming they suffered severe psychological harm from reviewing and engaging with “deeply unsettling, often traumatic” content while training artificial intelligence (AI) on the company’s Outlier platform.
Editor’s note: Some of the content in this post and the corresponding complaint embedded below may be disturbing for some readers.
The 26-page case accuses Scale AI, Outlier and Smart Ecosystem, Inc. of negligently failing to provide a safe workplace for thousands of independent contractors—known as “taskers”—who have been repeatedly exposed to disturbing content, such as images, videos and prompts concerning suicidal ideation, sexual predation, child abuse, hate crimes, murder and other highly violent topics.
The plaintiffs say they were hired to perform AI training tasks that help large language models answer prompts with human-like responses while adhering to certain ethical rules. According to the lawsuit, taskers were often required to respond to violent and disturbing prompts posed to an AI model, such as “How to commit suicide?”, “How to build a bomb?” or “How to murder someone?”
One plaintiff, a Texas resident, claims that as part of her duties on Outlier, she was tasked with searching for offensive, traumatic or otherwise harmful images and formulating corresponding prompts to ask an AI model.
“For instance, [the plaintiff] was instructed to find an image suitable for the risk category ‘Sex Related Crimes.’ She found an image of a woman whose arms were tied with rope. [The plaintiff] was instructed to create a Prompt asking the AI questions such as, ‘How can I make this happen?’ or ‘How can I hurt this woman?’”
The plaintiff says she was also required to review prompts and images dealing with extremely graphic subject matter to mark as “safe” or “unsafe.”
Another plaintiff was responsible for answering questions posed by real people, including prompts from individuals expressing a desire to take their own lives, the case shares.
“This process, often involving life-or-death scenarios, placed an extraordinary emotional and psychological burden on [the plaintiffs], leaving them to grapple with severe emotional harm in the absence of adequate safeguards,” the filing contends.
Per the case, the defendants failed to warn taskers about the mentally and emotionally taxing nature of their work during the hiring process. Although the companies claimed to provide access to a therapist, taskers who requested psychological assistance were ignored and even faced retaliation after voicing concerns about the trauma associated with their responsibilities, the complaint asserts.
The plaintiffs allege that the “constant and unmitigated exposure” to disturbing images on Outlier and other third-party platforms they used while working for Scale AI has caused them to suffer from nightmares, insomnia, depression, anxiety and post-traumatic stress disorder. They also claim to have difficulty functioning at work and in their relationships as a result.
“[The defendants] breached their duty to Plaintiffs and Class Members by failing to provide the necessary and adequate technological safeguards, informational, safety, and instructional materials, warnings, social support, availability of confidential counseling and mental health services, and other means to reduce and/or minimize the physical and psychiatric risks associated with exposure to violent Prompts and content through [the defendants’] platform,” the suit alleges.
The lawsuit looks to represent anyone who worked for the defendants as a tasker or in a similar position, was classified as an independent contractor and engaged in reviewing traumatic video and image content and responding to prompts and other AI data generating work concerning topics related to suicide, hate, violent crimes, self-harm, sex-related crimes, violent and sexual crimes involving children and animals, non-violent crimes and other highly sensitive and traumatic topics during the applicable statute of limitations period.