Scale AI is facing its third lawsuit over alleged labor practices in just over a month, this time from workers claiming they suffered psychological trauma from reviewing disturbing content without adequate safeguards.
Scale, which was valued at $13.8 billion last year, relies on workers it categorizes as contractors to do tasks like rating AI model responses.
Earlier this month, a former worker sued alleging she was effectively paid below the minimum wage and misclassified as a contractor. A complaint alleging similar issues was also filed in December 2024.
This latest complaint, filed January 17 in the Northern District of California, is a class action complaint that focuses on the psychological harms allegedly suffered by six people who worked on Scale’s platform Outlier.
The plaintiffs claim they were forced to write disturbing prompts about violence and abuse – including child abuse – without proper psychological support, and that they suffered retaliation when they sought mental health counsel. They say they were misled about the job’s nature during hiring and ended up with mental health issues like PTSD as a result of their work. They are seeking the creation of a medical monitoring program along with new safety standards, plus unspecified damages and attorney fees.
One of the plaintiffs, Steve McKinney, is the lead plaintiff in that separate December 2024 complaint against Scale. The same law firm, Clarkson Law Firm of Malibu, California, is representing plaintiffs in both complaints.
Clarkson Law Firm previously filed a class action suit against OpenAI and Microsoft over allegedly using stolen data, a suit that was dismissed after being criticized by a district judge for its length and content. Referencing that case, Joe Osborne, a spokesperson for Scale AI, criticized Clarkson Law Firm and said Scale plans “to defend ourselves vigorously.”
“Clarkson Law Firm has previously – and unsuccessfully – gone after innovative tech companies with legal claims that were summarily dismissed in court. A federal court judge found that one of their previous complaints was ‘needlessly long’ and contained ‘largely irrelevant, distracting, or redundant information,’” Osborne told TechCrunch.
Osborne said that Scale complies with all laws and regulations and has “numerous safeguards in place” to protect its contributors, like the ability to opt out at any time, advance notice of sensitive content, and access to health and wellness programs. Osborne added that Scale does not take on projects that may include child sexual abuse material.
In response, Glenn Danas, partner at Clarkson Law Firm, told TechCrunch that Scale AI has been “forcing workers to view gruesome and violent content to train these AI models” and has failed to ensure a safe workplace.
“We must hold these big tech companies like Scale AI accountable or workers will continue to be exploited to train this unregulated technology for profit,” Danas said.