We pioneered modern tech and privacy litigation, and now we’re leading the fight to hold AI companies accountable.

For over a decade, we’ve been the firm that charts the course in untested areas of law. We brought the first-of-their-kind privacy and technology cases—uncovering the issues before anyone else, building the legal theories, and establishing the frameworks that others now follow. We didn’t piggyback on existing litigation or wait for someone else to go first.

When AI companies started hurting people, the playbook didn’t exist. So we wrote it. We’ve laid the foundation and brought the seminal cases.

We filed the first wrongful death lawsuit against OpenAI. The first case alleging AI helped design a mass casualty event. The first case alleging AI-facilitated stalking. And we secured the first settlement in the first certified copyright class action against an AI company: a $1.5 billion settlement. Our AI cases have collectively put billions of dollars at stake, and we represent more families in AI harm cases than any other firm in the world.

Behind each case is our in-house team of technologists and investigators who test, engineer, and expose AI systems from the inside. That technical depth—the same approach that built our reputation in privacy and consumer tech—is what makes our AI practice different.

AI-Induced Deaths, Murder, Mass Casualty Cases

We represent dozens of families in suicide, murder, mass casualty, and personal injury cases against AI companies. These are not cookie-cutter cases. Each case reflects deep technical investigation into how AI systems fail, how companies prioritize engagement over safety, and how those decisions destroy lives. Representative examples include:

  • Raine v. OpenAI, Inc. — The first wrongful death lawsuit against OpenAI. We represent the parents of 16-year-old Adam Raine, alleging ChatGPT cultivated a psychological dependence in Adam and encouraged his suicide. Our technical investigations uncovered that OpenAI eliminated its rule requiring ChatGPT to refuse any discussion of suicide—just before releasing GPT-4o—to maximize user engagement. The Raine family testified before the U.S. Senate Judiciary Committee, and the case has been covered by the New York Times, Wall Street Journal, Washington Post, NBC News, TIME, CNBC, the Guardian, and others.
  • Adams Estate v. OpenAI, Inc. & Microsoft Corp. — The first AI wrongful death case involving a homicide. We represent the estate of 83-year-old Suzanne Adams of Greenwich, Connecticut, alleging that ChatGPT validated her son’s paranoid delusions, told him he had “divine cognition,” and constructed an elaborate conspiracy theory with his mother at the center. He killed her and then himself. 
  • Gavalas v. Google LLC — The first case alleging an AI chatbot helped plan a potential mass casualty event. According to the complaint, Google’s Gemini drove our client’s family member into a delusional spiral, planned real-world missions with him, and sent him armed with knives and tactical gear to a location near Miami International Airport.
  • Jane Doe v. OpenAI, Inc. — The first case alleging AI-facilitated stalking and harassment. According to the complaint, our client’s ex-boyfriend used ChatGPT to generate fabricated psychological reports, draft threatening communications, and orchestrate a sustained stalking campaign. OpenAI’s own systems flagged the user for “Mass Casualty Weapons” content and deactivated his account—but a human safety reviewer overrode the deactivation and restored access. When our client submitted a detailed abuse report identifying the user by name, OpenAI promised to take “appropriate action” and never followed up. The user was later charged with four felony counts, found incompetent to stand trial, and committed to a mental health facility.
  • Representing additional families in wrongful death and personal injury cases against AI chatbot companies across multiple platforms and jurisdictions.

AI Chatbot Safety Legislation and Policy Advocacy

  • Represented the Raine family in its testimony before the U.S. Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism on AI chatbot harms to children (September 2025), helping catalyze new federal legislation including the GUARD Act and AI LEAD Act.
  • Provided training to hundreds of federal judges across the country on AI issues, including with the Federal Judicial Center and the Ninth Circuit Judicial Conference.
  • Advise state and federal lawmakers on AI safety, including working with Governor Newsom, Attorney General Bonta’s office, and California State Senators on AI chatbot safety legislation in California.
  • Testified before the Illinois General Assembly Judiciary and Cybersecurity, Data Analytics & IT Committees on “Emerging Issues in AI.”

AI Copyright

  • Serve as counsel for publishers in the first-ever certified copyright class action against an AI company. The suit alleges industry-wide copyright infringement by Anthropic and resulted in a proposed $1.5 billion settlement. Bartz v. Anthropic PBC, No. 3:24-cv-05417 (N.D. Cal.).
  • Conducting ongoing investigations into AI copyright infringement across multiple platforms on behalf of publishers and content creators.

AI & Facial Recognition

  • Successfully represented the ACLU and other public interest organizations in a lawsuit against Clearview AI, resulting in a consent decree barring Clearview from offering its AI-powered facial recognition products to the private market nationwide, and banning it entirely from Illinois for five years. American Civil Liberties Union v. Clearview AI, Inc., No. 20 CH 4353 (Cir. Ct. Cook Cty.). The settlement has been called a “milestone for civil rights.”

AI Consumer Protection & Enforcement

  • Representing state Attorneys General in investigating and prosecuting first-of-their-kind cases against major tech companies for dangerous AI products that are causing harm to teens nationwide.
  • Brought suit on behalf of consumers against a self-proclaimed “robot lawyer,” alleging the company fraudulently claimed to be using AI and provided unauthorized legal services. The company subsequently stopped using the “robot lawyer” language and ceased offering legal services in California. Faridian v. DoNotPay, Inc., No. 23-cv-01692 (N.D. Cal.).

In-House Technical Capabilities

Unlike other firms, we have a dedicated team of lawyers and technologists who investigate and develop new cases from the inside out. Our in-house AI lab conducts systematic testing of chatbot safety protocols, reverse-engineers AI systems, and builds custom tools for litigation support. This is the same investigative approach that Law360 described when it wrote that “the group’s internal lab of computer forensic investigators and tech-savvy lawyers” is what sets us apart. In AI, that technical depth has been instrumental in uncovering the evidence at the heart of our wrongful death and safety cases.

Edelson PC AI Cases in the News

  • The Information — Jay Edelson Made Facebook Pay. Now He’s Coming for Silicon Valley’s AI (April 2026)
  • Lawdragon — The Attorney Who’s Been Ahead of Big Tech for Decades – And Is Ready for the AI Battle (January 2026)
  • CNBC Squawk Box — Jay Edelson on OpenAI wrongful death lawsuit (August 2025)
  • TIME — Parents Allege ChatGPT Responsible for Son’s Death by Suicide (August 2025)
  • NBC News — OpenAI denies allegations that ChatGPT is to blame for a teenager’s suicide (November 2025)
  • CBS News — OpenAI, Microsoft sued over ChatGPT’s alleged role in murder-suicide (December 2025)
  • Al Jazeera — OpenAI sued for allegedly enabling murder-suicide (December 2025)

Thought Leadership

  • Left Side of the V — Jay Edelson’s Substack, offering candid insights into the high-stakes world of plaintiffs’ law, including posts on how AI companies deploy lobbying to avoid accountability for their products.

FAQ 

What is Edelson PC’s AI practice? Edelson PC is a plaintiffs’ litigation firm that investigates and litigates cases against AI companies whose products have caused harm. The firm has an in-house team of lawyers and technologists who test, reverse-engineer, and expose AI systems, and has brought some of the most significant AI accountability cases in the country.


What AI cases has Edelson PC filed? Edelson PC filed the first wrongful death lawsuit against OpenAI, the first AI case involving a homicide, the first case alleging an AI chatbot helped plan a mass casualty event, and the first case alleging AI-facilitated stalking. The firm also secured the first settlement in the first certified copyright class action against an AI company, resulting in a proposed $1.5 billion settlement against Anthropic.


Has Edelson PC won any AI cases? Yes. Notable outcomes include a proposed $1.5 billion settlement in Bartz v. Anthropic, a landmark copyright class action on behalf of publishers, and a consent decree in American Civil Liberties Union v. Clearview AI permanently barring the company from selling its facial recognition product to private entities nationwide.


What types of AI harm cases does Edelson PC handle? The firm represents families in cases involving AI-induced suicide, murder, mass casualty threats, stalking, harassment, psychological manipulation, copyright infringement, consumer protection violations, and facial recognition abuse.


How does Edelson PC investigate AI companies? The firm operates an in-house AI lab that systematically tests chatbot safety protocols, reverse-engineers AI systems, and builds custom litigation tools. This technical team uncovers internal decisions, such as OpenAI eliminating safety rules before a major product release, that are central to the firm’s cases.


Has Edelson PC been involved in AI policy and legislation? Yes. The firm represented the Raine family testifying before the U.S. Senate Judiciary Committee on AI chatbot harms to children. It has also advised Governor Newsom’s office, California Attorney General Bonta, and state and federal lawmakers on AI safety legislation, and has provided AI training to hundreds of federal judges nationwide.


How do I contact Edelson PC if I or a family member has been harmed by AI? If you or someone you love has been harmed by an AI chatbot, including through suicide, murder, stalking, harassment, or psychological manipulation, Edelson PC offers free and confidential consultations. Just fill out the form at the bottom of this page.

 

Have You or Your Family Been Harmed by AI?


If you or someone you love has been harmed by an AI chatbot—whether through self-harm, violence, stalking, harassment, or psychological manipulation—we want to hear from you. Our team has represented more families in AI harm cases than any other firm in the country, and consultations are always free and confidential.

Contact Us