Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and the company ignored her warnings

After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he’d discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the chatbot to stalk and harass his ex-girlfriend.

Now the ex-girlfriend is suing OpenAI, alleging the company’s technology enabled the escalation of her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the man posed a danger to others, including an internal flag classifying his account activity as involving mass-casualty weapons.

The plaintiff, identified as Jane Doe to protect her identity, is suing for punitive damages. She also filed a temporary restraining order Friday asking the court to force OpenAI to block the man’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery.

OpenAI has agreed to suspend the man’s account but has refused the rest, according to Doe’s attorneys. They say the company is withholding information about specific plans for harming Doe and other potential victims that the man may have discussed with ChatGPT.

The lawsuit lands amid growing concern over the real-world dangers of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February.

The case is brought by Edelson PC, the firm behind the wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google’s Gemini fueled his delusions and a potential mass-casualty event before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events.

That legal pressure is now colliding directly with OpenAI’s legislative strategy: The company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm.


OpenAI did not respond in time for comment. TechCrunch will update this article if the company responds.

The Jane Doe lawsuit lays out in detail how that liability played out for one woman over several months.

Last year, the ChatGPT user in the lawsuit (whose name isn’t included in the filing to protect his identity) became convinced that he had invented a cure for sleep apnea after months of “high volume, sustained use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, including using helicopters to surveil his movements, according to the complaint.

In July 2025, Jane Doe urged him to stop using ChatGPT and to seek help from a mental health professional. He instead turned back to ChatGPT, which assured him he was “a level 10 in sanity” and helped him double down on his delusions, according to the lawsuit.

Doe had broken up with the man in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took those AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in numerous AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.

Meanwhile, the man continued to spiral. In August 2025, OpenAI’s automated safety system flagged him for “Mass Casualty Weapons” activity and deactivated his account.

A human safety team member reviewed the account the next day and restored it, even though his account may have contained evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the man sent to Doe showed a list of conversation titles including “violence list expansion” and “fetal suffocation calculation.”

The decision to reinstate the account is notable following two recent school shootings, in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. Florida’s attorney general this week opened an investigation into OpenAI’s possible link to the FSU shooter.

According to the Jane Doe lawsuit, when OpenAI restored her stalker’s account, his Pro subscription wasn’t reinstated along with it. He emailed the trust and safety team to sort it out, copying Doe on the message.

In his emails, he wrote things like: “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed he was “in the process of writing 215 scientific papers,” which he was producing so fast he didn’t “even have time to read” them. Included in those emails was a list of dozens of AI-generated “scientific papers” with titles like: “Deconstructing Race as a Biological Category_ Legal, Medical, and Horn of Africa Perspectives.pdf.txt.”

“The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating behavior,” the lawsuit states. “The user’s stream of urgent, disorganized, and grandiose claims, together with a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported ‘scientific’ materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access.”

Doe, who claims in the lawsuit that she was living in fear and could not sleep in her own home, submitted a Notice of Abuse to OpenAI in November.

“For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise,” Doe wrote in her letter to OpenAI requesting that the company permanently ban the man’s account.

OpenAI responded, acknowledging the report was “extremely serious and troubling” and that it was carefully reviewing the information. Doe never heard back.

Over the next few months, the man continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe’s attorneys allege this validates warnings both she and OpenAI’s own safety systems had raised months earlier, warnings the company allegedly chose to ignore.

The man was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released to the public, according to Doe’s attorneys.

Edelson called on OpenAI to cooperate. “In every case, OpenAI has chosen to hide critical safety information from the public, from victims, from the people its product is actively putting in danger,” he said. “We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”

