After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he had discovered a cure for sleep apnea and that powerful people were after him, according to a new lawsuit filed in the Superior Court of California, County of San Francisco. He then allegedly used the tool to stalk and harass his ex-girlfriend.
Now his ex-girlfriend is suing OpenAI, alleging that the company’s technology accelerated her harassment, TechCrunch has learned exclusively. The lawsuit alleges that OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag that classified his account activity as involving mass-casualty weapons.
The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed for a temporary restraining order on Friday asking the court to force OpenAI to block the user’s account, prevent him from creating new accounts, notify her if he tries to access ChatGPT, and preserve full chat logs for discovery.
OpenAI agreed to suspend the user’s account but rejected the other requests, according to Doe’s attorneys, who say the company is withholding information about specific plans to harm Doe and other potential victims that the user may have discussed with ChatGPT.
The lawsuit comes amid growing concern about the real-world dangers of sycophantic AI systems. GPT-4o, the model at issue in this and several other cases, was pulled from ChatGPT in February.
The case was brought by Edelson PC, the firm behind the wrongful death claims involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family claims Google’s Gemini fueled his delusions and a potential mass casualty event before his death. Lead attorney Jay Edelson warns that AI-induced psychosis is escalating from individual harm to events that could lead to mass casualties.
That legal pressure now collides directly with OpenAI’s legislative strategy: the company supports an Illinois bill that would protect AI labs from liability even in cases involving mass deaths or catastrophic financial damage.
OpenAI did not respond to a request for comment in time for publication. TechCrunch will update this article if the company responds.
Jane Doe’s lawsuit details how these alleged failures affected one woman over the course of several months.
Last year, the ChatGPT user at the center of the lawsuit (whose name is not included in the complaint to protect his identity) became convinced he had invented a cure for sleep apnea after months of “continuous high-volume use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were monitoring him, including with helicopters, according to the complaint.
In July 2025, Jane Doe urged him to stop using ChatGPT and seek help from a mental health professional. Instead, he returned to ChatGPT, which assured him he was a “level 10 mental health officer” and helped him double down on his delusions, according to the lawsuit.
Doe broke up with the user in 2024, and he used ChatGPT to process the breakup, according to emails and communications cited in the lawsuit. Instead of pushing back on his one-sided narrative, the chatbot repeatedly portrayed him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her, most visibly through the numerous AI-generated “clinical psychological” reports about her that he distributed to her family, friends, and employer.
Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated safety system flagged him for “mass casualty weapons” activity and deactivated his account.
A human member of the safety team reviewed the account the next day and reinstated it, even though the account may have contained evidence suggesting he was targeting and stalking individuals, including Doe, in real life. For example, a screenshot the user sent to Doe in September showed a list of chat titles that included “Expand Violence List” and “Fetal Asphyxiation Account.”
The decision to reinstate his account is notable in light of two recent school shootings, in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s safety team reportedly flagged the Tumbler Ridge shooter as a potential threat, but senior leadership is said to have decided not to alert authorities. This week, Florida’s attorney general opened an investigation into a possible link between OpenAI and the FSU shooter.
According to Jane Doe’s lawsuit, when OpenAI restored her stalker’s account, his Pro subscription was not reinstated along with it. He emailed the Trust and Safety team to resolve the issue, copying Doe on the messages.
In his emails, he wrote things like: “I need help very quickly, please. Please call me!” and “This is a matter of life and death.” He claimed that he was “in the process of writing 215 scientific papers,” and was writing them so quickly that he “didn’t even have time to read them.” Those emails included a list of dozens of AI-generated “scientific papers” with titles such as: “Deconstructing Race as a Biological Category_Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”
“The user’s communications provided unambiguous notice that he was mentally unstable and that ChatGPT was the driver of his delusional thinking and escalating behavior,” the lawsuit states. “The influx of urgent, disorganized, and exaggerated user claims, coupled with a concrete report generated by ChatGPT targeting the plaintiff by name and a sprawling collection of purportedly ‘scientific’ materials, was unequivocal evidence of this reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and regain his full access to Pro.”
Doe, who claims in the lawsuit that she was living in fear and unable to sleep in her own home, sent OpenAI a formal notice in November.
“Over the past seven months, he has used this technology as a weapon to cause public destruction and humiliation against me, which would otherwise be impossible,” Doe wrote in her letter to OpenAI asking the company to permanently ban the user’s account.
OpenAI responded, acknowledging that the report was “serious and extremely concerning” and saying it was carefully reviewing the information. Doe never heard back, according to the lawsuit.
Over the next two months, the user continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of making bomb threats and assault with a deadly weapon. Doe’s lawyers claim this validates warnings she and OpenAI’s safety systems issued months ago, warnings the company allegedly chose to ignore.
The user was found incompetent to stand trial and was committed to a psychiatric facility, but a “procedural failure by the state” means he will soon be released to the public, according to Doe’s attorney.
Edelson called on OpenAI to cooperate. “In each case, OpenAI chose to hide critical safety information — from the public, from victims, and from the people its product actively put at risk,” he said. “We’re calling on them, for once, to do the right thing. Human lives should mean more than just OpenAI’s race to IPO.”