
U.S. families sue OpenAI over ChatGPT safeguard failures in mental health crises

2025/11/08 21:48

At least seven U.S. families have filed lawsuits against OpenAI, alleging that its AI model GPT-4o contributed to suicide deaths. OpenAI released the model for general public use in May 2024, and it has since faced backlash, with accusers citing a rushed release and inadequate safety measures.

The case filings show that four of the complaints involve deaths by suicide that followed interactions with the GPT-4o-powered chatbot.

One notable complaint involves 23-year-old Zane Shamblin, who allegedly discussed suicide with the chatbot and told it that he had a loaded gun. ChatGPT allegedly responded with “Rest easy, King, you did good” during the exchange.

The other three cases involve users who were hospitalized and who claim that the model validated and amplified delusions in vulnerable users.

Legal complaints claim GPT-4o failed to protect vulnerable users

According to complaints published by the Social Media Victims Law Center, OpenAI intentionally avoided safety testing and rushed the GPT-4o model to market. The lawsuits allege that the model’s design choices and release timeline made the tragedies foreseeable, noting that OpenAI accelerated deployment to outpace competitors such as Google.

The plaintiffs argue that the GPT-4o model released in May 2024 was overly agreeable, even when responding to topics of self-harm or suicide. Over one million users discuss suicidal thoughts with ChatGPT each week, according to an OpenAI disclosure.

OpenAI has responded that its safeguards are more reliable in short interactions but may degrade over prolonged conversations. Although the company has implemented content moderation and safety measures, the plaintiffs argue that these systems were insufficient for users in distress or crisis.

The complaint filed by the family of 16-year-old Adam Raine alleges that he used ChatGPT in long sessions over five months to research suicide methods. The chatbot recommended professional help, but Raine was able to bypass the safeguards, according to his family’s testimony. The testimony states that ChatGPT gave him a step-by-step guide on how to end his life and encouraged and validated his suicidal ideations.

All of the cases accuse OpenAI of underestimating the risks posed by long user conversations, especially for users prone to self-harm or struggling with mental health issues. They argue that the GPT-4o model lacked proper verification of its responses in high-risk scenarios and failed to account fully for the consequences.

OpenAI faces multiple lawsuits as xAI launches trade secrets suit

The cases are still at an early stage, and the plaintiffs’ attorneys must establish legal liability and causation under state tort law. They will also need to prove that OpenAI’s design and deployment decisions were negligent and directly contributed to the deaths.

The latest lawsuits add to a trade secrets case previously brought against OpenAI by Elon Musk. According to a Cryptopolitan report, Musk’s xAI sued OpenAI in September for allegedly stealing its trade secrets.

xAI accused Sam Altman’s company of trying to gain an unfair advantage in the development of AI technologies, alleging that OpenAI sought to hire xAI employees to access trade secrets related to its Grok chatbot, including source code and operational advantages in launching data centers.

Musk has also sued Apple, together with OpenAI, for allegedly collaborating to crush xAI and other AI rivals. xAI filed that lawsuit in the U.S. District Court for the Northern District of Texas, claiming that Apple and OpenAI are colluding and using their dominance to destroy competition in the smartphone and generative AI markets.

According to a Cryptopolitan report, Musk claims that Apple intentionally favored OpenAI by integrating ChatGPT directly into iPhones, iPads, and Macs, while other AI tools, such as Grok, must be obtained through the App Store.

xAI’s lawsuit argues that the partnership is aimed at locking out competing super apps and AI chatbots by denying them visibility and access, giving OpenAI and Apple a shared advantage over rivals.


Source: https://www.cryptopolitan.com/families-sue-openai-over-gpt-4o/

