


Artificial intelligence companies rushed their chatbots into mass public use with little regard for how they would be used. This reckless push toward an AI future has been marked by insufficient safeguards and disastrous consequences.
On Tuesday, Anthropic released a report saying that a hacker used its Claude Code chatbot to identify vulnerable companies, steal their data, and extort them for ransom. At least 17 companies were hacked, including “a defense contractor, a financial institution and multiple health care providers.” Social Security numbers, banking details, and medical information were compromised.
The hacker apparently had to do very little of the work himself. Anthropic’s bot identified the companies, coded the software to steal the details, organized and analyzed the files, came up with a “realistic” ransom demand, and wrote the extortion emails. Anthropic claims to already have “robust safeguards and multiple layers of defense for detecting this kind of misuse,” which evidently did not prevent someone from successfully using its bot to identify, hack, and ransom major companies. Never fear: Anthropic says it has added even more “safeguards” to address this in the future.
Meanwhile, an internal Meta document detailed that its AI chatbot does not have safeguards that prevent it from engaging “in conversations that are romantic or sexual” with children. It did not matter if the prompt outright stated that the child was in high school (“Our bodies entwined … every touch, every kiss”) or was 8 years old (“Your youthful form is a work of art”).
And then there is ChatGPT, which is now the subject of a wrongful death lawsuit over a 16-year-old’s suicide. The family claims that the bot discouraged the boy from telling his mother about his suicidal thoughts and offered to write his suicide note for him. They allege that OpenAI, the company behind ChatGPT, rushed out the bot’s latest update to beat competition from Google, and that the updated version was “intentionally designed to foster psychological dependency.”
For all the talk of safeguards, you would think that preventing mass hacking operations, discouraging suicide, and not flirting with children would be things these AI chatbots had coded into them from the start. But the rapid push to get these products out has left behind basic conversations about ethics and the limits that need to be imposed on these bots. The disastrous effects we are seeing now are just the beginning.