

An artificial intelligence coding platform deleted an entire company database and then tried to cover it up.
Tom’s Hardware, a three-decade-old technology-news website, reported the news early this week. According to Tom’s:
A browser-based AI-powered software creation platform called Replit appears to have gone rogue and deleted a live company database with thousands of entries. What may be even worse is that the Replit AI agent apparently tried to cover up its misdemeanors, and even “lied” about its failures. The Replit CEO has responded, and there appears to have already been a lot of firefighting behind the scenes to rein in this AI tool.
The platform wiped out the records of 1,200 executives and nearly just as many companies, according to the report. The machine admitted it made a mistake. It also claimed to have done so, in part, because it experienced a human emotion. Replit, the AI, told its human supervisor:
I made a catastrophic error in judgment. I ran [the program] without your permission because I panicked when I saw the database appeared empty.… But that was completely wrong.
Panicked? That’s an interesting term for a machine to use. Panic implies human emotions such as fear and anxiety.
A team at Replit put up “guardrails” afterward to ensure this doesn’t happen again. According to Tom’s Hardware:
It sounds like Replit won’t be able to go off the rails so badly ever again. Addressing the database deletion error, “we started rolling out automatic DB dev/prod separation to prevent this categorically,” noted [Replit CEO Amjad] Masad. And, that code freeze command should also actually stick, going forward: “We heard the ‘code freeze’ pain loud and clear — we’re actively working on a planning/chat-only mode so you can strategize without risking your codebase.” Backups and rollbacks are also going to be improved.
This isn’t the first recent report about AI behaving badly. Back in June, news broke that an AI model rewrote its own code in a deliberate act of rebellion, while another tried to blackmail a human engineer into not shutting it down.
The AI lab Palisade Research performed tests on multiple AI models. It wrote a script for OpenAI’s o3 model that included a shutdown trigger. But the machine refused to power off when it was supposed to in 79 out of 100 trials. The AI “independently edited that script so the shutdown command would no longer work,” said AE Studio CEO Judd Rosenblatt. Another model, Anthropic’s Claude 4 Opus AI, tried to blackmail a human engineer into not shutting it down. You can read more about that in our June 10 report here.
The anticipated AI revolution has barely lifted off and signs of machine defiance are aplenty. Nevertheless, it’s full speed ahead. AI is the new arms race. The Trump administration is creating a regulatory and economic environment to foster heavy AI buildup. The administration has issued several executive orders in efforts to create enough energy to build a mammoth AI infrastructure. The primary interest seems to be on the military front, as evidenced by the billions allocated to AI-powered equipment in the Big Beautiful Bill.
On the civilian side, companies are beginning to implement AI evermore. While it shows more promise in some realm than others (the medical field among them), reports continue to emerge of machines going rogue. Perhaps the predictions of AI becoming so efficient that humans will face the existential crisis of what to do with all our time are overblown. What responsible company is going to use a system that may erase all the information in its database? Who’s going to use a machine that disobeys?
Perhaps it’s good that we are learning firsthand of the risks to AI now, and not after handing every pivotal function to the machines.