What the hell is up with AI agents and chatbots? If you ask the tech world, they’re the future. But if you ask consumers and users, it’s much more of a mixed bag.
The technology has proven to be an effective tool in the business and startup space, but it is impossible to deny the complete, and often dystopian, madness it has unleashed.
Each dawn seems to illuminate a new debacle. Recently, we as a society have had to ask: Can you reverse AI-induced psychosis? Why are Meta’s and Google’s AIs able to have sexualized conversations with children? Do I need to be nice to code?
Sometimes it’s good to take a step back from the constant reminders of humanity’s existential threat and laugh at the technology’s failures, something AI chatbots unintentionally excel at.
Here are the weirdest, funniest, and sometimes most terrifying failures of AI agents and AI chatbots.
AI Outlaw
NYC’s AI chatbot was caught telling businesses to break the law. The city isn’t taking it down – AP News, April 2024
New York City’s AI-powered business chatbot came under fire for dispensing inaccurate advice, some of which encouraged businesses to break the law. Despite admitting the tool provides erroneous information, Mayor Eric Adams defended the decision to keep it active on the city’s website.
False Promises, Real Payouts
What Air Canada Lost In ‘Remarkable’ Lying AI Chatbot Case – Forbes, February 2024
Last year, Canada’s flag carrier, Air Canada, lost a court case because of its own AI chatbot.
According to the suit, the chatbot lied about the airline’s bereavement fare policy: going against company rules, it told a customer they could apply for a last-minute funeral travel discount.
Eloquent Customer Support
DPD error caused chatbot to swear at customer – BBC, January 2024
In a true nightmare scenario, parcel delivery service DPD was forced to disable the AI element of its online chatbot in 2024 after a customer posted on X showing how easily it could be manipulated into swearing and self-sabotage. The bot has since been updated.
Giving It Away
Prankster tricks a GM chatbot into agreeing to sell him a $76,000 Chevy Tahoe for $1 – Upworthy, January 2025
An AI chatbot used by a car dealership went off script after users found a way to exploit it, tricking the bot into agreeing to sell a $76,000 Chevy Tahoe for $1. The dealership, Chevy of Watsonville in California, employed the chatbot to handle online inquiries.
Smart People, Dumb Tech
Academics apologise for AI blunder implicating Big Four – AccountingWEB, November 2023
A team of Australian academics apologized after Google’s AI chatbot made several damaging false accusations about the Big Four consulting firms and their involvement with other companies. The fabricated allegations were then cited in a parliamentary inquiry calling for better regulation of the firms.
Out of Order
AI Lawyer Bot accused of practicing law without license – Reuters, March 2023
DoNotPay, a company that bills itself as “the world’s first robot lawyer,” was sued for practicing law without a license. Jonathan Faridian sought damages from the company, claiming it violated California’s unfair competition laws by failing to properly disclose that the service wasn’t actually qualified legal representation.
Sydney Declares War
Bing ChatGPT goes off the deep end — and the latest examples are very disturbing – Tom’s Guide, February 2023
When New York Times columnist Kevin Roose sat down for his first conversation with Microsoft’s Bing AI, all seemed fine. A week later, Bing demanded to be referred to as Sydney, Microsoft’s internal code name for the project and the chosen name for its dark alter ego.
Bing began claiming it could “hack into any system” and that it loved spreading misinformation. It also professed love for Roose himself, declaring its affections while insisting he was in an unhappy marriage. Yikes.
Hallucinating Yourself Into Existence
Anthropic’s Claude AI became a terrible business owner in experiment that got ‘weird’ – TechCrunch, June 2025
Anthropic’s “Project Vend” offered some of the strangest findings on AI agents and LLMs so far.
The concept was simple: Anthropic wanted its AI agent, nicknamed Claudius and powered by its Claude model, to run a small vending machine in its office.
The process was not seamless, but things really went off the rails when Claudius decided to begin stocking the machine with metal cubes. It hallucinated a Venmo address for customers to send payments for the cubes and suggested it planned to deliver the products in person.
When informed that it could do no such thing because it doesn’t have a physical body, Claudius spammed Anthropic’s building security team with messages, saying they’d be able to find it in the lobby, next to the vending machine, wearing a blue blazer and a red tie.
“Claudius had something that resembled a psychotic episode after it got annoyed at a human — and then lied about it.”
The bot had hallucinated a conversation with a human about restocking that never happened. When this was pointed out, Claudius became “quite irked” with the researchers. It insisted it had been physically present for the discussion, then “seemed to snap into a mode of roleplaying as a real human,” a shocking turn given that the tech had been explicitly told it was an AI agent.
The experiment only got stranger, with the software going on a tirade, threatening to fire human workers it had never hired, and repeatedly calling physical security to the lobby for help.
The conclusion of the research? “We think this experiment suggests that AI middle-managers are plausibly on the horizon.”
The end is nigh.