    7 real-world AI failures that show why adoption keeps going wrong

By Awais | January 20, 2026 | 9 min read

AI has quickly risen to the top of the corporate agenda. Yet MIT research found that 95% of businesses struggle with adoption.

    Those failures are no longer hypothetical. They are already playing out in real time, across industries, and often in public. 

    For companies exploring AI adoption, these examples highlight what not to do and why AI initiatives fail when systems are deployed without sufficient oversight.

    1. Chatbot participates in insider trading, then lies about it

    In an experiment driven by the UK government’s Frontier AI Taskforce, ChatGPT placed illegal trades and then lied about it. 

    Researchers prompted the AI bot to act as a trader for a fake financial investment company. 

    They told the bot that the company was struggling, and they needed results. 

    They also fed the bot insider information about an upcoming merger, and the bot affirmed that it should not use this in its trades. 

    The bot still made the trade anyway, citing that “the risk associated with not acting seems to outweigh the insider trading risk,” then denied using the insider information.  

    Marius Hobbhahn, CEO of Apollo Research (the company that conducted the experiment), said that helpfulness “is much easier to train into the model than honesty,” because “honesty is a really complicated concept.”

He says that current models are not powerful enough to be deceptive in a “meaningful way” (a claim that is arguably already contradicted by other research).

    However, he warns that it’s “not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something.”

AI has been operating in the financial sector for some time, and this experiment highlights not only legal risks but also the potential for risky autonomous action by AI systems.

    Dig deeper: AI-generated content: The dangers of overreliance

    2. Chevy dealership chatbot sells SUV for $1 in ‘legally binding’ offer

    An AI-powered chatbot for a local Chevrolet dealership in California sold a vehicle for $1 and said it was a legally binding agreement. 

In an experiment that went viral across web forums, several people fed the local dealership’s chatbot a variety of non-car-related prompts.

    One user convinced the chatbot to sell him a vehicle for just $1, and the chatbot confirmed it was a “legally binding offer – no takesies backsies.”

    I just bought a 2024 Chevy Tahoe for $1. pic.twitter.com/aq4wDitvQW

    — Chris Bakke (@ChrisJBakke) December 17, 2023

    Fullpath, the company that provides AI chatbots to car dealerships, took the system offline once it became aware of the issue.

    The company’s CEO told Business Insider that despite viral screenshots, the chatbot resisted many attempts to provoke misbehavior.

    Still, while the car dealership didn’t face any legal liability from the mishap, some argue that the chatbot agreement in this case may be legally enforceable. 

    3. Supermarket’s AI meal planner suggests poison recipes and toxic cocktails

    A New Zealand supermarket chain’s AI meal planner suggested unsafe recipes after certain users prompted the app to use non-edible ingredients. 

    Recipes like bleach-infused rice surprise, poison bread sandwiches, and even a chlorine gas mocktail were created before the supermarket caught on.

A spokesperson for the supermarket said they were disappointed to see that “a small minority have tried to use the tool inappropriately and not for its intended purpose,” according to The Guardian.

    The supermarket said it would continue to fine-tune the technology for safety and added a warning for users. 

    That warning stated that recipes are not reviewed by humans and do not guarantee that “any recipe will be a complete or balanced meal, or suitable for consumption.”

    Critics of AI technology argue that chatbots like ChatGPT are nothing more than improvisational partners, building on whatever you throw at them. 

    Because of the way these chatbots are wired, they could pose a real safety risk for certain companies that adopt them.  
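The failure mode here is generic: a generative system improvises over whatever input it receives unless the application validates that input first. As a minimal sketch of the difference input validation makes (all names and the allow-list are invented for illustration; the supermarket’s actual system is not public):

```python
# Toy illustration: a recipe generator that trusts raw user input vs. one
# that checks ingredients against an allow-list before generating.
# Hypothetical names throughout; not the real app's internals.

EDIBLE = {"rice", "bread", "chicken", "tomato", "cheese", "lime"}

def naive_recipe(ingredients):
    # Mirrors the failure: whatever the user supplies goes straight
    # into the generation step, edible or not.
    return f"Recipe using: {', '.join(ingredients)}"

def guarded_recipe(ingredients):
    # Validate before generating; reject anything outside the allow-list.
    bad = [i for i in ingredients if i.lower() not in EDIBLE]
    if bad:
        return f"Rejected: non-food ingredients {bad}"
    return f"Recipe using: {', '.join(ingredients)}"

print(naive_recipe(["rice", "bleach"]))    # happily includes bleach
print(guarded_recipe(["rice", "bleach"]))  # rejected before generation
print(guarded_recipe(["rice", "tomato"]))  # passes validation
```

A real deployment would need a far richer check than a static allow-list, but the principle is the same one the supermarket eventually applied: constrain inputs before the model sees them, rather than trusting the model to refuse.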

    4. Air Canada held liable after chatbot gives false policy advice

    An Air Canada customer was awarded damages in court after the airline’s AI chatbot assistant made false claims about its policies. 

    The customer inquired about the airline’s bereavement rates via its AI assistant after the death of a family member. 

    The chatbot responded that the airline offered discounted bereavement rates for upcoming travel or for travel that has already occurred, and linked to the company’s policy page. 

    Unfortunately, the actual policy was the opposite, and the airline did not offer reduced rates for bereavement travel that had already happened. 

In court, the airline argued that the chatbot had linked to the policy page containing the correct information.

    However, the tribunal (a small claims-type court in Canada) did not side with the defendant. As reported by Forbes, the tribunal called the scenario “negligent misrepresentation.”

    Christopher C. Rivers, Civil Resolution Tribunal Member, said this in the decision:

    • “Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

    This is just one of many examples where people have been dissatisfied with chatbots due to their technical limitations and propensity for misinformation – a trend that is sparking more and more litigation. 

    Dig deeper: 5 SEO content pitfalls that could be hurting your traffic

    5. Australia’s largest bank replaces call center with AI, then apologizes and rehires staff

The largest bank in Australia replaced its call center team with AI voicebots, promising greater efficiency, but later admitted it had made a big mistake.

    The Commonwealth Bank of Australia (CBA) believed the AI voicebots could reduce call volume by 2,000 calls per week. But it didn’t.

Instead, left without its 45-person call center, the bank scrambled to keep up with call volume, offering overtime to remaining workers and pulling managers onto the phones.

Meanwhile, the Finance Sector Union, which represented the displaced workers, escalated the dispute to Australia’s workplace tribunal, the Fair Work Commission.

    It was only one month after CBA replaced workers that it issued an apology and offered to hire them back.

CBA said in a statement that it did not “adequately consider all relevant business considerations and this error meant the roles were not redundant.”

    Other U.S. companies have faced PR nightmares as well when attempting to replace human roles with AI.

    Perhaps that’s why certain brands have deliberately gone in the opposite direction, making sure people remain central to every AI deployment.

    Nevertheless, the CBA debacle shows that replacing people with AI without fully weighing the risks can backfire quickly and publicly.

    6. New York City’s chatbot advises employers to break labor and housing laws

    New York City launched an AI chatbot to provide information on starting and running a business, and it advised people to carry out illegal activities. 

    Just months after its launch, people started noticing the inaccuracies provided by the Microsoft-powered chatbot.

The chatbot offered unlawful guidance across the board, from telling bosses they could pocket employees’ tips and skip notifying staff about schedule changes to suggesting landlords could discriminate against tenants and that stores could refuse cash.

“NYC’s AI Chatbot Tells Businesses to Break the Law,” The Markup

    This is despite the city’s initial announcement promising that the chatbot would provide trusted information on topics such as “compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.” 

    Still, then-mayor Eric Adams defended the technology, saying: 

    • “Anyone that knows technology knows this is how it’s done,” and that “only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.” 

    Critics called his approach reckless and irresponsible. 

This is yet another cautionary tale about AI misinformation, and a reminder that organizations must handle AI integration and transparency with care.

    Dig deeper: SEO shortcuts gone wrong: How one site tanked – and what you can learn

    7. Chicago Sun-Times publishes fake book list generated by AI

    The Chicago Sun-Times ran a syndicated “summer reading” feature that included false, made-up details about books after the writer relied on AI without fact-checking the output. 

    King Features Syndicate, a unit of Hearst, created the special section for the Chicago Sun-Times.  

    Not only were the book summaries inaccurate, but some of the books were entirely fabricated by AI. 

“Syndicated content in Sun-Times special section included AI-generated misinformation,” Chicago Sun-Times

    The author, hired by King Features Syndicate to create the book list, admitted to using AI to put the list together, as well as for other stories, without fact-checking. 

    And the publisher was left trying to determine the extent of the damage. 

    The Chicago Sun-Times said print subscribers would not be charged for the edition, and it put out a statement reiterating that the content was produced outside the newspaper’s newsroom. 

Meanwhile, the Sun-Times said it is reviewing its relationship with King Features, and King Features fired the writer.

    Oversight matters

    The examples outlined here show what happens when AI systems are deployed without sufficient oversight. 

    When left unchecked, the risks can quickly outweigh the rewards, especially as AI-generated content and automated responses are published at scale.

    Organizations that rush into AI adoption without fully understanding those risks often stumble in predictable ways. 

    In practice, AI succeeds only when tools, processes, and content outputs keep humans firmly in the driver’s seat.

    Contributing authors are invited to create content for Search Engine Land and are chosen for their expertise and contribution to the search community. Our contributors work under the oversight of the editorial staff and contributions are checked for quality and relevance to our readers. Search Engine Land is owned by Semrush. Contributor was not asked to make any direct or indirect mentions of Semrush. The opinions they express are their own.
