When AI Went Wrong: A Look at Missteps in Artificial Intelligence


Artificial Intelligence (AI) has been heralded as a transformative force across a vast array of sectors. Despite AI’s incredible potential and notable achievements, it is not infallible. It can go wrong, and it has, on several occasions. From chatbots spouting hate speech to autonomous vehicles causing fatal accidents, AI has had its share of failures. Let’s delve into a few key instances where AI fell short of expectations and, at times, produced surprising, if not outright alarming, results.

1. Microsoft’s Tay Chatbot

The Microsoft Tay chatbot is probably one of the most infamous examples of AI gone wrong. In 2016, Microsoft launched Tay, an AI chatbot designed to learn from and converse with users on Twitter. It didn’t take long for internet users to exploit Tay’s learning mechanism, coaxing it into tweeting racist, sexist, and generally offensive content. Within 24 hours, Microsoft had to take Tay offline. The incident underscored the importance of building adequate safeguards into any AI that interacts with the public, and served as a stark reminder that an AI’s learning can go badly off track when its inputs are unfiltered.
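The failure mode is easy to reproduce in miniature. The sketch below is not Tay’s actual architecture; it is a hypothetical bot that “learns” by memorizing whatever users say and echoing it back later. Without input filtering, anything abusive in the input eventually becomes output.

```python
import random

# A hypothetical echo-learning bot, not Tay's real design: it memorizes
# user messages verbatim and replays them as replies.
class EchoLearnerBot:
    def __init__(self, blocklist=None):
        self.memory = ["hello!"]              # seed phrase so replies always work
        self.blocklist = blocklist or set()   # lowercase words to refuse

    def learn(self, message):
        # Naive safeguard: refuse to memorize messages containing blocked words.
        if not any(word in message.lower() for word in self.blocklist):
            self.memory.append(message)

    def reply(self):
        # Every memorized message, however toxic, is a candidate reply.
        return random.choice(self.memory)

bot = EchoLearnerBot(blocklist={"slur"})
bot.learn("AI is fascinating")      # memorized
bot.learn("some SLUR-laden abuse")  # filtered out by the blocklist
print(bot.reply())
```

Even this toy blocklist is trivially easy to evade with misspellings or context, which is part of why safeguarding a system that learns from the open internet is genuinely hard.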

2. Amazon’s Recruitment Tool

In another example, Amazon’s AI-based recruitment tool was found to be discriminating against female candidates. Trained on resumes submitted over a 10-year period, the AI learned to favor male candidates for technical roles because the tech industry had historically been male-dominated. The system even penalized resumes that included the word “women’s,” as in “women’s chess club captain.” Amazon scrapped the tool in 2018, highlighting the critical issue of gender bias in AI and the risk of training models on biased historical data.
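To see how this kind of bias arises, here is a minimal sketch using entirely invented toy data, not anything from Amazon’s actual system. A linear classifier is fitted to historical hire/no-hire labels that systematically rejected candidates whose resumes contained a gender-coded term; the model duly learns a strongly negative weight for that term, even though it says nothing about ability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Each row is a toy "resume": years of experience, a relevant technical
# skill flag, and a gender-coded term flag (unrelated to ability).
X = np.column_stack([
    rng.uniform(0, 10, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])

# Biased historical labels: past hiring rewarded skill but systematically
# rejected any candidate whose resume contained the gender-coded term.
qualified = 0.3 * X[:, 0] + 2.0 * X[:, 1] > 2.5
hired = (qualified & (X[:, 2] == 0)).astype(int)

model = LogisticRegression().fit(X, hired)
print(model.coef_[0])  # the third weight comes out strongly negative
```

The model is never told anyone’s gender; it simply reproduces the pattern baked into its labels, which is exactly the mechanism behind the reported penalty on the word “women’s.”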

3. The Apple Card Scandal

Similarly, Apple faced backlash in 2019 when the credit-decision algorithm behind the Apple Card, issued with Goldman Sachs, was accused of offering far higher credit limits to men than to women in similar financial circumstances. Even with shared assets and income, some husbands reported receiving credit limits up to 20 times higher than their wives’. The Apple Card scandal showed that without careful oversight, AI systems can unintentionally perpetuate and amplify existing societal biases.

4. Autonomous Vehicles

Perhaps nowhere are the consequences of AI missteps more severe than in autonomous vehicles. A tragic instance occurred in 2018, when a self-driving Uber test vehicle in Tempe, Arizona struck and killed a pedestrian crossing the street. Investigators found that the vehicle’s software detected the pedestrian but repeatedly misclassified her and failed to brake in time, while the human safety driver behind the wheel did not intervene. The incident raised critical questions about the readiness of AI systems to take on tasks involving human safety and the ethical considerations surrounding their deployment.

5. The Flash Crash

AI also has significant implications for financial markets, and when it goes wrong, the results can be dramatic. One such incident was the “Flash Crash” of May 6, 2010, when U.S. stock markets plunged, the Dow Jones Industrial Average falling nearly 1,000 points before largely rebounding within minutes. High-frequency trading (HFT) algorithms were partly blamed for exacerbating the volatility. Although not solely responsible, these automated systems reacted to the initial decline by selling more aggressively, creating a feedback loop that accelerated the market’s drop.
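That feedback loop can be sketched in a few lines. The simulation below is purely illustrative, with made-up numbers rather than any model of the actual 2010 event: each round, hypothetical trend-following sellers react to the previous decline by selling harder, so a small initial drop compounds.

```python
def simulate_feedback(initial_drop=1.0, sensitivity=1.4, rounds=8):
    """Toy feedback loop: each round's percentage drop triggers selling
    that makes the next round's drop `sensitivity` times larger."""
    price, drop = 100.0, initial_drop  # drop is the % fall this round
    for i in range(rounds):
        price *= 1 - drop / 100
        print(f"round {i}: price {price:6.2f} (fell {drop:5.2f}%)")
        drop *= sensitivity            # selling begets more selling

simulate_feedback()
```

With `sensitivity` below 1 the loop damps out instead of compounding, which is the intuition behind the trading pauses regulators introduced after the crash to break exactly this kind of spiral.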

6. AI in Healthcare

AI has also had its struggles in the healthcare sector. IBM’s Watson for Oncology, an AI system designed to recommend cancer treatments, made “unsafe and incorrect” treatment recommendations, according to internal documents reported by Stat News in 2018. The system had been trained on a small number of hypothetical cancer cases rather than real patient data, leading to errors when it was presented with real-world patients. The episode is a reminder that however great AI’s potential in healthcare, systems must be meticulously validated with patient safety as the first priority.
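The underlying problem, a model tuned to a narrow slice of synthetic cases and then applied to a much broader real-world population, can be shown schematically. Nothing below reflects Watson’s actual pipeline; the “risk” function and all the data are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def true_risk(x):
    # Invented "ground truth" relating a patient measurement to risk.
    return np.sin(x) + 0.1 * x

# "Hypothetical cases": a handful of points from a narrow input range.
x_train = rng.uniform(0, 2, 8)
coeffs = np.polyfit(x_train, true_risk(x_train), 5)  # fits that slice well

# "Real-world patients": inputs far outside the training range.
for x in np.linspace(0, 8, 5):
    pred = np.polyval(coeffs, x)
    print(f"x={x:4.1f}  predicted={pred:10.2f}  actual={true_risk(x):5.2f}")
```

On the training slice the fit looks excellent; outside it, the predictions diverge wildly, which is the shape of failure you invite when validation never leaves the synthetic data.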

These are only a few of the many instances where AI has gone wrong, with consequences ranging from reputational damage to loss of life. Despite these setbacks, it’s important to remember that failures are part of the learning process. Each misstep provides an opportunity to refine our methods, re-evaluate our ethical considerations, and improve the robustness of AI systems.

While AI continues to drive unprecedented advancements, these incidents underline the need for rigorous testing, robust ethical frameworks, and continuous monitoring. As we move forward in the AI era, the goal should not simply be to avoid mistakes but to learn from them, building safer, fairer, and more reliable AI systems.