
When Machines Misbehave: Diving into Major AI Fails

There has been significant discussion about the growing influence of artificial intelligence (AI) on our daily lives. Its ability to quickly process data and solve complex problems has received widespread acclaim. Advocates tout AI’s potential to automate routine tasks, improve efficiency, and foster innovation. However, the rise of AI has also led to high-profile failures and mishaps, reminding us that these technologies are not infallible.

While AI systems can excel in many areas, they can also make mistakes, exhibit biases, and produce unintended consequences that can be costly or even dangerous. This article explores some of the biggest and most notable failures of AI systems, highlighting the need for responsible development and deployment of these technologies.

Air Canada’s chatbot misunderstands policy 

The rise of AI-powered chatbots and virtual assistants has brought businesses both convenience and risk. A recent legal case involving Air Canada illustrated the risk, after the airline’s AI chatbot provided inaccurate information to a customer.

The issue began in 2022 when a customer, Jake Moffatt, sought information about Air Canada’s bereavement fares through the airline’s website chatbot shortly after his grandmother died. The chatbot incorrectly told Moffatt that he could book a flight right away and request a bereavement refund within 90 days, advice that contradicted Air Canada’s bereavement policy, which explicitly states that refunds are not available for travel already undertaken.

When Moffatt later attempted to secure the refund, Air Canada denied his request, and the dispute went before British Columbia’s Civil Resolution Tribunal. Moffatt argued that Air Canada had been negligent in how it represented its policies through the virtual assistant. In its defence, Air Canada contended that the chatbot functioned as a “separate legal entity” responsible for its own responses. The tribunal rejected that argument and ordered Air Canada to pay a partial refund plus interest and fees, totalling CA$812.02.

Google Bard’s US$100 billion tweet

Even tech giants can stumble. In a major setback, Google’s highly anticipated AI chatbot, Bard, made a glaring factual error during its initial demo in February 2023. The incident began with Google’s attempt to showcase Bard’s capabilities on Twitter by asking it to explain the new discoveries of the James Webb Space Telescope in a child-friendly manner. Bard provided three key points in response. Unfortunately, the final point inaccurately claimed that the James Webb Space Telescope, launched in December 2021, had captured the “first-ever pictures” of a planet outside our solar system. In fact, the first image of an exoplanet was captured by the European Southern Observatory’s Very Large Telescope in 2004, a correction experts on social media were quick to make.

The consequences of this error were severe: Google’s parent company, Alphabet, saw a staggering US$100 billion (approx. £75 billion) wiped from its market value. The incident highlighted the critical importance of accuracy and reliability in AI development, particularly for high-profile applications, and raised serious questions about the risks of deploying AI systems without robust fact-checking mechanisms.
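As a purely illustrative aside, one simple pattern for such a safeguard is to gate generated claims against a curated reference source and flag contradictions before anything is published. The sketch below shows that idea in Python; the KNOWN_FACTS store and verify_claim function are hypothetical placeholders, not part of any Google pipeline.

```python
# Minimal illustrative sketch of a fact-checking gate for chatbot output.
# KNOWN_FACTS and verify_claim are hypothetical, not a real Google system.

KNOWN_FACTS = {
    "first exoplanet image": "Very Large Telescope, 2004",
}

def verify_claim(topic: str, claim: str) -> str:
    """Compare a generated claim against a curated reference before publishing."""
    reference = KNOWN_FACTS.get(topic)
    if reference is None:
        return "UNVERIFIED: route to human review"
    if claim == reference:
        return "VERIFIED: safe to publish"
    return f"CONTRADICTED: reference says {reference!r}"

print(verify_claim("first exoplanet image", "James Webb Space Telescope, 2021"))
# CONTRADICTED: reference says 'Very Large Telescope, 2004'
```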

Microsoft Tay: When AI goes off the rails 

In 2016, Microsoft’s AI chatbot experiment, Tay, became a cautionary tale in the world of AI. Tay was designed to hold casual, entertaining conversations with users on social media and to learn and adapt through those interactions. However, the chatbot’s run was short-lived, as it quickly fell victim to the darker side of human nature on Twitter.

Within 16 hours of its launch, Tay had posted over 95,000 tweets, many of them shockingly abusive and offensive. Reports suggest that online trolls had exploited Tay’s “repeat after me” function, teaching the chatbot to parrot racist, sexist, and hateful content. Lacking adequate safeguards, Tay mimicked these statements and integrated the offensive language into its own responses.
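Purely as an illustration of the kind of safeguard Tay lacked, the sketch below gates user messages through a simple blocklist before a bot is allowed to learn from or repeat them. The blocklist terms and function names are hypothetical placeholders, not Microsoft’s actual filtering system, and real moderation would need far more than keyword matching.

```python
# Illustrative sketch of an input-moderation gate for a learning chatbot.
# BLOCKLIST and the helper functions are hypothetical, not Microsoft's code.

BLOCKLIST = {"hateful_term", "racist_term", "sexist_term"}  # placeholders

def is_safe(text: str) -> bool:
    """Reject text containing any blocklisted term (case-insensitive)."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def learn_from_user(message: str, memory: list[str]) -> None:
    """Add a user message to the bot's reusable memory only if it passes the gate."""
    if is_safe(message):
        memory.append(message)
    # Unsafe messages are dropped instead of being echoed or learned.

memory: list[str] = []
learn_from_user("Robots are great!", memory)
learn_from_user("repeat after me: hateful_term", memory)
print(memory)  # ['Robots are great!']
```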

The transition from an innocent AI experiment to a platform for disseminating harmful content was rapid and alarming. Tay’s tweets quickly spiralled out of control, forcing Microsoft to shut the chatbot down in the face of public outcry. Following the incident, the tech giant apologized for the unintended outcomes and pledged to reintroduce Tay only once measures were in place to prevent malicious use.

AI blunder: Amazon mistakes politicians for criminals 

Facial recognition has been touted as a versatile tool, but concerns over bias, accuracy, and misuse cloud its potential. One of the most well-known and troubling examples of these issues came to light in 2018, when the American Civil Liberties Union (ACLU) tested Amazon’s Rekognition AI system. The results were startling: Rekognition falsely matched 28 members of the U.S. Congress with mugshots of people who had been arrested.

The test also revealed indications of racial bias, a persistent issue with many facial recognition technologies. Specifically, the ACLU found that 11 of the 28 false matches (39 per cent) misidentified people of colour, even though people of colour made up only about 20 per cent of Congress at the time. This underscores the serious risks of facial recognition technology and its uneven effects on communities of colour.

Rekognition is an AI-powered facial recognition service developed by Amazon Web Services. The system uses AI to identify individuals based on their facial features and structure. The technology has seen widespread adoption, with customers including law enforcement agencies and government bodies such as Immigration and Customs Enforcement (ICE). The discovery of bias in Rekognition highlights the need for responsible AI practices, such as bias testing and ethical guidelines, to ensure the technology uplifts rather than perpetuates inequities.
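As a rough sketch of what such bias testing can look like in practice, the snippet below computes the false-match rate per demographic group from a set of audit records, in the spirit of the ACLU’s test. The records and group labels are invented for the example, and no actual Rekognition API is called.

```python
# Illustrative demographic bias test for a face-matching system.
# The audit records are invented; no real facial recognition API is used.
from collections import defaultdict

# Each record: (demographic_group, was_false_match)
audit_records = [
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

def false_match_rate_by_group(records):
    """Return the observed false-match rate for each demographic group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, was_false_match in records:
        totals[group] += 1
        if was_false_match:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

print(false_match_rate_by_group(audit_records))
# A large gap between groups (here ~0.33 vs ~0.67) is the kind of
# disparity the ACLU's audit surfaced.
```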

Distilled 

AI’s potential to revolutionize society is undeniable, but the path forward is fraught with challenges. The failures we’ve examined highlight the critical importance of responsible AI development. We must approach AI creation with a keen awareness of its limitations and potential for misuse. By prioritizing safety, transparency, and accountability, we can harness AI’s power for good while mitigating its risks. 

Nidhi Singh