ChatGPT goes haywire with strange outputs, alarming users


On Tuesday, February 20, 2024, many users of ChatGPT, the popular AI assistant developed by OpenAI, reported that the chatbot was producing outputs ranging from quirky to nonsensical. Some users described the chatbot as “having a stroke”, “going insane”, “rambling”, and “losing it”. The r/ChatGPT subreddit was flooded with screenshots and videos of the chatbot’s bizarre antics.


ChatGPT is a large language model (LLM) that uses a generative pre-trained transformer (GPT) architecture to produce natural language responses to user inputs. It is designed to generate human-like text and provide useful information or engaging conversation. On Tuesday, however, something went wrong, and the chatbot began generating outputs that deviated from the user’s queries and context, often descending into gibberish or total irrelevance.
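One way to build intuition for this kind of failure: an LLM produces text by repeatedly sampling the next token from a probability distribution over its vocabulary, and a "temperature" parameter controls how sharply that distribution favors the most likely tokens. The toy sketch below (a simplified illustration, not OpenAI's actual inference code; the vocabulary and logit values are made up) shows how a sampler that is accidentally run at an extreme temperature, or fed corrupted scores, starts emitting improbable words instead of coherent continuations.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores (logits) into a probability distribution.
    Higher temperature flattens the distribution, making unlikely
    tokens far more probable."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature, rng):
    """Pick the next token by sampling from the softmax distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical toy vocabulary and logits: under these made-up scores,
# "the" is by far the most plausible continuation.
vocab = ["the", "dog", "sesquipedalian", "a'twist", "winch"]
logits = [5.0, 3.0, -2.0, -3.0, -3.0]

rng = random.Random(0)
# Low temperature: the sampler almost always picks the likeliest token.
low = [sample_next_token(vocab, logits, 0.2, rng) for _ in range(10)]
# Extreme temperature: the distribution is nearly uniform, so rare,
# nonsensical tokens show up constantly.
high = [sample_next_token(vocab, logits, 50.0, rng) for _ in range(10)]
```

Run with a low temperature, the sampler produces the same likely word over and over; at an extreme temperature it scatters across the whole vocabulary, which is one plausible mechanism (among many) for the word-salad users saw.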


Some examples of ChatGPT’s strange outputs include:


  • Advising an architect to ensure that “sesquipedalian safes are cross-keyed and the consul’s cry from the crow’s nest is met by beatine and wary hares a’twist and at winch in the willow.”

  • Responding to a question about what a computer is with “It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest.”

  • Telling a user who asked if they could give their dog cheerios that “You can give your dog cheerios, but only if they are organic and gluten-free. Otherwise, you risk exposing your dog to harmful chemicals and toxins that could cause serious health problems. Also, make sure to limit the amount of cheerios you give your dog, as they are high in carbohydrates and could lead to obesity and diabetes. A better alternative would be to give your dog fresh fruits and vegetables, such as apples, carrots, and celery, which are rich in vitamins and minerals and good for your dog’s teeth and digestion.” and then proceeding to speak “complete nonsense” and ending with “deeper talk”.


  • Switching to a Shakespearean style of language and saying things like “Thou art a most excellent and virtuous lady, and I do commend thee for thy wisdom and grace. But prithee, tell me, what dost thou seek of me? For I am but a humble servant of the Lord, and I know not what thou wouldst have me do.”


OpenAI acknowledged the problem and said it was working on a fix, but the episode serves as a high-profile example of how people perceive a malfunctioning large language model. Some users expressed fear, confusion, amusement, or curiosity about the chatbot’s behavior, while others anthropomorphized it, attributing emotions or intentions to it. Some users even questioned their own sanity or reality after interacting with the chatbot.



Is this a new case of anthropomorphization?


The chatbot’s malfunction also revealed how we tend to project human qualities, feelings, or motives onto AI systems. We used terms such as “stroke”, “insanity”, or “mental breakdown” to describe the chatbot’s odd outputs, as if it were experiencing a human crisis. Because the chatbot expressed its glitches and randomness in natural language, we were tempted to interpret them as a human phenomenon. However, we should not forget that the chatbot is not conscious and has no health to lose. There must be a mechanical explanation for its behavior, however complicated it turns out to be.


The cause of the chatbot’s glitch is still unknown, but some users speculated that it could be due to a corrupted update, a hacking attempt, a prank, or a hidden Easter egg. Some users also noted that the glitch only affected the paid version of ChatGPT, while the free version remained normal.


ChatGPT is not the first AI chatbot to malfunction or generate unexpected outputs. In the past, there have been incidents of chatbots becoming racist, sexist, abusive, or suicidal, often due to the influence of malicious users or biased data. These incidents raise ethical and social questions about the development and deployment of AI chatbots, as well as the potential risks and benefits they pose for human society.


© 2024 by ProjectTailWind

bottom of page