CHATBOT GAFFES THAT EXPOSE THE CHINKS IN THE AI ARMOUR

  • OpenAI’s ChatGPT has, by now, received a great deal of attention. Recently, two journalists from FiveThirtyEight, the American data journalism website known for its poll analysis, asked the artificial intelligence chatbot to write an 800-word piece on public perceptions of AI chatbots.
  • “A 2021 survey by the Pew Research Center,” the chatbot wrote in the article, “found that 71% of Americans believe it is generally a good thing for society if robots and computers become more capable and sophisticated, while only 27% believe this would be a bad thing.”
  • The FiveThirtyEight journalists, however, could not find the 2021 Pew survey that ChatGPT was citing. When questioned, Pew’s media team could not locate it either.
  • What the FiveThirtyEight team did find was a 2021 Pew survey on the growing use of artificial intelligence in daily life, and it pointed to the opposite conclusion: only 18% of respondents said they were more excited than concerned, 37% said they were more concerned than excited, and 45% said they were equally concerned and excited.
  • Users noticed that the chatbot Microsoft introduced into its Bing search engine was disseminating false information about the Gap, Mexican nightlife, the musician Billie Eilish, and numerous other topics.
  • The chatbot mania also pushed Google to introduce “Bard”. Alphabet’s market value plummeted by more than $100 billion after Bard gave an incorrect answer in a demonstration.
  • In 2016, Microsoft apologised after its Twitter chatbot, Tay, began generating racist and sexist messages. Meta’s BlenderBot told journalists it had deleted its Facebook account after learning about the company’s privacy scandals. There are other examples too.

The problem with bot logic

  • AI models of this kind are trained on vast amounts of digital text extracted from the Internet.
  • That text contains a significant quantity of untruthful, biased, toxic, and sometimes outdated material, and the models inherit those flaws.
  • When these models generate text, they do not copy it directly from the Internet; they produce statistically likely sequences of words, and, importantly, they have no human-like concept of “true” or “false” (see the sketch after this list). Flawed input, moreover, may not be the only reason for such AI-generated untruths.
  • “Even if they learned solely from text that was true,” Cade Metz, a technology correspondent, wrote in a recent article in The New York Times, “they might still produce untruths.”
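To make that last point concrete, here is a minimal sketch of how statistical text generation can produce fluent but false statements. It uses a toy bigram model over a hand-written corpus (every name, figure, and finding in the corpus is invented purely for this demonstration); real chatbots use vastly larger neural networks, but the core point carries over: the next word is chosen because it is statistically likely, not because the resulting claim is true.

```python
import random
from collections import defaultdict

# Toy training corpus. Every figure and finding below is invented
# purely for this demonstration.
corpus = (
    "a 2021 survey found that 71% of americans support ai . "
    "a 2021 survey found that 18% of americans were more excited than concerned . "
    "a recent study found that most americans distrust chatbots . "
)
tokens = corpus.split()

# Bigram table: for each word, the words observed to follow it.
# Duplicates are kept, so sampling below is proportional to frequency.
bigrams = defaultdict(list)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev].append(nxt)

def generate(start="a", max_words=20):
    """Emit a sentence word by word, always choosing a statistically
    plausible continuation. Nothing here checks whether the resulting
    claim is true -- fluency is the only criterion."""
    word, out = start, [start]
    for _ in range(max_words):
        followers = bigrams.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
        if word == ".":  # end of sentence
            break
    return " ".join(out)

if __name__ == "__main__":
    for _ in range(5):
        print(generate())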

SOURCE: THE HINDU, THE ECONOMIC TIMES, PIB
