
OpenAI explains why Artificial Intelligence invents data in its responses

OpenAI explained why ChatGPT and other AI models "hallucinate" false data and how it is working to reduce those errors

It is already common for a model like ChatGPT to invent information or give incorrect answers. Every day, users of different platforms share examples of these "hallucinations" on social media.

The phenomenon generates distrust and poses concrete risks. According to OpenAI on its official website, the problem arises because training systems reward guessing instead of recognizing uncertainty.

The phenomenon generates distrust and poses concrete risks | La Derecha Diario

Why AI hallucinates with invented data

Language models work as if they were taking a multiple-choice test: leaving a question blank scores zero points, so they choose to "take a risk" and give an answer even when they are not certain.

This explains why they rarely say "I don't know". The training strategy pushes them to guess rather than admit ignorance, which increases the chances of incorrect answers.
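To see that incentive in numbers, here is a minimal sketch (not OpenAI's code; the probabilities are assumed for illustration) comparing the expected score of guessing versus abstaining when a test only counts correct answers:

```python
# Illustrative sketch of the incentive described above (not OpenAI's code).
# Assumption: the model faces a question with four equally plausible options,
# so a blind guess is correct 25% of the time.

P_GUESS_CORRECT = 0.25  # hypothetical probability of a lucky guess

# Accuracy-only grading: 1 point for a correct answer, 0 otherwise.
expected_if_guessing = P_GUESS_CORRECT * 1 + (1 - P_GUESS_CORRECT) * 0   # 0.25
expected_if_abstaining = 0.0  # saying "I don't know" never earns points

print(f"Expected score when guessing:   {expected_if_guessing}")
print(f"Expected score when abstaining: {expected_if_abstaining}")
# Guessing always has the higher expected score, so training that optimizes
# this kind of metric pushes the model to answer even when it is unsure.
```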

Examples of frequent mistakes

OpenAI cites cases such as that of researcher Adam Tauman Kalai. When the AI was asked about the title of his doctoral thesis, it returned three different answers, all of them wrong.

Language models work as if they were taking a multiple-choice test | La Derecha Diario

The same thing happened when it was asked about his birthday: it offered three different dates, none of them correct. According to the company, this shows how the model guesses at random in the hope of getting it right.

What OpenAI concludes about hallucinations

The company responded to several myths about this problem with the following clarifications:

  • Accuracy will never be 100%: there are questions that are impossible to answer exactly.
  • They are not inevitable: models can refrain from answering if they are not certain.
  • They do not depend only on large models: even small ones can learn to recognize their limits.
  • They are not a glitch: they arise from the statistical mechanisms used in training.

What does OpenAI conclude about hallucinations? | La Derecha Diario

The challenge of measuring and avoiding mistakes

According to OpenAI, tests designed to evaluate hallucinations already exist. However, most of these benchmarks prioritize raw accuracy over the "humility" of admitting ignorance, which continues to reward guessing.
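One way to make that contrast concrete is the sketch below, an assumption for illustration rather than any actual benchmark: it scores the same hypothetical set of answers under two rules, plain accuracy versus a rule that deducts points for wrong answers while treating "I don't know" as neutral.

```python
# Illustrative only: how a grading rule that penalizes wrong answers changes
# which behaviour scores best. The answers and penalty value are assumptions.

responses = [
    ("correct", True),        # model answered and was right
    ("wrong", False),         # model answered and was wrong
    ("i_dont_know", None),    # model abstained
    ("wrong", False),
]

def score(responses, wrong_penalty: float) -> float:
    total = 0.0
    for _, outcome in responses:
        if outcome is None:        # abstention: no reward, no penalty
            continue
        total += 1.0 if outcome else -wrong_penalty
    return total

print(score(responses, wrong_penalty=0.0))  # accuracy-only grading: 1.0 (errors cost nothing)
print(score(responses, wrong_penalty=1.0))  # penalized grading:    -1.0 (wrong guesses now hurt)
# Under the second rule, admitting uncertainty instead of guessing would have
# scored higher, which is the "humility" OpenAI refers to.
```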

