AI chatbots like OpenAI's ChatGPT, Microsoft Corp.'s (MSFT) Copilot and others can sometimes generate responses that are nonsensical, a phenomenon known as hallucination. While it happens fairly often, not everyone knows the reason behind it.

What Happened: Meta Platforms Inc.'s (META) chief AI scientist and "Godfather of AI" Yann LeCun explained why AI chatbots hallucinate during an episode of the Lex Fridman podcast.

Hallucination in AI refers to the phenomenon where large language models (LLMs) generate responses or outputs that are nonsensical, irrelevant, or disconnected from the input or context.

But why does it happen? Why do chatbots like ChatGPT and Copilot go off the rails?

"Because of the autoregressive prediction, every time an AI produces a token or a word, there is some level of probability for that word to take you out of the set of reasonable answers," explained LeCun, describing how chatbots derail a conversation.

As the conversation continues, the probability of chatbots returning a nonsensical answer increases exponentially, according to LeCun.

"It's like errors accumulate. So, the probability that an answer would be nonsensical increases exponentially with the number of tokens."
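LeCun's argument can be sketched with a back-of-the-envelope calculation: if each generated token carries some small, independent chance of stepping outside the set of reasonable answers, the probability that an entire response stays sensible shrinks exponentially with its length. The 1% per-token error rate below is an assumed value chosen for illustration, not a figure from the podcast, and the independence assumption is a simplification.

```python
def p_sensible(per_token_error: float, num_tokens: int) -> float:
    """Probability that every one of num_tokens tokens stays 'reasonable',
    assuming each token independently errs with probability per_token_error
    (a simplifying assumption for illustration)."""
    return (1.0 - per_token_error) ** num_tokens

# With a hypothetical 1% chance of derailing on each token,
# longer answers become exponentially less likely to stay coherent.
for n in (10, 100, 500, 1000):
    print(f"{n:5d} tokens -> P(stays sensible) = {p_sensible(0.01, n):.4f}")
```

Even at that modest assumed error rate, a 100-token answer stays fully sensible only about a third of the time under this toy model, which captures the "errors accumulate" intuition.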

LeCun explained that large language models cannot account for every question a user might ask, and when a user poses a question outside the scope of what the model was trained on, the model starts hallucinating and spits out nonsense.

Why It Matters: AI chatbots tend to hallucinate and can sometimes even ask users to "worship" them, as was recently the case with Microsoft's Copilot.

Earlier, when Microsoft adopted OpenAI's ChatGPT technology and launched the Bing AI chatbot, users quickly discovered that it had multiple alternate personas, including "Sydney," "Fury" and "Venom."

Now we know why chatbots sometimes behave so erratically.