As tools and applications powered by Artificial Intelligence (AI) become increasingly integrated into our daily lives, it’s important to remember that models can sometimes generate incorrect information.
This phenomenon, known as ‘hallucination’, is described by IBM as occurring when a large language model (LLM, the type of AI model behind generative chatbots) or a computer vision tool perceives patterns or objects that don’t exist or are imperceptible to humans, producing outputs that are inaccurate or nonsensical.
The hallucination rate is the frequency with which an LLM generates information that is false or unsupported by its source material. The figures come from Vectara and are current as of 11 December 2024; the rates were calculated by having each LLM summarise a thousand short documents and measuring how often the summaries introduced content not supported by those documents.
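To make the metric concrete, the sketch below shows the arithmetic only, not Vectara’s actual evaluation pipeline: assuming one true/false verdict per test summary (true meaning the summary contains content unsupported by its source document), the hallucination rate is simply the fraction of flagged summaries. The function name and the verdict counts are illustrative, not taken from the leaderboard.

```python
def hallucination_rate(verdicts: list[bool]) -> float:
    """Fraction of summaries judged to contain unsupported content."""
    return sum(verdicts) / len(verdicts)

# Illustrative only: a model flagged on 15 of its 1,000 test summaries
verdicts = [True] * 15 + [False] * 985
print(f"{hallucination_rate(verdicts):.1%}")  # prints 1.5%
```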
Which AI Models Have the Lowest Hallucination Rates?
We present the top 15 AI models with the lowest hallucination rates, along with their respective companies and countries of origin.
