Cognitive Fallibility in Human Intelligence (and in AI)

“It is not my aim to surprise or shock you—but the simplest way I can summarize is to say that there are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until—in a visible future—the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”

This quote is an accurate reflection of artificial intelligence (AI) these days. If you follow the latest trends, you know that AI has, or will soon approximate, human intelligence. AI has become so impressive that it has even been awarded its own determiner: “the” artificial intelligence.

No, it may even surpass human intelligence! Artificial general intelligence (AGI), also called human-level AI, is argued to match or surpass human capabilities across virtually all cognitive tasks. And that is something for us thinking animals to think hard about.

These days, it is easy to be impressed, concerned, or even frightened about the developments in AI. One development rapidly follows another. But it is also important, and increasingly so, to be critical thinkers. Given the tight relationship between psychology and AI, there is a special role here for psychologists. It is psychologists who often understand the history, the definition, and the issues that come with concepts that find their origins in psychology—concepts such as “intelligence.”

We might be concerned about AI and AGI, but, ironically, what it means to match or surpass human intelligence is far less clear. The confusion extends well beyond the name itself. Take, for instance, large language models (LLMs), the systems behind chatbots such as ChatGPT, Claude, Gemini, Perplexity, Llama, DeepSeek, and Mistral. When the chatbots debuted, the world was impressed by their reasoning skills, their language fluency, and their creativity in rhyming and composing limericks on the fly. They could do what very few humans could do, and they could do it much, much faster.

But criticism of the models soon started. Anecdotes emerged about how they sometimes made things up. They hallucinated and confabulated information. They generated information that was simply not true. They made mistakes!

We all know humans make mistakes all the time. We often do not think logically, as a common example from an Introduction to Psychology course shows. Take the sentence, “Linda is smart and deeply concerned with social justice.” Is Linda more likely to be a bank teller, or a bank teller active in the feminist movement? You do not need a degree in statistics to know that the second option is less likely than the first: every bank teller active in the feminist movement is also a bank teller, so there are necessarily more bank tellers in the world than feminist bank tellers. Yet many of us err, jumping to the conclusion that the richer description must be the more probable one.
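In probability terms, this is the conjunction rule, the step the example leaves implicit: a conjunction of two events can never be more probable than either event alone. In notation:

P(bank teller and active in the feminist movement) ≤ P(bank teller)

Judging the conjunction as more likely is what Tversky and Kahneman dubbed the conjunction fallacy.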

We even make mistakes when it comes to simply counting the number of times the letter “f” appears in a sentence. Take the following sentence: “The future of AI depends on the fusion of fields, many of which are full of unforeseen challenges.” Chances are you did not count them all correctly.

We fallible humans hasten to blame ChatGPT-like systems for making mistakes, for hallucinating, and for confabulating information. But the very fact that they make mistakes shows that the systems operate in a human-like fashion. These artificial minds make mistakes, just like the human versions. It would be unfair to blame an AI system for inaccuracies that so nicely demonstrate human cognitive fallibility; an AI system that is 100 percent accurate simply cannot be human-like.

Let me quickly return to the number of times the letter “f” appeared. Nine times, to be precise. You may have missed three.
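For readers who want to check the count mechanically, here is a minimal sketch in Python (the case-insensitive counting is my assumption; the original puzzle does not specify it, though the sentence happens to contain no capital F):

    # Count how many times the letter "f" appears in the sentence, ignoring case.
    sentence = ("The future of AI depends on the fusion of fields, "
                "many of which are full of unforeseen challenges.")
    print(sentence.lower().count("f"))  # prints 9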

If human intelligence is defined by linguistic and mathematical skills, we might have to agree that AI matches those skills: the average calculator and the average LLM outperform humans. One could object that their mechanisms are not human-like, but that objection is unfair. For one thing, even for psychologists, the mechanisms behind human linguistic and mathematical skills are not entirely clear; more important, the definition of intelligence generally does not include the mechanisms behind it. And if we argue that these AI systems excel only at linguistic and mathematical skills, well, that is what intelligence has always been defined as.

If human intelligence these days is not defined solely by linguistic and mathematical skills, but also includes, or can include, emotional intelligence, bodily-kinesthetic intelligence, and visual-spatial intelligence, what does intelligence entail? The question is particularly relevant given that the argument that intelligence is what intelligence tests measure is circular and does not hold.

The question of whether AI matches human intelligence, or even surpasses it, can be answered only if we have defined intelligence. The process of answering that question nicely demonstrates our cognitive fallibility.

Yes, I failed to supply a reference for the quote that opened this blog post. The words were uttered by psychologist, mathematician, economist, and Nobel Prize winner Herb Simon, in 1957, nearly seven decades ago.
