Fantastic artificial average intelligence!

Column


Do we really want to be surrounded by all-knowing miracle brains, senior lecturer Lars Oestreicher wonders. AI illustration: Lars Oestreicher

Perhaps we should be grateful that our AI systems haven’t quite reached any higher level of intelligence yet, beyond some sort of average intelligence that can actually get things wrong at times, writes Lars Oestreicher, senior lecturer at the Division of Human Machine Interaction at the Department of Information Technology.

Lars Oestreicher, senior lecturer at the Department of Information Technology. Photo: Private

Artificial intelligence: we all know what that is, right? Or do we really? Actually, the question is whether there is even something that can be called AI (without a modifying adjective of some kind). Most of what is now called AI is what used to be termed “weak” AI, i.e. advanced methods inspired by HI, or human intelligence. What most people today perhaps understand as artificial intelligence is what was formerly called “strong” AI: systems that exhibit actual intelligent behaviour. Fortunately, we don’t actually have the latter today. What we do have are many systems that can be classified as weak AI, such as deep learning, which is used, for example, to interpret X-ray and microscope images in healthcare. Then there are generative AI systems that can create images, videos and even music from text descriptions, and of course we have large language models (LLMs), which are used in chatbots.

Despite the name, these “weak” AI applications are very impressive within their respective domains. But they are often not as smart as we would generally like to believe. Of course, there have been many teething problems in this development, such as AI-generated images of people with six, seven or even eight (!) fingers on their hands. Although it is almost inhuman not to react to this, there are in fact artists who have done much stranger things in their paintings, such as Salvador Dalí. So we might well call it intelligent, or even creative.

But personally, I don’t know if I would go that far. And that also applies to our new replacements for Google, namely ChatGPT, Copilot, Claude, DeepSeek and all other similar systems. Sure, they can give good answers to many questions, but there are still simple things that they fail miserably at. I asked an earlier version of ChatGPT about my most interesting published research articles. And I really did get the most interesting articles! Or, they would have been, had it really been me who had written them. These articles did not actually exist, even though I got complete and properly formatted references back from the AI, with journal name, volume, number and even page numbers (!). This has apparently been corrected in later versions of ChatGPT, but the basic problem remains the same: what do these systems actually do when they “think”?

The AI systems we have today are very much based on detecting patterns, for example in images, which they can then generalise. Telling exactly what distinguishes a cat from a dog is quite difficult if you think about it. Yet an AI-based system can do it very well after training on a large number of images of cats and dogs. But is this intelligent? The fact that ChatGPT was unable to give me real publications is an example of the kind of problem that can arise. A reference must have a certain format, a pattern, and an LLM can learn that. But of course, I have not published enough articles for them to form a sufficiently strong semantic pattern of their own. So ChatGPT filled in details that fitted the stronger general pattern of academic journal references, but which otherwise had no connection whatsoever to reality.
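To make the pattern idea concrete, here is a minimal sketch in Python of the principle at work: a toy nearest-neighbour classifier that separates cats from dogs purely by closeness to previously seen patterns. The two features (ear pointiness, snout length) and all the numbers are invented for illustration; a real deep-learning system extracts millions of such features automatically from raw pixels.

import math

# Toy training set: each animal is reduced to two invented features,
# (ear_pointiness, snout_length), both on a 0-1 scale. The values are
# purely illustrative, not measured from real animals.
TRAINING_DATA = [
    ((0.90, 0.20), "cat"),
    ((0.80, 0.30), "cat"),
    ((0.85, 0.25), "cat"),
    ((0.30, 0.80), "dog"),
    ((0.20, 0.90), "dog"),
    ((0.35, 0.70), "dog"),
]

def classify(features):
    """Label a new animal by the nearest pattern seen during training."""
    _, label = min(
        TRAINING_DATA,
        key=lambda item: math.dist(item[0], features),  # Euclidean distance
    )
    return label

# A pointy-eared, short-snouted animal lands in the "cat" cluster.
print(classify((0.75, 0.30)))  # -> cat

Note that the classifier has no concept of what a cat is; it only measures closeness to patterns it has already seen, and it will just as confidently label an input that resembles neither animal. That is, in miniature, the same behaviour as ChatGPT confidently completing the pattern of a journal reference.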

Since these systems are supposed to be “intelligent”, one must of course explain the problems using anthropomorphic metaphors. So they don’t make mistakes; instead the poor things hallucinate. Of course, they are not really hallucinating. But maybe, just maybe, it is still the case that our AI systems haven’t quite reached any higher level of intelligence yet, beyond some sort of average intelligence that can actually get things wrong at times? These systems are clever, but not super-intelligent, and they can make mistakes just like any human of average intelligence.

And, hand on heart, do we really want to be surrounded by all-knowing miracle brains? Perhaps an AAI, an artificial average intelligence, would be a better way to go, not least to preserve one’s own self-esteem, or even that of humanity as a whole.

Lars Oestreicher, Senior Lecturer at the Department of Information Technology.
