Ulf Danielsson: “Don't humanise technology”


“To gain that deeper insight to move forward, you sometimes need to go through calculations and reasoning yourself,” says Ulf Danielsson. Photo: Mikael Wallerstedt

The debate around AI often humanises technology. We see AI as superior to the human brain, which is either our salvation or threatens our downfall. But there are major differences between human and artificial intelligence, notes Ulf Danielsson. And perhaps the great danger is that we stop thinking for ourselves out of sheer convenience.

Ulf Danielsson is a Professor of Theoretical Physics and author of several popular science books. He has had a longstanding interest in issues concerning consciousness and knowledge, which he also wrote about in his book Världen själv (‘The World Itself’). So, when the debate on AI started last spring, he was well-prepared. In an opinion piece in DN, he warned against humanising the new technology.

“This is part of a larger movement that lacks the ability to distinguish between reality and the virtual. It confuses the mathematical model or computer simulations with reality. It also suggests that human thinking can be fully represented by a computer programme, or an algorithm.”

Overestimated machines

Danielsson is surprised that so many scientists and philosophers have been drawn into this way of thinking. Some see thinking machines taking over the world as the salvation of the human race. Others see it as a terrifying development. All of this is based on an overestimation of what machines can achieve, notes Danielsson.

“There is no doubt that this new technology, like all new technologies, will be used for both good and bad. But take ChatGPT: we don’t talk about it much anymore. Six months ago, the end of the world seemed imminent, according to some, but now it doesn’t seem so interesting.”

Clear shortcomings and problems

One reason, he believes, is that ChatGPT has clear shortcomings and the problems are persisting several months later. This suggests that it is not an exponential development, with systems getting smarter and smarter all the time and suddenly taking over.

“Initial fears were exaggerated. However, there are of course different ways to abuse the technology, such as through social media. You can pretend to be someone else and then it almost becomes a weapon. We can also imagine autonomous missiles in the military industry.”


It is important to have realistic expectations of what the AI systems can do and to have a well-defined problem, says Danielsson. Photo: Mikael Wallerstedt

At the same time, AI can be used for many good purposes, such as research into how to optimise energy systems and solve the climate crisis. But it is important to have realistic expectations of what the systems can do and to have a well-defined problem, continues Danielsson.

“It should be a problem within certain frameworks, with defined rules. When much is left open-ended and the rules are not really clear, things don’t go well.”

System at risk of overuse

His own research area of theoretical physics also uses a type of AI known as symbolic manipulation systems. These can solve equations in a very efficient way, and he would welcome any improvements – as long as they are reliable.

“At the same time, there is a risk that people get drawn into overusing the system. I see this sometimes among students, doctoral students and even postdocs. They are very good at using these systems, but may be using them too much.”

Some of the time they spend working with the computer programme could instead be spent gaining a deeper understanding themselves. That would make it easier to ask the right questions and get interesting answers.

“To gain that deeper insight to move forward, you sometimes need to go through calculations and reasoning yourself. The computer programme may get through it in a fraction of a second while you might have to sit for a long time, but it’s worth it.”

Lower rate of innovation

Early this year, Danielsson wrote an article about the apparent stagnation of science. There are studies showing that the rate of innovation in research has decreased over the course of a few decades. Research is increasingly about reproducing what others have done and less about new ideas.

“Whatever the reason, there is a risk that AI will reinforce that behaviour. It is not certain that pattern-recognition AI systems will make the really big discoveries,” adds Danielsson.

This is why he has got involved in the debate surrounding AI.

“This particular issue links in a fundamental way to what science is, how we see ourselves and what we want for society and the future. This is why it is so important that we describe the systems accurately and do not humanise them.”

Annica Hulth
