Complex decisions require human skills

If we let machines make decisions that are related to what it means to be a human being, we risk undermining people’s rights. This is the view of Jenny Eriksson Lundström, researcher at the Department of Informatics and Media at Uppsala University. Photo: Getty Images.
Jenny Eriksson Lundström has studied the risks and opportunities for the individual now that government agencies are increasing their use of AI-based automated decision-making. “A machine can successfully make very simple decisions, but more complex decisions demand human connection. A machine lacks the ability to relate to what it means to be a human being.”

Jenny Eriksson Lundström, researcher at the Department of Informatics and Media. Photo: Mikael Wallerstedt, Uppsala University.
Today, AI technologies are used in the public sector to handle administrative cases and to support the various steps leading up to a decision. This adds transparency, as it makes the pathway to the decision visible.
“With these technologies, it’s clear what is correct or incorrect and what factors must be weighed in. We can make these kinds of simple cases more efficient with the help of rules-based AI,” says researcher Jenny Eriksson Lundström of the Department of Informatics and Media at Uppsala University.
Some decisions require human judgement
There is no such thing as fully automated AI-based decision-making in the exercise of public authority in Sweden. The risk assessments and profiling used by the Swedish Public Employment Service are the closest we get to this.
“Regardless of the technology, we cannot place the responsibility on a machine to make sensitive decisions whose consequences are difficult to foresee and where human judgement is required. An AI system can follow rules, compile a large amount of information and report results. But a machine has no experience of being human and therefore lacks the ability to recognise what it means to be a human being,” says Jenny.
An example from the USA shows that when algorithms were allowed to decide which prisoners should be released early on parole, African-American prisoners were systematically discriminated against, because the AI based the decision on socio-economic factors that are linked to race and class, such as which neighbourhoods the prisoners came from.
“The algorithms favoured individuals who should not have been released early. AI is good at compiling the information that exists in a system – and seeing patterns. But assessing the consequences of a decision in relation to our grounds for discrimination, for example? It shouldn’t be responsible for that,” says Jenny.
A matter of law and democracy
“The officials I interviewed made it very clear that, for this reason, complex decisions cannot be made by an AI machine. And we ought to listen to them. These officials, who exercise public authority, feel responsible for their fellow human beings and want to weigh in all the factors that the law says should be taken into account in order for the decision to be the correct one. If there is scope in the law to use ethical reasoning in their assessments, they want to utilise this,” she explains.
The interviewees saw many benefits from using AI in their work – as support.
“In the interviews I conducted, somewhat jokingly they referred to AI systems responsible for fully automated decision-making as ‘black boxes’, because you can’t follow the different steps that the system takes and so you can’t really know what produced the result,” says Jenny.
It’s important that we understand how the systems we use actually work.
“We attribute intelligence to the machine when it’s actually us humans who determine what we allow a machine to be,” she says.
In her research, Jenny Eriksson Lundström has identified four important factors that should be monitored in particular in AI-based decision-making. A government agency decision must be:
- Materially correct. All the relevant facts must be there, weighed in and verifiable.
- Ethical. A decision should be ethically correct.
- Explainable. It must be possible to understand and explain every decision, and to demonstrate the basis on which the decision was made in accordance with the principle of public access to official records.
- Secure. The data processed for a decision affecting a resident of Sweden needs to be processed securely and comply with privacy law.
“It’s also a matter of democracy. If we let machines make decisions that are related to what it means to be a human being, we risk undermining people’s rights,” according to Jenny Eriksson Lundström.
“The important thing is that we understand that residents are not one homogeneous group. Our circumstances and experiences are very different and therefore human contact is essential for such decisions.”
Gunilla Styr
Facts
Jenny Eriksson Lundström, Department of Informatics and Media, received research funding in 2024 within the AI4Research project for her research project Automating the Exercise of Public Authority. The project has an ethical focus and in it Jenny collaborates with Charles M. Ess, Professor Emeritus of Media Studies at the University of Oslo.
The project is a spin-off from BioMethical Decisions, a previous sub-project within the BioMe project led by Amanda Lagerkvist, professor at the Department of Informatics and Media at Uppsala University, and funded by WASP-HS.