AICare
AICare examines the challenges that artificial intelligence poses to Swedish healthcare regulation, aiming to strengthen that regulation and to safeguard the right to health.
Details
- Period: 2021-01-01 – 2026-12-31
- Budget: 6,000,000 SEK
- Funders: Marcus and Amalia Wallenberg Foundation, Marianne and Marcus Wallenberg Foundation
AI, automated systems & the right to health
Health emergencies not only result in personal tragedies but also have profound economic implications, sometimes with severe social consequences. Artificial intelligence (AI) can help address these challenges by facilitating the discovery of new cures and treatments. AI can enhance the planning of and timely response to healthcare needs, optimising the overall quality of healthcare while reducing costs. AI also has the potential to free medical staff from administrative and routine tasks and to enable greater patient participation.
The integration of digital technology in healthcare and the initiation of AI pilot projects are already tangible realities in Sweden. But the implementation of AI in healthcare faces legal challenges. The absence of specific laws and the significant uncertainty about how existing rules apply to new technology can become obstacles to innovation. It is crucial to ensure that AI healthcare solutions are universally available, accessible without discrimination, acceptable to patients, and of sufficient quality to prevent privacy breaches, errors, malfunctions, or malicious acts.
Addressing these challenges, the AICare project focuses on the impact of AI on the right to health in the Swedish context. The project evaluates the key elements of the right to health (availability, accessibility, acceptability, and quality) and their intersection with other fundamental rights. Drawing on empirical insights from patients and healthcare professionals, AICare explores how AI challenges the existing legal and ethical norms governing healthcare in Sweden. The project aims to propose viable solutions, whether through revision or reinterpretation of current rules or the introduction of new measures. The overarching goal is to promote technological advancement while ensuring robust patient protection.
Collaborators
- Lund University, Sweden
- Uppsala University's Centre for Research Ethics & Bioethics, Sweden
- KTH Royal Institute of Technology, Sweden
- Linköping University, Sweden
People in the project
Michele Farisco
Researcher focusing on issues related to consciousness, artificial intelligence and neuroethics. Collaborates with neuroscientists, AI researchers and clinicians to develop indicators of consciousness for diagnosing patients and for recognizing consciousness in animals and machines.
Jennifer Viberg Johansson
Associate professor in medical ethics, with a research focus on methods for measuring people's preferences and balancing them against other ethical values, as well as on artificial intelligence and digital health information.