There is an urgent need for more sustainable health care systems. The technological revolution is changing the face of public health and health care: new medical devices, nanomedicine, eHealth and, of course, we are on the brink of an AI revolution in the health sector. AI technologies can be deployed for arguably every aspect of health care and public health: from AI software that detects breast cancer in screening mammograms and AI algorithms that predict outbreaks of infectious diseases, to AI-powered wearable devices for remote patient monitoring and fully autonomous robotic surgeons. AI holds the promise to save billions of lives.
Artificial Intelligence (AI) provides opportunities for greater efficiency, improved access to health care, and better quality and safety through the automation of routine and simple tasks, including decision support tools. However, the emergence of AI technology in the sphere of health poses numerous threats to individual fundamental rights and health providers’ professional autonomy. We work on these challenges in the context of the following projects:
The findings from this research line can have a broad impact on policy and practice. Key findings will be used to evaluate whether current EU laws and policies offer sufficient regulatory guarantees to protect the end-users of AI in health – patients. Other results will provide insights for the future development of new AI models and their clinical implementation.
This research line contributes to one of the UvA’s prestigious interfaculty Research Priority Areas: AI Health decision making, coordinated by the AmsterdamUMC together with the Faculties of Science (Informatics) and Humanities. It interlinks with the UvA’s Human AI programme.
Known hazards associated with AI, such as discrimination, diminished privacy and opaque decision-making, are exacerbated in the context of health. This is due to the vulnerability and dependency of patients and the potentially life-threatening effects of inaccurate or dysfunctional AI technology used in the health environment. A lack of adequate regulation of health AI may compromise patients’ rights.
This PhD project examines the ways in which health AI will be regulated at the EU level and explores legal challenges through a patients’ rights lens, using both legal and social-science methods. To this end, the thesis analyses three case studies: the use of AI in public health surveillance, AI-driven medical imaging for diagnostics, and the use of AI in clinical decision-making.
AI-based decision support tools affect health providers’ clinical risk assessments, which are rooted in their professional autonomy as medical experts, and can have important implications for the patient-provider relationship. On the one hand, medical professional autonomy is grounded in legal responsibility and potential medical liability. On the other hand, patient autonomy underlies informed consent and the protection of patients’ privacy and human dignity. Introducing AI health decision-making tools can critically affect the legal relationship between patient and medical professional, which is built on trust and the protection of autonomy.
This PhD project, nested in the UvA Research Priority Area on AI decision making, aims to establish a workflow and infrastructure for AI decision-making methods developed using real-world data, evaluated in actual clinical settings, and guided by legal and ethical principles at every stage. The project examines how the patient-professional (legal) relationship is affected by the introduction of AI health decision-making, and vice versa. It also aims to evaluate whether the proposed AI models meet ethical and legal requirements and to understand the impact of AI health decision-making on (perceived) privacy, data protection, trust and liability.
UvA Research Priority Area on AI decision making in health