
AI-based decision support tools affect health providers’ clinical risk assessments, which are rooted in their professional autonomy as medical experts and can have important implications for the patient-provider relationship. On the one hand, medical professional autonomy is grounded in legal responsibility and potential medical liability. On the other hand, patient autonomy underlies informed consent and the protection of patients’ privacy and human dignity. Introducing AI health decision-making tools can therefore critically affect the legal relationship between patient and medical professional, which is built on trust and the protection of autonomy.

This postdoctoral project, nested in the UvA Research Priority Area on AI decision making, aims to establish a workflow and infrastructure for AI decision-making methods that are developed using real-world data, evaluated in actual clinical settings, and guided by legal and ethical principles at every stage. The project examines how the patient-professional (legal) relationship is affected by the introduction of AI health decision-making, and vice versa. It also aims to evaluate whether the proposed AI models comply with ethical and legal requirements, and to understand the impact of AI health decision-making on (perceived) privacy, data protection, trust and liability.

Support

UvA Research Priority Area on AI decision making in health.

Affiliated researchers

Dr. J.W. (James) Hazel III, Faculty of Law, Health Law

Prof. dr. mr. A. (Anniek) de Ruijter, Faculty of Law, Health Law

Prof. mr. M.C. (Corrette) Ploem, Faculty of Law, Health Law