Research Line: Health Innovations
AI-based decision support tools affect health providers’ clinical risk assessments, which are rooted in their professional autonomy as medical experts, and can have important implications for the patient-provider relationship. On the one hand, medical professional autonomy is grounded in legal responsibility and potential medical liability. On the other hand, patient autonomy underlies informed consent and the protection of patients’ privacy and human dignity. Introducing AI tools for health decision-making can therefore critically affect the legal relationship between patient and medical professional, which is built on trust and the protection of autonomy.
This post-doc project, nested in the UvA Research Priority Area on AI decision making in health, aims to establish a workflow and infrastructure for developing AI decision-making methods that are built on real-world data, evaluated in actual clinical settings, and guided by legal and ethical principles at every stage. The project examines how the patient-professional (legal) relationship is affected by the introduction of AI health decision-making, and vice versa. It also aims to evaluate whether the proposed AI models comply with ethical and legal requirements and to understand the impact of AI health decision-making on (perceived) privacy, data protection, trust and liability.