In the context of a University-wide research programme on AI Health Decision-making, we are looking to understand, for two AI applications in the clinic, the impact of AI decision-making on professional autonomy and beyond, within the patient-physician relationship. The legal research combines legal analysis with qualitative case study analysis. Patient and medical professional autonomy form the legal and bioethical basis for medical decision-making. Patient autonomy underlies informed consent and the protection of patients' privacy and human dignity. Medical professional autonomy is grounded in understandings of legal responsibility and potential medical liability. The patient-medical professional legal relationship therefore builds on protecting autonomy as the key to their relationship of trust. In this regard, the research will inform a legal analysis of the impacts of AI health decision-making on (perceived) privacy, data protection, trust and liability. The results will provide insights for the future development of new AI models and their clinical implementation.
You will carry out the research in close collaboration with the other project team members, Professor Anniek de Ruijter at the Law Center for Health and Life and Professor Corette Ploem at the Amsterdam University Medical Centers, leading to at least three papers.
If needed, you will engage autonomously in the collection of empirical data (interviews, document analysis).
The position is open to research-minded (research) master's graduates or PhD graduates. You have:
Your research will be carried out within the Law Center for Health and Life at the Faculty of Law (section Public Law) and in the context of the Research Priority Area, with at least one day a week spent at the Amsterdam University Medical Centers.
The position is first and foremost a research position; however, the successful applicant will occasionally have the opportunity to be involved in teaching in the Health Law Master's programme at the Law School.