A recent article by Nehme et al. explores the regulatory challenges surrounding AI-powered medical chatbots, using the confIAnce chatbot as a case study. The study highlights the certification processes required under the EU Medical Device Regulation (MDR) and the Swiss Medical Devices Ordinance (MedDO), emphasizing key safeguards such as data protection and quality management. In a technical commentary, Hannah van Kolfschooten from Law for Health and Life builds on these insights by addressing the growing reliance on general-purpose AI, like ChatGPT, in medical contexts.

Unlike certified medical chatbots, these AI systems are not specifically designed for healthcare but are increasingly used for tasks like summarizing medical records and drafting patient communication. Van Kolfschooten warns that such tools pose risks, including misinformation, privacy breaches, and bias, as they lack the regulatory safeguards of certified medical AI. While the MDR, MedDO, and the EU AI Act impose strict oversight on purpose-built healthcare chatbots, general-purpose AI falls into a regulatory grey area. Van Kolfschooten calls for stricter policies and clinical guidelines to ensure the responsible use of AI in medicine and to safeguard patient safety in an evolving digital healthcare landscape. 

Mr. H.B. (Hannah) van Kolfschooten LLM

Faculty of Law

Health Law (Gezondheidsrecht)