A recent article by Nehme et al. explores the regulatory challenges surrounding AI-powered medical chatbots, using the confIAnce chatbot as a case study. The study highlights the certification processes required under the EU Medical Device Regulation (MDR) and the Swiss Medical Devices Ordinance (MedDO), emphasizing key safeguards such as data protection and quality management. In a technical commentary, Hannah van Kolfschooten from Law for Health and Life builds on these insights by addressing the growing reliance on general-purpose AI, like ChatGPT, in medical contexts.

Unlike certified medical chatbots, these AI systems are not specifically designed for healthcare but are increasingly used for tasks like summarizing medical records and drafting patient communication. Van Kolfschooten warns that such tools pose risks, including misinformation, privacy breaches, and bias, as they lack the regulatory safeguards of certified medical AI. While the MDR, MedDO, and the EU AI Act impose strict oversight on purpose-built healthcare chatbots, general-purpose AI falls into a regulatory grey area. Van Kolfschooten calls for stricter policies and clinical guidelines to ensure the responsible use of AI in medicine and to safeguard patient safety in an evolving digital healthcare landscape. 

Mr. H.B. (Hannah) van Kolfschooten LLM

Faculty of Law

Health Law