21 February 2025
AI is increasingly used in mental healthcare for tasks ranging from administrative support and patient monitoring to digital therapies and professional decision-making. While AI systems can enhance accessibility, personalize treatment, and reduce the burden on healthcare systems, they also pose significant ethical and practical challenges, including privacy concerns, potential bias, oversurveillance, and the risk of depersonalizing care.
The study examines whether the recently adopted AI Act sufficiently addresses these concerns and offers recommendations for responsible AI implementation. Key proposals include regulation, transparency, human rights-based approaches, and the active involvement of affected communities in shaping AI policy. Hannah van Kolfschooten's contribution highlights the legal and policy dimensions of AI in mental healthcare, underscoring the importance of safeguarding ethical standards and ensuring equitable access to care.