Prediction Modelling Presentation: Marta Maslej
Speaker Biography
Marta Maslej, PhD, is a Staff Scientist at the Krembil Centre for Neuroinformatics at CAMH and an Assistant Professor in the Department of Psychiatry at the University of Toronto. She co-leads the Predictive Care Team, which uses interdisciplinary methods to study the impacts of AI on mental health, with a focus on responsible integration and health equity. As part of this work, she uses AI to derive insights from clinical data, with the aim of improving assessment, informing treatment decisions, and identifying and mitigating bias.
Abstract for the presentation
Artificial intelligence (AI) is increasingly being explored to support a wide range of applications in mental health. A major focus of this work has been on training machine learning models to predict clinical outcomes and risks, with the goal of enhancing decision-making around assessment and care, referred to as clinical decision support. The hope is that integrating predictions into the clinic at the point of decision-making will meaningfully impact patient care, translating to more accurate assessment, personalized treatment, and targeted prevention or intervention. However, a growing body of evidence shows that AI models can reinforce or even amplify existing biases, particularly against marginalized or underrepresented patient groups. Clinical oversight, or the presence of a ‘human in the loop’, is often proposed as a safeguard, but it remains unclear whether this approach is sufficient to prevent AI from exacerbating health inequities when deployed. This presentation will explore the health equity implications of AI-based clinical decision support for assessing inpatient violence risk in acute psychiatry. Drawing on analyses of electronic health records at the Centre for Addiction and Mental Health (CAMH), findings demonstrate how social and systemic biases can become embedded in training datasets, which, in turn, influence AI model predictions in ways that disadvantage socially and racially marginalized patients. The presentation will also share insights from experiments in human-computer interaction, which suggest that clinical oversight may not be sufficient to mitigate these biases. The presentation concludes by emphasizing the need to re-imagine how AI is designed and deployed in contexts where it runs the risk of exacerbating existing health inequities.
Please contact maudsley.brc@kcl.ac.uk if you have any questions.
Prediction Modelling Presentations
The Prediction Modelling Group at the NIHR Maudsley BRC hosts monthly online presentations on machine learning and prediction modelling and their applications to solving healthcare problems. Speakers from the UK and abroad present their work on developing and/or using machine learning and prediction modelling methods to answer questions such as how to choose the best treatment for a patient or how to improve the diagnosis of a disease. Find out more: www.maudsleybrc.nihr.ac.uk/facilities/prediction-modelling-group/