New interactive tool ELAXIR helps to empower patients in the ethical use of AI in healthcare

ELAXIR cards

Researchers have created a new tool, ELAXIR, to improve AI literacy and raise awareness of the ethical challenges of using AI in healthcare. It comprises a set of physical and digital cards, supported by complementary learning resources.

The tool aims to strengthen understanding of key terminology and concepts, as well as promote discussion and reflection among healthcare professionals, patients, the public, researchers, and AI developers. ELAXIR stands for Ethical Learning of Artificial (eXplainable) Intelligence & Reflection.

AI is increasingly being used in healthcare research and practice, from approaches that analyse electronic health records to novel methods that power remote monitoring. There is growing interest in the ethical use of AI in healthcare, including concepts such as patient data privacy, algorithmic bias, transparency, and human oversight. Building literacy around these principles is essential to ensuring AI is used responsibly as a valuable healthcare resource.

The ELAXIR project was led by Dr Raquel Iniesta, Reader in Statistical Learning for Precision Medicine at the Institute of Psychiatry, Psychology & Neuroscience, King’s College London. The cards were co-designed with an international team of researchers (some with lived experience), developers and clinicians, and with patient and public involvement and engagement (PPIE). The project received funding from RAI UK, UKRI and the NIHR Maudsley BRC.

The basis of the ELAXIR tool is the “patient-extended” collaborative model (Iniesta, 2025), which proposes that for the use of AI in healthcare to be ethical, the patient must understand how AI works and what their rights are, so that they are empowered to make decisions about the use of AI in their care. This requires collaboration between developers, clinicians, and patients to design, implement and use AI safely, with a patient-centred approach at its core.

Dr Raquel Iniesta said: 

“Time magazine recently named the Architects of AI its ‘2025 Person of the Year’. While its potential is enormous, AI also poses serious risks, especially for vulnerable populations. There are growing concerns about mental health harms and, in rare and extreme cases, links to suicide or violence. We have created ELAXIR cards to empower everyone - not just experts - with the knowledge needed to understand these risks and use AI responsibly”. 

The ELAXIR deck of cards is made up of three parts: 

Master Cards: These explain foundational concepts such as ‘AI’ and ‘Ethics’.

Scenario Cards: 13 thought-provoking scenarios co-designed by clinicians, developers, and patients to prompt discussion on the use of AI in healthcare. Each is based on a situation that could arise in a patient’s healthcare journey, for example:

  • Should AI be used to make decisions about a person’s treatment without them understanding how it works? 
  • What should a patient do when a doctor and AI give conflicting advice? 
  • How should bias in AI be addressed, for example when AI works less well for people from minoritised communities?      

Ethical AI Dictionary: This is an A to Z explaining key terminology such as ‘algorithm’ and ‘risk’. The words are linked to the scenarios.


The cards are designed to be accessible to a range of audiences: they are suitable for discussions with PPIE groups and can also be used with children aged 10 and upwards. The cards are accompanied by a website, elaxircards.com, with additional resources, articles, podcasts and research expanding on the topic of ethical AI in healthcare.

Dr Raquel Iniesta has been developing the “patient-extended” collaborative model since October 2023. Central to this work is a commitment to educating all stakeholders (patients, clinicians and developers) on medical ethics and the fundamentals of AI. The approach seeks to prevent dehumanisation while fostering empowerment within an AI-assisted, patient-centred healthcare system.

The model forms part of a ‘five-facts’ approach: a simple, evidence-based explanation of what ethical AI requires and who should be accountable, which acts as a framework for ethical AI in healthcare and was published in the journal AI and Ethics in October 2023. Grounded in guidance from the World Health Organization (WHO) and current policies such as the EU AI Act and GDPR, the ELAXIR cards mark the next step in bringing this framework into practice.

 

For a copy of the cards, please contact raquel.iniesta@kcl.ac.uk


Tags: Informatics - Patient and Carer Involvement and Engagement

By NIHR Maudsley BRC at 15 Jan 2026, 10:47 AM
