Predictive models that use data from individuals are an important source of information in medical settings. Predictive modelling, or the use of electronic algorithms to forecast future events, makes it possible to harness the power of big data to improve people’s health and reduce the cost of healthcare. However, this opportunity also raises policy, ethical, and legal questions.
An important area for the Prediction Modelling Group to explore is the involvement of service users, carers and other stakeholders in discussions about ethical issues, and the development of guidelines for the development and use of prediction models in mental health.
Blog: What can humans do to guarantee an ethical AI in healthcare?
Dr Raquel Iniesta explores the current status of AI in healthcare in a two-part blog, looking at what we need to consider to ensure its application is ethical and at current approaches that are helping this happen.
- Part I focuses on the reasons why we need an ethical framework to enable AI to work for healthcare.
- Part II focuses on what is being put in place to help enable our AI in healthcare to be ethical.
As part of Dr Iniesta’s NIHR Maudsley BRC-funded work, she has published a paper in the journal AI and Ethics which describes five facts that can help guarantee an ethical AI in healthcare. By providing this simple, evidence-based explanation of ethical AI and of who needs to be accountable, she hopes to offer guidance on the human actions that ensure an ethical implementation of AI in healthcare. The five facts are as follows:
- The four classical ethical pillars of the medical profession are valid for assessing AI ethical risks in healthcare
- AI technologies are a complement to, not a replacement for, clinicians’ knowledge
- Clinicians are accountable for their clinical decisions, and their decisions are to be respected, regardless of the assistance of an AI system
- The empowerment and education of patients is necessary for an ethical AI in healthcare
- Developers are accountable for the automated decisions provided by the tools they develop. Their awareness of, and education on, the ethical concerns can ensure a better alignment between algorithms and values
A Roadmap for an Ethical AI in Healthcare conference
The Roadmap for an Ethical AI in Healthcare conference was held at the Science Gallery London on 14 November. It featured wide-ranging discussions exploring how we can tackle the ethical dilemmas of implementing Artificial Intelligence (AI) in clinical care.
It featured a stellar line-up of expert speakers from across government, academia, the medical sector, industry and the patient community – including representatives from Google DeepMind; a former Member of the European Parliament; the Deputy Director for Life Sciences and Innovation at the Welsh Government; alongside healthcare professionals and patient representatives.
ELAXIR cards
An international group led by Dr Raquel Iniesta has created a tool, ELAXIR, to improve AI literacy and raise awareness of the ethical challenges of using AI in healthcare. It comprises a set of physical and digital cards, supported by complementary learning resources.
The tool aims to strengthen understanding of key terminology and concepts, as well as promote discussion and reflection among healthcare professionals, patients, the public, researchers, and AI developers.
ELAXIR stands for Ethical Learning of Artificial (eXplainable) Intelligence & Reflection.