Experts from several universities across 50 countries have published a landmark consensus paper introducing FUTURE-AI, a framework for trustworthy and ethical AI in healthcare.
The study, led by Prof. Karim Lekadir from the University of Barcelona, Spain, involved a diverse group of 117 experts from 50 countries who collaborated over three years to develop the FUTURE-AI framework.
Among them were representatives from leading universities and research institutions, including Imperial College London, the University of Oxford, the Technical University of Munich, Stanford University School of Medicine, Harvard Medical School, Helmholtz Munich, and Macquarie University in Sydney.
The consensus paper provides guidelines for the development and deployment of trustworthy AI tools in healthcare. The framework includes best practices and recommendations covering the entire AI lifecycle, from design, development, and validation to regulation, deployment, and monitoring.
Six Key Principles of FUTURE-AI
In short, the FUTURE-AI framework serves as a code of practice for AI in healthcare, built around six fundamental principles:
Fairness: AI tools should work equally well for everyone, no matter their age, gender, or background.
Universality: AI tools should be adaptable to different healthcare systems and settings around the world.
Traceability: AI tools should be closely monitored to ensure they work as expected and can be fixed if problems arise.
Usability: AI tools should be easy to use and fit well into the daily routines of doctors and healthcare workers.
Robustness: AI tools should remain accurate under the variations encountered in real-world clinical settings, and should be trained, evaluated, and optimized accordingly.
Explainability: AI tools should be able to explain their decisions clearly so doctors and patients can understand them.
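Principles like Fairness translate into concrete checks during validation. As a minimal illustrative sketch (not taken from the FUTURE-AI paper itself), one common practice is to compare a model's accuracy across patient subgroups; the record format, group labels, and helper names below are assumptions for the example:

```python
# Illustrative sketch of a subgroup fairness check: compare a model's
# accuracy per patient subgroup and report the largest gap. All data
# and function names here are hypothetical, for demonstration only.

def subgroup_accuracy(records):
    """Compute accuracy per subgroup from (group, prediction, label) tuples."""
    totals, correct = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

def fairness_gap(accuracies):
    """Largest accuracy difference between any two subgroups."""
    return max(accuracies.values()) - min(accuracies.values())

# Toy evaluation data: (subgroup, model prediction, true label)
records = [
    ("female", 1, 1), ("female", 0, 0), ("female", 1, 0), ("female", 1, 1),
    ("male", 1, 1), ("male", 0, 0), ("male", 0, 0), ("male", 1, 1),
]
acc = subgroup_accuracy(records)
print(acc)                # {'female': 0.75, 'male': 1.0}
print(fairness_gap(acc))  # 0.25
```

In practice, a large gap between subgroups would trigger further data collection or model adjustment before deployment, which is the kind of lifecycle check the framework's recommendations formalize.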