Over a very short period of time, many areas of science have made a sharp transition towards data-driven methods. This transition is particularly evident in the life sciences and, more specifically, in biomedicine, bioinformatics and healthcare.
This might seem a perfect scenario for the use of data analytics, from multivariate statistics to machine learning (ML) and computational intelligence (CI), but it also poses some serious challenges. One of them is the lack of interpretability/comprehensibility/explainability of the models obtained through data analysis. This can be a bottleneck, especially for complex nonlinear models, which are often affected by what has come to be known as the "black box syndrome".
In areas such as medicine and healthcare, failing to address this challenge may seriously limit the chances of adoption, in real practice, of computer-based medical decision support systems (MDSS).
Interpretability and explainability have become hot research issues, for several reasons. One is the soaring success of deep learning artificial neural networks in recent years: as extreme "black box" cases, these models risk not being adopted in areas where human decision making is key and decisions must be explained. Another is the European Union's General Data Protection Regulation (GDPR). Enforceable since May 2018, it mandates a right to explanation of all decisions made by automated or artificially intelligent algorithmic systems. Needless to say, this directly involves data analytics and is likely to have an impact on healthcare, medical decision making, and even bioinformatics through the use of genomics in personalized medicine.
In this session, we call for papers that broach the topics of interpretability/comprehensibility/explainability of data models (with a non-reductive focus on ML and CI) in biomedicine, bioinformatics and healthcare, from different viewpoints, including:
- Enhancement of the interpretability of existing data analysis techniques in problems related to biomedicine, bioinformatics and healthcare.
- New methods of model interpretation/explanation in problems related to biomedicine, bioinformatics and healthcare.
- Case studies in biomedicine, bioinformatics and healthcare in which interpretability/comprehensibility/explainability is a key aspect of the investigation.
- Methods to enhance interpretability in safety-critical areas (such as, for instance, critical care).
- Issues of ethics and social responsibility (including governance, privacy and anonymization) in biomedicine, bioinformatics and healthcare.
Prof. Alfredo Vellido
Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center, Universitat Politècnica de Catalunya, Barcelona, Spain.
Prof. Sandra Ortega-Martorell
Department of Applied Mathematics, Liverpool John Moores University, Liverpool, UK.
Prof. Alessandra Tosi
Mind Foundry Ltd., Oxford, UK.
Prof. Iván Olier Caparroso
MMU Machine Learning Research Lab, Manchester Metropolitan University, Manchester, UK.