Investigating the Duality of Interpretability and Explainability in Machine Learning
36th IEEE International Conference on Tools with Artificial Intelligence (ICTAI)
Published on September 13, 2024 by Moncef Garouani, Josiane Mothe, Ayah Barhrhouj and Julien Aligon
DOI: In press

Abstract
The rapid evolution of machine learning (ML) has led to the widespread adoption of complex “black box” models, such as deep neural networks and ensemble methods. These models exhibit exceptional predictive performance, making them invaluable for critical decision-making across diverse domains within society. However, their inherently opaque nature raises concerns about transparency and interpretability, undermining their trustworthiness as decision support systems. To lower this barrier to high-stakes adoption, the research community has concentrated on developing methods to explain black box models rather than on building models that are inherently interpretable. Designing inherently interpretable models from the outset, however, can pave the path towards responsible and beneficial applications in the field of machine learning. In this position paper, we clarify the chasm between explaining black boxes and adopting inherently interpretable models. We emphasize the imperative need for model interpretability and, with the aim of attaining better (i.e., more effective or efficient w.r.t. predictive performance) and more trustworthy predictors, provide an experimental evaluation of recent hybrid learning methods that integrate symbolic knowledge into neural network predictors. We demonstrate how interpretable hybrid models could potentially supplant black box ones in the healthcare and economic domains.