C. Pachón García, P. Delicado, V. Vilaplana Besler
Interpretability is currently one of the most active topics in machine learning research. The number of publications in this field has grown in recent years as the complexity of machine learning models has increased, partly due to the emergence of deep learning models.
For an image classification problem, the goal of a local interpretability method is to identify which parts of a given image are the most relevant for the model's prediction. Although many local interpretability methods exist, there is no method that explains an image classification model globally.
In this work we aim to develop a method that explains an image classification model globally, so that humans can understand what a machine learning model has learned when it classifies images.
Keywords: Interpretability, Explainability, Explainable Artificial Intelligence, Interpretable Machine Learning, XAI, IML, Artificial Intelligence, Deep Learning, Machine Learning
Scheduled
GT04 Multivariate Analysis and Classification IV. Latest Advances in Explainable Machine Learning
June 7, 2022 6:40 PM
Cloister room