C. Pachón García, P. Delicado, V. Vilaplana Besler
Interpretability is currently one of the most active research topics. The number of publications in this field has grown rapidly as machine learning models have become more complex, in part due to the emergence of deep learning.
For an image classification problem, the goal of a local interpretability method is to identify which parts of a given image are most relevant to the model's prediction. Although many local interpretability methods exist, there is no method that explains an image classification model globally.
In this work we aim to develop a method that explains an image classification model globally, so that humans can understand what a machine learning model has learned when it classifies images.
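As a point of reference for the local methods mentioned above, the following is a minimal occlusion-sensitivity sketch (it is not the global method proposed in this work): it masks image patches one at a time and records the drop in the predicted class score. The `predict` function, patch size, and toy model below are illustrative placeholders, not part of the original abstract.

```python
import numpy as np

def occlusion_map(image, predict, target_class, patch=8, baseline=0.0):
    """Local explanation by occlusion: slide a masking patch over the image
    and record how much the target-class score drops at each location."""
    h, w = image.shape[:2]
    base_score = predict(image)[target_class]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # mask one patch
            score = predict(occluded)[target_class]
            heatmap[i // patch, j // patch] = base_score - score  # importance
    return heatmap

# Toy usage with a stand-in "model": the score for class 0 is the mean
# brightness of the centre region, so the central patches light up.
def toy_predict(img):
    centre = img[12:20, 12:20].mean()
    return np.array([centre, 1.0 - centre])

rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(occlusion_map(img, toy_predict, target_class=0, patch=8))
```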
Keywords: Interpretability, Explainability, Explainable Artificial Intelligence, Interpretable Machine Learning, XAI, IML, Artificial Intelligence, Deep Learning, Machine Learning
Scheduled
GT04 Análisis Multivariante y Clasificación IV. Latest Advances in Explainable Machine Learning
June 7, 2022, 18:40
Sala de Claustros