A. García Galindo, I. Unceta, I. Cordon
As machine learning is increasingly used to build automated decision systems in social and sensitive contexts, ethical concerns about its implications and performance have drawn growing attention. In this work, we explore how the bias inherited by a machine learning model can be successfully addressed and mitigated by means of specific fairness-aware methods. Specifically, we address a use case based on the prediction of diabetes in intensive care units, seeking to develop fair models that yield equitable results regardless of the demographic group to which each patient belongs.
Keywords: algorithmic fairness, bias in machine learning, diabetes prediction
Scheduled
Posters IV
June 10, 2022, 10:10
Hall de la Facultad