Development of tools to mitigate bias in automated decision making
Keywords ARTIFICIAL INTELLIGENCE, AUTOMATED DECISION MAKING, DATA BIAS, DATA ETHICS, FAIR MACHINE LEARNING
Reference persons MARCO TORCHIANO, ANTONIO VETRO'
Research Groups Centro Nexa su Internet & Società, DAUIN - GR-16 - SOFTWARE ENGINEERING GROUP - SOFTENG, DAUIN - GR-22 - Nexa Center for Internet & Society - NEXA
Description Automated decision-making (ADM) systems may affect multiple aspects of our lives. In particular, they can result in systematic discrimination against specific population groups, in violation of the EU Charter of Fundamental Rights. One of the potential causes of discriminatory behavior, i.e. unfairness, lies in the quality of the data used to train such ADM systems.
Using a data quality measurement approach combined with risk management, both defined in ISO standards, the work focuses on the identification of techniques to measure and mitigate the risk of unfairness and discrimination in ADM systems.
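As a flavor of the kind of measurement involved, the sketch below computes one widely used unfairness indicator, the disparate impact ratio (the ratio of positive-outcome rates between an unprivileged and a privileged group). The function name, the toy data, and the 0.8 "four-fifths" threshold are illustrative assumptions, not the specific techniques this project will develop.

```python
# Illustrative sketch (assumption, not the project's method):
# disparate impact ratio between two groups in ADM outcomes.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of positive-outcome rates: unprivileged / privileged."""
    def positive_rate(group):
        # Outcomes of the members of the given group (1 = favourable).
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(members) / len(members)
    return positive_rate(unprivileged) / positive_rate(privileged)

# Toy data: 1 = favourable automated decision, 0 = unfavourable.
outcomes = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0]
groups   = ["B", "B", "A", "A", "B", "A", "A", "A", "B", "B"]

ratio = disparate_impact(outcomes, groups, unprivileged="B", privileged="A")
print(f"disparate impact ratio: {ratio:.2f}")
# A ratio below 0.8 is commonly flagged as potential disparate impact
# (the "four-fifths rule" used in US employment law).
```

On this toy data, group B receives favourable decisions far less often than group A, so the ratio falls well below 0.8 and would be flagged for mitigation.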
Required skills Basic concepts of data analysis, programming in R or Python
Deadline 09/03/2022
SUBMIT YOUR APPLICATION