Application of Explainable Artificial Intelligence Techniques for the Assessment of Ethical Fairness in Music Recommendation Systems
Keywords EXPLAINABLE AI, RECOMMENDER SYSTEMS
Reference persons CRISTINA EMMA MARGHERITA ROTTONDI, VERA TRIPODI
Research Groups Telecommunication Networks Group
Description The design and operation of Music Recommendation Systems (MRSs) raise several ethical concerns, as such systems offer musical content based on users’ behavior and preferences. One concern is the creation of echo chambers, in which users are continuously presented with music that reinforces their existing tastes, potentially limiting exposure to new ideas and diverse cultures. Intrinsic ethical biases may also occur: the algorithms might favor certain artists or genres, thus influencing the music industry and the success of artists. In particular, issues may emerge regarding “gender fairness”, namely discrepancies in algorithm performance between male and female user groups, and “group fairness”, i.e. potential cultural bias whereby recommendation algorithms may systematically and unfairly discriminate against certain individuals or groups. Lastly, the need for transparency arises: users typically have little insight into how recommendations are generated and which underlying models produce them, which can affect trust in the service.
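As an illustration of how a fairness notion such as the “gender fairness” discrepancy described above could be made operational, the sketch below compares per-group recommendation accuracy and reports the resulting disparity. The hit-rate metric, the group labels, and all data are hypothetical placeholders for demonstration, not part of any specific MRS or dataset.

```python
def hit_rate(recommended, relevant):
    """Fraction of recommended items the user actually found relevant."""
    if not recommended:
        return 0.0
    return len(set(recommended) & set(relevant)) / len(recommended)

def group_disparity(results_by_group):
    """Per-group mean hit rate and the max gap across groups.

    results_by_group maps a group label to a list of
    (recommended_items, relevant_items) pairs, one per user.
    """
    means = {
        g: sum(hit_rate(rec, rel) for rec, rel in users) / len(users)
        for g, users in results_by_group.items()
    }
    return means, max(means.values()) - min(means.values())

# Toy evaluation data: two user groups, per-user recommendations vs. relevance.
data = {
    "group_a": [(["s1", "s2", "s3"], ["s1", "s2"]), (["s4", "s5"], ["s4"])],
    "group_b": [(["s6", "s7"], ["s8"]), (["s9", "s10"], ["s9"])],
}
means, disparity = group_disparity(data)
# A large disparity indicates that the recommender performs systematically
# better for one user group than another.
```

A real assessment would replace the toy hit rate with standard ranking metrics (e.g. NDCG or recall@k) computed on held-out interactions, but the disparity logic stays the same.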
Trustworthy recommender systems are a focus of European policies and of the global debate on regulating AI technology: ethical, legal, and regulatory perspectives are increasingly intertwined with technical considerations on how to address these challenges [6], since adherence to regulatory constraints has significant practical implications for the design and operation of such systems.
To address ethics-related concerns, the adoption of Explainable AI (XAI) methods to investigate the behavior of MRSs has recently been envisioned. XAI frameworks allow human users to better comprehend the results produced by machine learning (ML) algorithms by exposing the internal reasoning of ML models, which typically operate as “black boxes”, thus ensuring better understanding and enabling effective monitoring and management. XAI is therefore deemed a crucial building block for the ethical and fair adoption of ML models. Motivated by these considerations, this project aims to realize a proof-of-concept framework, based on XAI algorithms, for assessing the ethical fairness of MRSs.
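One family of model-agnostic XAI techniques probes a black-box scorer by perturbing its inputs and measuring how the output shifts. The sketch below applies permutation-style sensitivity analysis to a hypothetical relevance scorer for (user, track) pairs; the scorer, its weights, and the feature names are invented stand-ins, not the project's actual models.

```python
import random

def score(features):
    """Hypothetical black-box relevance scorer for a (user, track) pair."""
    return (0.6 * features["genre_match"]
            + 0.3 * features["artist_popularity"]
            + 0.1 * features["track_recency"])

def sensitivity(scorer, samples, feature, trials=200, seed=0):
    """Mean absolute score change when one feature value is swapped in
    from another randomly chosen sample (all other features held fixed)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        original = rng.choice(samples)
        donor = rng.choice(samples)
        perturbed = dict(original)
        perturbed[feature] = donor[feature]
        total += abs(scorer(perturbed) - scorer(original))
    return total / trials

# Small grid of toy feature vectors covering low/high values of each feature.
samples = [
    {"genre_match": g, "artist_popularity": p, "track_recency": r}
    for g in (0.0, 1.0) for p in (0.2, 0.8) for r in (0.1, 0.9)
]
impact = {f: sensitivity(score, samples, f)
          for f in ("genre_match", "artist_popularity", "track_recency")}
# A larger impact means the scorer leans more heavily on that feature --
# e.g. an outsized impact of artist popularity could flag a popularity bias
# of the kind discussed above.
```

In the actual framework, established XAI toolkits (e.g. SHAP- or LIME-style attribution methods) would play this role on the trained open-source MRSs; the sketch only conveys the underlying idea.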
The framework will leverage open-source MRSs and publicly available datasets for their training and testing, and will incorporate XAI mechanisms to enable the ethical assessment of the recommendation mechanisms. The analysis of the obtained results will lead to the definition of implementation guidelines for the realization of “ethical-by-design” MRSs. More precisely, from an ethical perspective, this analysis will fit into and adopt so-called Value Sensitive Design (VSD), a theoretical approach to technological design that aims to synthesize methods and applications in a design process that involves ethical values at every stage [9, 10]. In the debate on the ethics of technology, VSD has proved a beneficial approach not only for minimizing discriminatory issues in design and for detecting or countering biases in software systems, but also for addressing their cognitive and social origins. In computer ethics in particular, VSD was developed explicitly to address the ethical aspects of design and to examine how ethical values (such as fairness, fairness in machine learning, inclusivity, accountability, and transparency) can be incorporated into the design process.
Required skills Python programming; basic knowledge of machine learning algorithms
Deadline 09/10/2025
SUBMIT YOUR APPLICATION