
Explaining and monitoring black-box machine learning models


DAUIN research groups - GR-09 - GRAphics and INtelligent Systems - GRAINS

Description

Responsible use of AI should extend beyond development to cover the whole life cycle, from the initial data collection phase to model deployment and continuous updating. The purpose of this thesis is to study, design, implement and experimentally evaluate techniques to explain and monitor "black-box" ML models, that is, models for which only the inputs and outputs are known and available at inference time. The thesis will focus in particular on the topics of concept drift detection and explainability.

Specifically, it is known that the performance of ML models may degrade over time due to a class of phenomena generally denoted as model, concept or data drift. This applies both to conventional ML models and to Deep Learning (DL) models, and to both structured (tables) and unstructured (image, text, etc.) inputs. The research activities will focus on identifying, designing and comparing effective distance metrics to evaluate when, to what extent, and possibly why the current distribution of the inputs and outputs of the model departs from a reference distribution. Many of the existing concept drift detection methods, especially for unstructured data, assume that the ML/DL model is known and compute the distribution of the inputs based on the inner features computed by the model itself. The goal of this thesis is to generalize and extend such methods to settings in which the ML model cannot be inspected.
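To make the idea concrete, here is a minimal sketch of one possible black-box drift check (an illustrative assumption on my part, not the method prescribed by the proposal): since only model inputs and outputs are observable, one can compare the empirical distribution of the model's output scores in a current window against a reference window, using a two-sample Kolmogorov-Smirnov statistic as the distance metric. The threshold value below is hypothetical; in practice it would be calibrated, e.g. via permutation tests.

```python
import numpy as np

def ks_statistic(reference, current):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the empirical CDFs of two score samples."""
    reference = np.sort(np.asarray(reference, dtype=float))
    current = np.sort(np.asarray(current, dtype=float))
    # Evaluate both empirical CDFs on the pooled sample points.
    grid = np.concatenate([reference, current])
    cdf_ref = np.searchsorted(reference, grid, side="right") / reference.size
    cdf_cur = np.searchsorted(current, grid, side="right") / current.size
    return float(np.max(np.abs(cdf_ref - cdf_cur)))

def drift_detected(reference_scores, current_scores, threshold=0.2):
    # threshold is a placeholder; a real monitor would calibrate it.
    return ks_statistic(reference_scores, current_scores) > threshold

# Example: reference scores vs. a batch whose score distribution has shifted.
rng = np.random.default_rng(0)
ref = rng.normal(0.3, 0.1, 1000)   # scores under the reference distribution
cur = rng.normal(0.6, 0.1, 1000)   # scores after a simulated drift
print(drift_detected(ref, cur))    # the shifted batch should flag drift
```

Crucially, this sketch never inspects the model's internals, matching the black-box setting described above; richer metrics (e.g. MMD, or distances over learned embeddings of unstructured inputs) are the kind of alternative the thesis would compare.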

Required knowledge: programming skills; experience with PyTorch or another deep learning framework; good analytical and mathematical skills.

Proposal valid until: 16/10/2022

© Politecnico di Torino
Corso Duca degli Abruzzi, 24 - 10129 Torino, ITALY