PORTALE DELLA DIDATTICA



Explainable and trustworthy AI

01HFOOV, 01HFOSM

Academic Year 2025/26

Course Language

English

Degree programme(s)

Master of science-level of the Bologna process in Ingegneria Informatica (Computer Engineering) - Torino
Master of science-level of the Bologna process in Data Science And Engineering - Torino


Context
SSD          CFU   Activities                 Area context
ING-INF/05   6     D - Student's choice       Student's choice
Understanding a model's inner workings and the reasons for its decisions is important to establish trust in the outcome of a machine learning process. However, many machine learning models (e.g., deep learning models) do not disclose the internal logic that produces their predictions and are therefore called "black-box models". Explainability of a model, in its many facets, contributes to the robustness and reliability of any machine learning application. It provides support in most phases that lead from the design to the deployment of ML applications, ranging from model validation and testing to model debugging and auditing. Furthermore, explaining the outcome of ML algorithms can help end users both understand the reason for a decision and trust the model outcome. The course, elective for the Master in Computer Engineering and in Data Science and Engineering, covers explanation methods for both predictive and exploratory ML algorithms, with specific attention to their use in deployed machine learning frameworks. Experimental lab activities will allow practical evaluation of the presented explanation methods on real-world datasets, considering both the ML developer's and the end user's perspectives.
- Knowledge of the notion of trust in the different steps of a ML pipeline
- Knowledge of explanation techniques for supervised learning
- Knowledge of explanation techniques for unsupervised learning
- Knowledge of the main libraries that implement explanation methods (see the sketch after this list)
- Ability to design, develop and evaluate methods to explain a ML algorithm's outcome
- Ability to design, develop and evaluate an explainable data science pipeline
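As an illustration of the library-related outcome above, the following minimal sketch uses scikit-learn's permutation_importance, one widely available routine for model-agnostic, global feature-importance explanations. The dataset and model are illustrative assumptions, not part of the official course material.

```python
# Hedged sketch: global, model-agnostic feature importance via
# scikit-learn's permutation_importance (illustrative choices throughout).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```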
- Knowledge of the Python language
- Knowledge of ML algorithms
- Ability to implement a data science pipeline (a minimal sketch follows this list)
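For concreteness, here is a minimal sketch of the kind of data science pipeline assumed as a prerequisite, written with scikit-learn; the dataset and estimators are illustrative assumptions.

```python
# Minimal data science pipeline: preprocessing + model as one estimator.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Chaining the scaler and the classifier keeps the workflow reproducible
# and avoids leaking test-set statistics into the preprocessing step.
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipeline.fit(X_train, y_train)
print("test accuracy:", pipeline.score(X_test, y_test))
```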
- Introduction to trustworthy AI (0.6 cfu). Explanation is contextualized in the wider framework of trust in operational ML systems. The different roles of explanation are discussed. The notion of bias in data and algorithms is presented, together with its relationship with fairness. Finally, the relationship between privacy and explanation is discussed.
- Explaining ML models and algorithms (2.4 cfu). The covered topics include model-agnostic and model-dependent techniques, local vs. global explanation methods, and instance vs. group explanation methods (a minimal local-explanation sketch follows this list). A variety of explanation methods will be presented for different data types (structured, text, image, sequence, speech) and different algorithms (e.g., deep learning techniques). Different explanation forms (e.g., rules, feature importance) will be covered. Finally, the evaluation of explanation quality will be introduced. Beyond prediction methods, explanation techniques for other ML algorithms (e.g., clustering, ranking) will be discussed.
- Case study analysis and design in the lab (3.0 cfu). The course will feature both hands-on labs on lecture topics and practical projects on explainable AI applications.
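To make the "local, model-agnostic" idea above concrete, the following self-contained sketch builds a local surrogate explanation in the spirit of methods such as LIME: perturb one instance, query the black box, and fit a proximity-weighted linear model whose coefficients act as local feature importances. All datasets, models and parameter choices here are illustrative assumptions, not the course's reference implementation.

```python
# Local, model-agnostic explanation via a weighted linear surrogate.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
black_box = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

x = data.data[0]                       # the single instance to explain
rng = np.random.default_rng(0)
scale = data.data.std(axis=0)

# 1. Perturb the instance with Gaussian noise scaled per feature.
neighbours = x + rng.normal(0.0, scale, size=(500, x.size))

# 2. Query the black box on the perturbed samples.
preds = black_box.predict_proba(neighbours)[:, 1]

# 3. Weight samples by proximity to x (closer samples matter more).
dists = np.linalg.norm((neighbours - x) / scale, axis=1)
weights = np.exp(-(dists ** 2) / 2.0)

# 4. Fit the interpretable surrogate: its coefficients are the local
#    feature importances around x.
surrogate = Ridge(alpha=1.0).fit(neighbours, preds, sample_weight=weights)
for i in np.argsort(np.abs(surrogate.coef_))[::-1][:5]:
    print(f"{data.feature_names[i]}: {surrogate.coef_[i]:+.4f}")
```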
The course includes lectures and practice sessions on the lecture topics, in particular on methods to provide explanations in the data science pipeline (3.6 cfu). Students will prepare a written report on a group project assigned during the course. The course includes laboratory sessions on the design and evaluation of different explanation methods (1.2 cfu) and a case study in which students are asked to design, implement and evaluate a transparent data science pipeline (1.2 cfu). Laboratory sessions allow experimental activities with the most widespread commercial and open-source products.
Copies of the slides used during the lectures, examples of written exams and exercises, and manuals for the activities in the laboratory will be made available. All teaching material is downloadable from the course website or the teaching Portal.
Lecture slides; Exercises; Exercises with solutions; Lab exercises; Video lectures (current year);
Exam: Group project; Computer-based written test in class using POLITO platform;
The exam includes a group project and a written part. The final score considers the evaluation of both. The teacher may request an additional test to confirm the obtained evaluation.

Learning objectives assessment
The written part will assess:
- the knowledge of the explanation techniques and their main characteristics
- the working knowledge of the main libraries implementing explanation methods
The group project will assess:
- the ability to design, implement and evaluate a complete data science pipeline and its explanation
- the ability to effectively present the implemented project in written/oral form

Exam structure and grading criteria
The group project consists of designing and implementing a complete pipeline with the required explanation methods. The project is assigned before the start of the exam session and its score is valid for the entire academic year. The evaluation of the group project is based on the quality of both the proposed solution and its written/oral presentation (e.g., motivation of design choices).
The written part covers the theoretical part of the course. It includes multiple-choice and fill-in-the-blank questions based on exercises related to the explanation methods presented in the lectures. For multiple-choice questions, wrong answers are penalized. The score of each question is specified in the exam text. The written exam lasts 90 minutes. Textbooks, notes and electronic devices of any kind are not allowed.
The maximum grade for the group project is 16, and the maximum grade for the written part is 16. The final grade is the sum of the two parts. The exam is passed if the group project grade is at least 9, the written grade is at least 9, and the overall grade is at least 18. A final score of 31 or 32 is registered as 30 with honors.
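To make the grading arithmetic explicit, here is a small illustrative helper encoding the rules stated above; the function is a reading aid, not an official tool.

```python
# Illustrative encoding of the grading rules stated above (not official).
def final_grade(project: int, written: int) -> str:
    """project and written are each graded 0-16; see rules above."""
    total = project + written
    if project < 9 or written < 9 or total < 18:
        return "fail"
    if total >= 31:
        return "30 e lode"            # registered as 30 with honors
    return str(total)

print(final_grade(14, 15))  # "29"
print(final_grade(16, 16))  # "30 e lode" (31 or 32 -> 30 with honors)
print(final_grade(8, 16))   # "fail" (project grade below 9)
```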
In addition to the message sent by the online system, students with disabilities or Specific Learning Disorders (SLD) are invited to directly inform the professor in charge of the course about the special arrangements for the exam that have been agreed with the Special Needs Unit. The professor has to be informed at least one week before the beginning of the examination session in order to provide students with the most suitable arrangements for each specific type of exam.