01URTOV - Machine learning and pattern recognition

Academic Year 2022/23

Course Language

English

Course degree

Master of science-level of the Bologna process in Ingegneria Informatica (Computer Engineering) - Torino

Course structure

Teaching | Hours |
---|---|
Lectures | 40 |
Laboratory sessions | 20 |
Tutoring | 19.5 |

Teachers

Teacher | Status | SSD | h.Les | h.Ex | h.Lab | h.Tut | Years teaching |
---|---|---|---|---|---|---|---|
Cumani Sandro | Fixed-term researcher (L. 240/10, art. 24-B) | ING-INF/05 | 40 | 0 | 20 | 0 | 3 |

Teaching assistant

Context

SSD | CFU | Activities | Area context |
---|---|---|---|
ING-INF/05 | 6 | B - Core (caratterizzanti) | Computer engineering |

The course aims at providing a solid introduction to machine learning, a branch of artificial intelligence that deals with the development of algorithms able to extract knowledge from data, with a focus on pattern recognition and classification problems. The course will cover the basic concepts of statistical machine learning, both from the frequentist and the Bayesian perspectives, and will be focused on the broad class of generative linear Gaussian models and discriminative classifiers based on logistic regression and support vector machines. The objective of the course is to provide the students with solid theoretical bases that will allow them to select, apply and evaluate different classification methods on real tasks. The students will also acquire the required competencies to devise novel approaches based on the frameworks that will be presented during the classes. The course will include laboratory activities that will allow the students to practice the theoretical notions on real data using modern programming frameworks that are widely employed both by research communities and companies.

At the end of the course the students will:
- know and understand the basic principles of statistical machine learning applied to pattern recognition and classification;
- know the principal techniques for classification, including generative linear Gaussian models and discriminative approaches based on logistic regression and support vector machines, among others;
- understand the theoretical motivations behind different classification approaches, their main properties and domains of application, and their limitations;
- be able to implement the different algorithms using widespread programming frameworks (Python);
- be able to apply different methods to real tasks, to critically evaluate their effectiveness and to analyze which strategies are better suited to different applications;
- be able to transfer the acquired knowledge and capabilities to solve novel classification problems, developing novel methods based on the frameworks that will be discussed during classes.

The students should have basic knowledge of probability and statistics, linear algebra, calculus and programming.

Machine learning and pattern recognition
- Introduction and definitions
Probability theory concepts
- Random Variables
- Estimators
- The Bayesian framework
Introduction to Python
- The language
- Main numerical libraries
Decision Theory
- Inference, expected loss
- Model taxonomy: generative and discriminative approaches
- Model optimization, hyperparameter selection, cross-validation
Model evaluation
- Classification scores and log-likelihood ratios
- Detection Cost Functions and optimal Bayes decisions
Dimensionality reduction
- Principal Component Analysis (PCA)
- Linear Discriminant Analysis (LDA)
Generative Gaussian models
- Generative Gaussian classifiers: univariate Gaussian, Naive Bayes, multivariate Gaussian (MVG)
- Tied covariance MVG and LDA
Logistic Regression (LR)
- From Tied MVG to LR
- LR as ML solution for class labels
- Binary and multiclass cross-entropy
- From MVG to Quadratic LR
- LR as empirical risk minimization
- Overfitting and regularization
Support Vector Machines (SVM)
- Optimal classification hyperplane: the maximum margin definition
- Margin maximization and L2 regularization
- SVM as minimization of classification errors
- Primal and dual SVM formulation
- Non-linear extension: brief introduction to kernels
Density estimation and latent variable models
- Gaussian mixture models (GMM)
- The Expectation Maximization algorithm
Continuous latent variable models: Linear-Gaussian Models
- Linear regression
- Linear regression and Tied MVG
- MVG with unknown class means: Probabilistic LDA (PLDA)
- Bayesian MVG
- Factor Analysis: PLDA, Probabilistic PCA
Approximate inference basics
- Variational Bayes
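
To give a concrete flavour of the laboratory activities, the following is a minimal sketch of Principal Component Analysis in Python with NumPy (the numerical library introduced in the course). Function and variable names are illustrative, not part of the official course material.

```python
import numpy as np

def pca_project(X, m):
    """Project the samples (columns of X) onto the m leading principal directions."""
    mu = X.mean(axis=1, keepdims=True)        # empirical mean
    C = (X - mu) @ (X - mu).T / X.shape[1]    # empirical covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    P = eigvecs[:, ::-1][:, :m]               # m directions of largest variance
    return P.T @ (X - mu)

# toy dataset: 100 samples in 2 dimensions, perfectly correlated
t = np.linspace(0.0, 1.0, 100)
X = np.vstack([t, 2.0 * t])
Y = pca_project(X, 1)
print(Y.shape)  # (1, 100)
```

Since the toy data lie exactly on a line, the single retained direction captures all of the variance of the original samples.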

Introduction
- Machine learning and pattern recognition
- Probability theory concepts
- Python: language, main numerical libraries
Decision Theory
- Inference and decisions
- Model taxonomy: generative and discriminative approaches
- Model optimization, hyperparameter selection, cross-validation
Model evaluation
- Classification scores and log-likelihood ratios
- Detection Cost Functions and optimal Bayes decisions
Dimensionality reduction
- Principal Component Analysis (PCA)
- Linear Discriminant Analysis (LDA)
Generative models
- Generative Gaussian classifiers: univariate Gaussian, Naive Bayes, multivariate Gaussian (MVG)
- Tied covariance MVG and LDA
- Categorical and Multinomial classifiers
Logistic Regression (LR)
- Tied MVG and LR
- LR as Maximum Likelihood solution for class labels
- Binary and multiclass cross-entropy
- LR as empirical risk minimization
- Overfitting and regularization
- MVG and Quadratic LR
Support Vector Machines (SVM)
- Optimal classification hyperplane: the maximum margin definition
- Soft margin and L2 regularization
- Primal and dual SVM formulation
- Non-linear extension: brief introduction to kernels
Density estimation and latent variable models
- Gaussian mixture models (GMM)
- The Expectation Maximization (EM) algorithm
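
As an example of the kind of implementation work done in the laboratories, here is a deliberately simplified sketch of the Expectation Maximization algorithm for a one-dimensional Gaussian mixture, written with NumPy; initialization strategy and naming are illustrative choices, not the course's prescribed conventions.

```python
import numpy as np

def em_gmm_1d(x, K=2, n_iter=100):
    """Fit a K-component 1-D Gaussian mixture model with plain EM."""
    w = np.full(K, 1.0 / K)                        # mixture weights
    mu = np.quantile(x, np.linspace(0.1, 0.9, K))  # spread-out initial means
    var = np.full(K, x.var())                      # initial variances
    for _ in range(n_iter):
        # E-step: responsibilities gamma[k, n] = P(component k | x_n)
        log_pdf = -0.5 * (np.log(2 * np.pi * var[:, None])
                          + (x - mu[:, None]) ** 2 / var[:, None])
        log_joint = np.log(w[:, None]) + log_pdf
        gamma = np.exp(log_joint - log_joint.max(axis=0))
        gamma /= gamma.sum(axis=0)
        # M-step: re-estimate weights, means and variances
        Nk = gamma.sum(axis=1)
        w = Nk / x.size
        mu = (gamma @ x) / Nk
        var = (gamma @ x ** 2) / Nk - mu ** 2
    return w, mu, var

# two well-separated clusters of 500 points each
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(3.0, 1.0, 500)])
w, mu, var = em_gmm_1d(x)
print(np.sort(mu))  # estimated means, close to -2 and 3
```

Each EM iteration is guaranteed not to decrease the data log-likelihood; with well-separated clusters the estimated means converge close to the true component means.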

The course will include 3 hours of lectures and 1.5 hours of laboratory per week. The lectures will focus both on theoretical and practical aspects, and will include open discussions aimed at developing suitable solutions for different problems. The laboratories will allow the students to implement most of the techniques that will be presented during the lectures, and to apply the learned methods to real data.

[1] Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag, Berlin, Heidelberg.
[2] Kevin P. Murphy. 2012. Machine Learning: A Probabilistic Perspective. The MIT Press.
Additional material, including slides and code fragments, will be made available on the course website.

The exam will assess the knowledge of the course topics, and the ability of the candidate to apply such knowledge and the developed skills to solve specific problems.
The exam will consist of two parts:
- A project to be developed during the course. The students will be able to choose individual or (small) group projects among a set of possible choices (max. 10 points).
- A written examination (max. 20 points).
The final mark will be the sum of the report and written exam marks. To pass the exam, the report mark must be at least 5/10, the written exam mark must be at least 10/20, and the final mark must be at least 18/30.
The projects will address specific classification tasks. For each project, a dataset will be provided, and the students will have to develop suitable models based on the topics presented during lectures. Each candidate will have to provide a technical report detailing the employed methodology and a critical analysis of the obtained results. The report will assess:
- The degree of understanding of the theoretical principles of statistical machine learning for pattern recognition
- The ability of the student to analyze a specific problem, assessing which approaches, among those that have been presented, are more suited to solve the task
- The ability of the student to implement, apply and possibly extend the studied methods to devise suitable classifiers for the specific case study
- The ability of the student to critically evaluate the effectiveness of the proposed approaches.
The written examination will consist of open questions covering the topics presented during the lectures. It will assess:
- The theoretical understanding of the basic principles of statistical machine learning for pattern recognition
- The knowledge and understanding of the different approaches that have been presented during the lectures
- The ability of the student to critically analyze and evaluate the different approaches.

In addition to the message sent by the online system, students with disabilities or Specific Learning Disorders (SLD) are invited to directly inform the professor in charge of the course about the special arrangements for the exam that have been agreed with the Special Needs Unit. The professor has to be informed at least one week before the beginning of the examination session in order to provide students with the most suitable arrangements for each specific type of exam.

© Politecnico di Torino

Corso Duca degli Abruzzi, 24 - 10129 Torino, ITALY