01TVOMV, 01TVONG, 01TVOPF, 01TVOQW

Academic Year 2020/21

Course Language

English

Course degree

Master of science-level of the Bologna process in Ingegneria Biomedica - Torino

Master of science-level of the Bologna process in Ingegneria Matematica - Torino

Master of science-level of the Bologna process in Physics Of Complex Systems (Fisica Dei Sistemi Complessi) - Torino/Trieste/Parigi

Master of science-level of the Bologna process in Mechatronic Engineering (Ingegneria Meccatronica) - Torino

Course structure

Teaching | Hours
---|---
Lectures | 20
Classroom exercises | 40
Tutoring | 39

Teachers

Teacher | Status | SSD | h.Les | h.Ex | h.Lab | h.Tut | Years teaching
---|---|---|---|---|---|---|---
Calafiore Giuseppe Carlo | Full Professor | ING-INF/04 | 20 | 40 | 0 | 0 | 3

Teaching assistant

Context

SSD | CFU | Activities | Area context
---|---|---|---
ING-INF/04 | 6 | D - Student's choice | Student's choice

Machine Learning (ML) encompasses a variety of methodologies and computational algorithms, mainly grounded in Bayesian statistics, for extracting information, clustering, detecting patterns, making decisions and predictions or, more generally, understanding phenomena from available data. Classical learning models created in the 1970s, such as Neural Networks, as well as later techniques such as Support Vector Machines (SVM), are witnessing a resurgence in both theory and applications in the present era of Big Data, where the deluge of unstructured information calls for automated and highly efficient methods of data analysis. Contemporary Machine Learning, in turn, constitutes an essential part of Data Science, an interdisciplinary field for which industry has a global excess demand for experts.
In this course, we present the main tools for supervised learning (regression, regularization, classification) and unsupervised learning (clustering, dimensionality reduction), with a focus on the structure and features of the optimization algorithms needed to numerically solve the learning problems of interest. The course is structured into lectures in class, in which the context and methodologies are explained, and computer lab sessions, in which the students apply the methodologies to real-world data sets and problems coming from various fields, such as finance, business analytics, news, biology, medical diagnosis, etc.

The student will acquire knowledge of the basic tools used in machine learning, and an introductory insight into the functioning of the optimization algorithms that form the inner computational “engine” of these tools. The student will gain some experience in visualizing and analyzing labeled and unlabeled high-dimensional data sets and in extracting useful information from them. Complementing a student’s background in statistics, optimization or data mining, this course will help form the skills of a Junior Data Scientist.

A good knowledge of linear algebra, geometry, and calculus, together with some exposure to probability and statistics, is required. A previous course on numerical computing, optimization, or operations research is recommended but not strictly required.

Introduction to Machine Learning. Supervised and unsupervised learning. Parametric and nonparametric models. Classical examples in pattern analysis (e.g., handwriting recognition). A brief historical perspective.
Review of probability theory and statistics. Marginal and conditional distributions. Bayes theorem. Prior, likelihood, posterior. Bayesian inference.
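As a quick numerical illustration of the prior–likelihood–posterior mechanics, here is a toy Bayes-theorem computation (the diagnostic-test figures below are invented for illustration, not taken from the course material):

```python
# Bayes' theorem on a toy diagnostic test (all numbers are illustrative):
#   P(D | +) = P(+ | D) P(D) / P(+)
p_d = 0.01          # prior: disease prevalence
p_pos_d = 0.95      # likelihood: test sensitivity, P(+ | D)
p_pos_nd = 0.05     # false-positive rate, P(+ | not D)

# marginal P(+) by the law of total probability
p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)

posterior = p_pos_d * p_d / p_pos
# posterior ≈ 0.16: with a rare disease, most positive tests are false positives
```

Note how a small prior dominates an accurate likelihood, which is the typical take-away of this kind of exercise.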
Regression problems. Over-fitting. Bias-variance tradeoff.
Regularized regression. Linear regression with sparsity-inducing penalties. Ridge regression. The Lasso. The Elastic-Net. Applications (e.g., in image analysis and in computational finance).
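A minimal sketch of the shrinkage effect of ridge regression, using a single scalar weight and toy data (invented here; a real problem has a weight vector and a matrix closed form):

```python
# Scalar ridge regression: minimize sum_i (y_i - w x_i)^2 + lam * w^2.
# Setting the derivative to zero gives w = (sum x_i y_i) / (sum x_i^2 + lam).
xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]
lam = 1.0

sxy = sum(x * y for x, y in zip(xs, ys))   # 28.5
sxx = sum(x * x for x in xs)               # 14.0

w_ols = sxy / sxx            # ordinary least squares (lam = 0)
w_ridge = sxy / (sxx + lam)  # ridge shrinks the estimate toward zero
```

The larger `lam` is, the stronger the shrinkage; the Lasso replaces `w^2` with `|w|`, which can drive weights exactly to zero.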
Logistic regression. Sparse logistic regression and applications (e.g., to text categorization).
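A toy sketch, with invented data, of fitting a one-feature logistic model by gradient ascent on the log-likelihood:

```python
import math

# Logistic model p(y=1 | x) = sigmoid(w*x + b), fitted by gradient
# ascent on the log-likelihood; data below are invented toy values.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    # gradient of the log-likelihood: sum_i (y_i - p_i) * x_i  (and * 1 for b)
    gw = sum((y - sigmoid(w * x + b)) * x for x, y in zip(xs, ys))
    gb = sum((y - sigmoid(w * x + b)) for x, y in zip(xs, ys))
    w, b = w + lr * gw, b + lr * gb
```

A sparse variant adds an ℓ1 penalty on `w`, solved with the proximal methods listed below.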
Algorithms for large-scale regularized regression:
First-order methods.
Proximal methods.
The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA).
Coordinate descent and block-coordinate descent methods.
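As a sketch of how these proximal methods operate, here is plain ISTA (proximal gradient; FISTA adds a momentum step on top of this) on a one-variable lasso with invented toy data:

```python
# Proximal gradient (ISTA) for a one-variable lasso:
#   minimize 0.5 * sum_i (y_i - w x_i)^2 + lam * |w|
# The prox of t*lam*|.| is soft-thresholding by t*lam.
def soft_threshold(v, tau):
    if v > tau:
        return v - tau
    if v < -tau:
        return v + tau
    return 0.0

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.1, 5.9]
lam = 2.0

L = sum(x * x for x in xs)   # Lipschitz constant of the smooth part's gradient
t = 1.0 / L                  # step size
w = 0.0
for _ in range(200):
    grad = -sum(x * (y - w * x) for x, y in zip(xs, ys))  # gradient step
    w = soft_threshold(w - t * grad, t * lam)             # proximal step
```

Each iteration alternates a gradient step on the smooth loss with the soft-thresholding prox of the ℓ1 term; FISTA accelerates this scheme by extrapolating between successive iterates.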
Classifiers. Neural Networks. Training and the back-propagation algorithm.
Maximum margin classifiers. Dual representation.
Kernel methods and the Support Vector Machine (SVM).
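A small sketch of the kernel idea: the Gaussian (RBF) kernel, commonly paired with the SVM, and the Gram matrix that a dual SVM solver operates on (the points below are arbitrary):

```python
import math

# Gaussian (RBF) kernel k(x, z) = exp(-||x - z||^2 / (2 sigma^2)).
def rbf(x, z, sigma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-d2 / (2 * sigma ** 2))

# Gram matrix K with K[i][j] = k(p_i, p_j): symmetric, ones on the diagonal.
def gram(points, sigma=1.0):
    return [[rbf(p, q, sigma) for q in points] for p in points]

K = gram([(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)])
```

The dual SVM never needs the (possibly infinite-dimensional) feature map explicitly; it only evaluates such kernel entries.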
Clustering. K-means clustering. Gaussian mixtures and the Expectation-Maximization (EM) algorithm.
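A compact sketch of Lloyd's algorithm for k-means on invented 1-D data:

```python
# Lloyd's algorithm (k-means) on 1-D toy data: alternate assignment
# of points to the nearest center with recomputation of cluster means.
def kmeans_1d(data, centers, iters=20):
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        clusters = [[] for _ in centers]
        for x in data:
            j = min(range(len(centers)), key=lambda j: (x - centers[j]) ** 2)
            clusters[j].append(x)
        # update step: each center moves to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
centers = kmeans_1d(data, [0.0, 5.0])
```

EM for Gaussian mixtures generalizes this scheme by replacing the hard assignment with posterior responsibilities.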
Singular value decomposition and the Principal Component Analysis (PCA). Interpretability and the Sparse-PCA. Fast algorithms for Sparse-PCA.
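A minimal sketch of extracting the first principal component by power iteration on the 2x2 sample covariance matrix (data invented; production PCA implementations use the SVD directly):

```python
import math

# First principal component of 2-D points via power iteration on the
# sample covariance matrix (covariance normalized by n for simplicity).
def top_pc(points, iters=100):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        # multiply by the covariance matrix, then renormalize
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = math.hypot(*w)
        v = (w[0] / norm, w[1] / norm)
    return v

# points lying on the line y = x: the top PC is (1, 1) / sqrt(2)
v = top_pc([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)])
```

Sparse-PCA modifies this eigenvector problem with a cardinality or ℓ1 constraint to make the loadings interpretable.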

The course is organized in a series of lectures (about 1/3 of the course) and computer lab exercises and practice sessions (about 2/3 of the course).

Course slides, handouts, and lab practice sheets will be made available to the students via the PoliTo Web portal. Useful reference textbooks include:
C.M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
J. Friedman, T. Hastie and R. Tibshirani, The Elements of Statistical Learning, Springer, 2009.
G.C. Calafiore and L. El Ghaoui, Optimization Models, Cambridge Univ. Press, 2014.

The final exam consists of a written test in multiple-choice format, containing a mixture of methodological questions and numerical exercises. Exercises are to be solved with pen and paper; use of a calculator is allowed, and if a question explicitly requires Matlab, Matlab may be used via vLAIB. Use of didactic material (books, notes, etc.) is NOT allowed. The exam contains around 11 questions and must be completed in 1 hour.

© Politecnico di Torino

Corso Duca degli Abruzzi, 24 - 10129 Torino, ITALY
