

Ensemble learning and optimization techniques for deep neural networks

Keywords DEEP LEARNING, OPTIMIZATION ALGORITHMS, STATISTICAL PHYSICS

Reference persons LIA MORRA

Research Groups DAUIN - GR-09 - GRAphics and INtelligent Systems - GRAINS

Thesis type RESEARCH THESIS

Description When training deep networks, different protocols for training the network parameters often produce network instances with more or less different generalization properties. Recent studies show that, through statistical techniques, it is possible to derive, from an ensemble of networks with different generalization characteristics, an effective network whose generalization is better than that of the networks in the starting ensemble. A second aspect connected with ensemble learning is a training technique, known in the literature as Entropic Stochastic Gradient Descent [ESGD] (https://arxiv.org/abs/2006.07897), which trains an ensemble of networks constrained to keep a certain relative distance from each other in the parameter space of the network. Preliminary studies show that this technique allows the network ensemble to avoid poor local minima and to find regions of the solution space that are optimal in terms of robustness and generalization capacity. The aim of this thesis is to apply these techniques to challenging deep neural networks in the medical domain, specifically high-resolution digital mammography, analyzing their properties and scalability.
The activities will focus on: i) “static” statistical analysis of network ensembles (stacking, weighted ensemble averaging); ii) analysis of ESGD; and iii) estimation of the uncertainty of complex (multi-stream) deep neural networks. Minimal, illustrative sketches of these ideas are given below.
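
As a minimal illustration of the "effective network" idea described above (a sketch of ours, not code from the referenced studies), the following PyTorch snippet builds a single network by weighted averaging of the parameters of several trained instances of the same architecture; the weights, for example derived from validation performance, are assumed to be non-negative and to sum to one.

# Minimal sketch (illustrative only): derive an "effective" network by weighted
# averaging of the parameters of an ensemble of trained instances of the same
# architecture. `weights` are assumed to be non-negative and to sum to 1.
import copy
import torch

def weighted_parameter_average(models, weights):
    avg_model = copy.deepcopy(models[0])
    avg_state = avg_model.state_dict()
    states = [m.state_dict() for m in models]
    with torch.no_grad():
        for name, tensor in avg_state.items():
            if tensor.is_floating_point():
                # Weighted average of the corresponding parameter/buffer.
                avg_state[name] = sum(w * s[name] for w, s in zip(weights, states))
    avg_model.load_state_dict(avg_state)
    return avg_model

A weighted average of the members' predictions (rather than of their parameters) is the analogous "static" combination, while stacking instead trains a meta-model on the members' outputs.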
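
The next sketch is a simplified, hypothetical rendering of the coupled-replica idea behind the entropic scheme cited above (the function name, update rule, and coupling term are our assumptions, not the reference implementation): each replica follows its own loss gradient plus an elastic force toward the replicas' center, which keeps the ensemble members at a controlled relative distance in parameter space.

# Simplified, hypothetical sketch of a coupled-replica ("entropic") SGD step:
# each replica takes a gradient step plus an elastic pull toward the center of
# the replica ensemble, controlling their relative distance in parameter space.
import torch

def replicated_sgd_step(replicas, loss_fn, x, y, lr=0.01, gamma=0.1):
    # Center of the replicas, computed parameter by parameter.
    with torch.no_grad():
        grouped = zip(*[list(r.parameters()) for r in replicas])
        centers = [torch.stack([p.detach() for p in group]).mean(dim=0)
                   for group in grouped]
    for replica in replicas:
        replica.zero_grad()
        loss = loss_fn(replica(x), y)
        loss.backward()
        with torch.no_grad():
            for p, c in zip(replica.parameters(), centers):
                # Plain gradient descent plus elastic attraction toward the center.
                p -= lr * (p.grad + gamma * (p - c))

In the actual entropic formulations the coupling typically acts through a local-entropy objective and the center itself evolves during training; the sketch only conveys the coupling mechanism.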
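
For activity iii), a common ensemble-based uncertainty estimate (a generic sketch, not specific to the multi-stream mammography models of the thesis) averages the softmax outputs of the ensemble members and uses the entropy of the averaged distribution as a per-sample uncertainty score.

# Minimal sketch: ensemble-based predictive uncertainty for a classifier.
import torch
import torch.nn.functional as F

def ensemble_predictive_entropy(models, x):
    with torch.no_grad():
        # Stack softmax outputs of all members: shape [members, batch, classes].
        probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])
        mean_probs = probs.mean(dim=0)  # ensemble prediction
        # Entropy of the averaged distribution: high when members disagree
        # or are individually uncertain.
        entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)
    return mean_probs, entropy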

Required skills Programming skills; experience with PyTorch or another deep learning framework; good analytical and mathematical skills.


Deadline 16/10/2022



