Neural networks resilient to memory faults
keywords DEEP LEARNING, VIDEO ANALYSIS, DEEP NEURAL NETWORKS, MACHINE LEARNING, ARTIFICIAL NEURAL NETWORKS
Reference persons ENRICO MAGLI
Research Groups CCNE - COMMUNICATIONS AND COMPUTER NETWORKS ENGINEERING, ICT4SS - ICT FOR SMART SOCIETIES, Image Processing Lab (IPL)
Thesis type RESEARCH
Description Deep neural networks (DNNs) are state-of-the-art algorithms for numerous applications such as image classification. They provide high accuracy, which may however come at the expense of very high computational complexity and memory requirements. Recent research on quantized neural networks (QNNs) has offered solutions that reduce this computational burden while preserving excellent accuracy. An open problem remains, however: investigating the robustness of QNNs in safety-critical applications, which require failure-free behavior even when hardware faults occur. The candidate will tackle the problem of developing solutions that guarantee robustness, such as novel loss functions exploiting the inherent deep features, perturbation of activations and weights during training, and more.
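To illustrate the kind of memory fault the thesis targets, the following sketch (an illustrative assumption, not part of the proposal; the helper names `quantize` and `bit_flip` are hypothetical) shows how a single bit flip in an 8-bit quantized weight can drastically change the output of a dot product, the basic operation inside a QNN layer:

```python
def quantize(w, scale=127.0):
    # Map a float weight in [-1, 1] to a signed 8-bit integer.
    return max(-128, min(127, round(w * scale)))

def bit_flip(q, bit):
    # Flip one bit of the two's-complement 8-bit representation,
    # simulating a single-event upset in weight memory.
    u = q & 0xFF          # view as an unsigned byte
    u ^= (1 << bit)       # flip the chosen bit
    return u - 256 if u >= 128 else u  # back to signed

weights = [0.5, -0.25, 0.125]
inputs = [1.0, 1.0, 1.0]

q_weights = [quantize(w) for w in weights]
clean = sum(q * x for q, x in zip(q_weights, inputs))

# Inject a fault: flip the sign bit (bit 7) of the first weight.
faulty = q_weights[:]
faulty[0] = bit_flip(faulty[0], 7)
corrupted = sum(q * x for q, x in zip(faulty, inputs))

print(clean, corrupted)  # a sign-bit flip can even negate a large weight
```

A flip of a high-order bit changes a weight's magnitude or sign, which is why training-time weight perturbation, as mentioned in the description, is one candidate strategy for making the network tolerate such corruptions.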
Required skills Candidate students should have some background in neural networks. Experience with the TensorFlow environment and Python programming is desirable, along with good general programming skills.
Deadline 13/05/2023