Deep neural networks resilient to memory faults
Keywords DEEP LEARNING, DEEP NEURAL NETWORKS, MACHINE LEARNING, ARTIFICIAL NEURAL NETWORKS
Supervisor ENRICO MAGLI
Research groups CCNE - COMMUNICATIONS AND COMPUTER NETWORKS ENGINEERING, ICT4SS - ICT FOR SMART SOCIETIES, Image Processing Lab (IPL)
Thesis type RESEARCH
Description Deep neural networks (DNNs) are state-of-the-art algorithms for numerous applications such as image classification. They provide high accuracy, which may however come at the expense of very high computational complexity and memory requirements. Recent research in the field of quantized neural networks (QNNs) has offered solutions that reduce the computational burden while preserving excellent accuracy. However, an open problem remains: investigating the robustness of QNNs in safety-critical applications that require failure-free behavior even when hardware faults occur. The candidate will tackle the problem of developing solutions for guaranteeing robustness, such as novel loss functions that exploit the inherent deep features, activation and weight perturbation during training, and more.
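As background for the fault model mentioned above, memory faults in QNN weight storage are commonly simulated as random bit flips in the quantized weight array. The following is a minimal illustrative sketch of such a fault injector in NumPy; the function name, signature, and fault model (uniformly random single-bit flips in int8 weights) are assumptions for illustration, not part of the proposed thesis method.

```python
import numpy as np

def flip_random_bits(weights_int8, fault_rate, rng):
    """Simulate memory faults by flipping a fraction `fault_rate` of the
    bits stored in an int8 weight array (hypothetical fault model).

    Returns a faulty copy; the input array is left untouched.
    """
    # Reinterpret the int8 weights as raw bytes so we can address bits.
    w = weights_int8.view(np.uint8).copy()
    flat = w.reshape(-1)
    n_bits = flat.size * 8
    n_faults = int(n_bits * fault_rate)

    # Pick distinct bit positions across the whole weight memory.
    bit_idx = rng.choice(n_bits, size=n_faults, replace=False)
    byte_pos = bit_idx // 8
    bit_pos = bit_idx % 8

    # XOR each selected bit; bitwise_xor.at handles repeated byte indices
    # correctly (two faults may land in the same byte).
    np.bitwise_xor.at(flat, byte_pos, (1 << bit_pos).astype(np.uint8))
    return w.view(np.int8)
```

Injecting such perturbations into the weights during training (or evaluating accuracy after injection at test time) is one way to study and improve QNN robustness to memory faults.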
Required background Candidate students should have some background in neural networks. Experience with the TensorFlow environment and Python programming is desirable, along with good general programming skills.
Proposal valid until 13/05/2023