Simultaneous Training and Hardware Optimization of Deep Neural Networks
Keywords: FPGA, ACCELERATION
Supervisor: MARIO ROBERTO CASU
Research group: VLSILAB (VLSI theory, design and applications)
Thesis type: DESIGN AND EXPERIMENTS
Description: Hardware accelerators for Machine Learning (ML) models, including Deep Neural Networks (DNNs), are essential to drive the future of Artificial Intelligence in embedded devices. Although several recent works have addressed the co-optimization of hardware performance and network training, most of them consider either a fixed network or a given hardware architecture. In this thesis, the student will work on a new framework for the joint optimization of network architecture and hardware configuration, based on Bayesian Optimization (BO) on top of High-Level Synthesis (HLS). The framework's multi-objective formulation supports the definition of several hardware and network performance goals as well as multiple constraints, and multi-objective BO directly yields a set of Pareto-optimal design points. The student will evaluate the design methodology on a DNN optimized for an FPGA target. The goal is to show that the Pareto set obtained by the proposed joint-optimization approach outperforms those produced by separate optimization or by random search.
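As a rough illustration of the joint design space and the Pareto selection the proposal refers to, the Python sketch below samples network and HLS hardware knobs together and keeps only the non-dominated (accuracy, latency) points. All knob names and objective models are hypothetical placeholders, and plain random sampling stands in for the BO acquisition loop; in the actual framework, accuracy would come from network training and latency/area from HLS reports.

import random

# Hypothetical joint design space: network hyperparameters and HLS
# hardware knobs are sampled together (all names are illustrative).
def sample_config(rng):
    return {
        "conv_filters": rng.choice([16, 32, 64]),   # network knob
        "quant_bits":   rng.choice([4, 8, 16]),     # network knob
        "pe_count":     rng.choice([4, 8, 16, 32]), # hardware knob
        "unroll":       rng.choice([1, 2, 4]),      # HLS pragma knob
    }

# Placeholder objective models standing in for real measurements
# (training accuracy and post-HLS latency in the real flow).
def evaluate(cfg):
    accuracy = (0.70 + 0.02 * (cfg["quant_bits"] / 4)
                     + 0.01 * (cfg["conv_filters"] / 16))
    latency = (cfg["conv_filters"] * cfg["quant_bits"]) / (
        cfg["pe_count"] * cfg["unroll"])
    return accuracy, latency  # maximize accuracy, minimize latency

def pareto_front(points):
    # Keep points not dominated by any other point, i.e. no other
    # point has both higher-or-equal accuracy and lower-or-equal
    # latency with at least one strict improvement.
    front = []
    for i, (acc_i, lat_i) in enumerate(points):
        dominated = any(
            acc_j >= acc_i and lat_j <= lat_i
            and (acc_j > acc_i or lat_j < lat_i)
            for j, (acc_j, lat_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append((acc_i, lat_i))
    return front

rng = random.Random(0)
evals = [evaluate(sample_config(rng)) for _ in range(50)]
for acc, lat in sorted(pareto_front(evals)):
    print(f"accuracy={acc:.3f}  latency={lat:.1f}")

In the thesis framework, the random sampler would be replaced by a multi-objective BO loop that fits surrogate models to the observed objectives and proposes the next joint (network, hardware) configuration to train and synthesize.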
Required skills: Hardware design
Proposal validity deadline: 09/02/2024