PORTALE DELLA DIDATTICA


Optimization of Deep Neural Networks through Innovative Neural Architecture Search Algorithms

keywords ARTIFICIAL INTELLIGENCE, ARTIFICIAL NEURAL NETWORKS, C, DEEP LEARNING, DEEP NEURAL NETWORKS, EMBEDDED SYSTEMS, ENERGY EFFICIENCY, FIRMWARE, LINEAR ALGEBRA, LOW POWER, MACHINE LEARNING, MICROCONTROLLERS, NEURAL NETWORKS, SOFTWARE, SOFTWARE ACCELERATION, TRANSFORMERS

Reference persons DANIELE JAHIER PAGLIARI

External reference persons Alessio Burrello (Politecnico di Torino)

Research Groups DAUIN - GR-06 - ELECTRONIC DESIGN AUTOMATION - EDA

Thesis type EXPERIMENTAL, SOFTWARE DEVELOPMENT

Description Nowadays, Deep Learning represents the go-to approach for solving recognition and prediction problems in a vast spectrum of application domains, including computer vision, time-series analysis, and natural language processing. For many of these tasks, deploying the model at the edge of the IoT provides several benefits with respect to a traditional cloud-centric approach, such as predictable response times and improved privacy. However, executing complex deep neural networks (DNNs) on extreme-edge devices, such as low-power microcontrollers, is complicated by their tight memory and energy constraints. Therefore, bringing "intelligence" to the IoT edge requires efficient architectures that minimize the latency and energy consumption of an inference without sacrificing output quality (e.g., classification accuracy). Finding such architectures manually, by "trial-and-error", is tedious and costly.

Therefore, in this thesis, the candidate will investigate efficient automatic optimization algorithms able to explore a vast search space of possible neural network architectures and find the ones that yield the best accuracy versus complexity trade-off. These methods are often referred to as Neural Architecture Search (NAS) tools. In particular, the candidate will focus on key aspects of a practical NAS tool, such as: i) accurate modeling of the target hardware platform's latency and energy consumption, ii) maximizing search efficiency and minimizing search time, iii) combining NAS with other deployment-oriented neural network optimizations, such as quantization and mixed-precision search.
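
As a purely illustrative sketch of the kind of differentiable NAS formulation that many such tools adopt (not necessarily the specific method to be developed in this thesis), the following Python/PyTorch fragment trains a small supernet in which each convolutional layer chooses among candidate kernel sizes, and the training loss adds a complexity regularizer. The cost here is a simple MAC-count proxy; a practical tool would plug in latency/energy models of the target platform instead. All names and hyperparameters (SearchableConv, TinyCNN, LAMBDA) are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SearchableConv(nn.Module):
    """Supernet layer: softmax-weighted mix of candidate kernel sizes."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.ops = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        # One trainable architecture parameter per candidate operation
        self.theta = nn.Parameter(torch.zeros(len(kernel_sizes)))
        # Relative cost proxy: MACs scale with k^2 for fixed channels/resolution
        self.register_buffer("cost", torch.tensor([float(k * k) for k in kernel_sizes]))

    def forward(self, x):
        probs = F.softmax(self.theta, dim=0)
        return sum(p * op(x) for p, op in zip(probs, self.ops))

    def expected_cost(self):
        # Differentiable expected cost under the current architecture distribution
        return (F.softmax(self.theta, dim=0) * self.cost).sum()

class TinyCNN(nn.Module):
    """Toy supernet used only to illustrate the search loss."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            SearchableConv(3, 16), nn.ReLU(), nn.MaxPool2d(2),
            SearchableConv(16, 32), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.body(x).flatten(1))

    def expected_cost(self):
        return sum(m.expected_cost() for m in self.modules()
                   if isinstance(m, SearchableConv))

# One search step: task loss plus a complexity term weighted by LAMBDA, which
# controls the accuracy-vs-cost trade-off. Real tools typically update weights
# and architecture parameters with separate optimizers and data splits.
LAMBDA = 1e-3
model = TinyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, 3, 32, 32)          # dummy batch stands in for real data
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(images), labels) + LAMBDA * model.expected_cost()
loss.backward()
optimizer.step()

At the end of such a search, each layer would keep only the candidate with the highest architecture weight, and the resulting discretized network would be retrained before deployment.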

The developed tool will be general and applicable across a wide spectrum of applications. In particular, the thesis candidate will evaluate the NAS on four tasks that are relevant for edge AI (image classification, visual wake-word, speech recognition, and anomaly detection), which constitute the MLPerf Tiny standard benchmark suite. In terms of deployment targets, the thesis will consider extreme-edge, ultra-low-power systems, such as RISC-V-based parallel clusters of general-purpose cores (e.g., GreenWaves' GAP8 and GAP9).

Interested candidates must send an email to daniele.jahier@polito.it, attaching their CV and a transcript of their exams with grades.

Required skills Required skills include C and Python programming, together with basic knowledge of computer architectures and embedded systems, as well as some familiarity with fundamental machine/deep learning concepts and the corresponding models.

Notes Thesis in collaboration with Prof. Luca Benini’s research group at the University of Bologna and ETH Zurich.


Deadline 31/12/2022