PORTALE DELLA DIDATTICA


Development of a tool for the optimization of Spiking Neural Networks (SNNs) for the design of dedicated hardware accelerators on FPGA platforms, to be applied in IoT and edge-computing environments.

keywords ARTIFICIAL INTELLIGENCE, EDGE COMPUTING, FPGA ACCELERATION, HARDWARE ACCELERATORS, NEURAL NETWORKS, NEUROMORPHIC ACCELERATORS, NEUROMORPHIC COMPUTING, OPTIMIZATION, SPIKING NEURAL NETWORKS

Reference persons STEFANO DI CARLO, ALESSANDRO SAVINO

External reference persons CARPEGNA ALESSIO

Research Groups DAUIN - GR-24 - SMILIES - reSilient coMputer archItectures and LIfE Sci

Thesis type APPLIED RESEARCH, EXPERIMENTAL RESEARCH

Description The cloud-computing paradigm, in which a high-performance central processing system handles data collected by many small, distributed devices, has several drawbacks: high power consumption, unpredictable communication latency, privacy violations caused by transmitting sensitive data to a remote machine, and so on.

A possible solution is to move the data processing, or part of it, directly onto the distributed devices. This is called edge computing, since the computation is relocated towards the edge of the system, whose center is represented by the high-performance servers.

The main challenge with this kind of solution is the much smaller amount of resources available on edge devices, generally microcontrollers and mobile devices. The design of dedicated hardware accelerators, or co-processors, able to perform a specific task in a highly optimized way and to unburden the CPU or MCU of part of the computational load, can help in such a situation.

Neural networks are a perfect field of application for this kind of accelerator. They are generally highly parallelizable, complex computational models, not well suited to execution on CPUs or MCUs, where parallelism is limited to a few cores. Hardware acceleration can therefore make the difference in terms of performance and power consumption.

SNNs are a particular kind of neural network, directly inspired by the biological brain, in which neurons exchange information in the form of short current spikes. With an appropriate design they are particularly well suited to implementation on dedicated hardware circuits, with lower power consumption and area occupation than other neural networks.
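As an illustration of the spiking behaviour described above, the following Python sketch simulates a single leaky integrate-and-fire (LIF) neuron driven by binary input spikes. The LIF model, the parameter values, and the function names are illustrative assumptions, not choices made by the thesis.

import numpy as np

# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, one common
# SNN neuron model (an assumption here; the thesis does not fix the model).
def lif_step(v, in_spikes, weights, leak=0.95, threshold=1.0):
    """Advance the membrane potential of one neuron by one time step."""
    v = leak * v + np.dot(weights, in_spikes)  # leak + integrate weighted input spikes
    if v >= threshold:                         # fire when the threshold is crossed
        return 0.0, 1                          # reset the potential, emit a spike
    return v, 0                                # otherwise keep accumulating

# Usage example: drive the neuron with a random binary spike train.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=8)
v, out_spikes = 0.0, []
for _ in range(20):
    v, spike = lif_step(v, rng.integers(0, 2, size=8), weights)
    out_spikes.append(spike)
print(out_spikes)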

The goal of the thesis is to automate the design of a generic Spiking Neural Network to be implemented on a dedicated FPGA (Field Programmable Gate Array) hardware circuit. The work will require exploring different techniques to optimize the execution of the network on a hardware accelerator, for example quantization of the network's weights, quantization of the neurons' internal parameters, use of approximate arithmetic operators, pruning, etc., applying the different techniques both before and after training. Given an SNN model and a list of constraints related to the target hardware platform, the goal is to find the optimal configuration that minimizes power, area, and execution time, following the user's requests, with the minimum impact on the performance of the network, for example on its accuracy. The expected result is a list of configurations to be used for the subsequent development of a hardware accelerator.
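To make one of the listed techniques concrete, the following Python sketch applies uniform post-training weight quantization at several bit widths and reports the resulting error. The function names, the error metric, and the random weights are illustrative assumptions, standing in for a trained SNN and its accuracy evaluation.

import numpy as np

# Hypothetical post-training quantization sketch: map floating-point weights
# to a signed fixed-point grid of a given bit width and measure the error.
def quantize_weights(weights, n_bits):
    """Uniform symmetric quantization of a weight matrix to n_bits."""
    q_levels = 2 ** (n_bits - 1) - 1                       # largest positive code
    scale = np.max(np.abs(weights)) / q_levels             # step size of the grid
    q = np.clip(np.round(weights / scale), -q_levels, q_levels)
    return q * scale, scale

# Toy design-space exploration: sweep bit widths and report the error,
# standing in for the accuracy check on the real SNN.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, size=(128, 64))
for bits in (8, 6, 4, 2):
    w_q, _ = quantize_weights(w, bits)
    err = np.mean(np.abs(w - w_q))
    print(f"{bits}-bit weights: mean abs error = {err:.4f}")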

Required skills Hardware design techniques, Python programming, optimization techniques for neural networks.

Possessing these skills is not mandatory; they can be learned during the thesis work.


Deadline 29/02/2024



