PORTALE DELLA DIDATTICA



Optimization of a hardware accelerator for Spiking Neural Networks for low power applications at the edge.

keywords EDGE COMPUTING, EVENT-DRIVEN, FPGA ACCELERATION, HARDWARE ACCELERATORS, LOW POWER, NEURAL NETWORKS, NEUROMORPHIC ACCELERATORS, NEUROMORPHIC COMPUTING, OPTIMIZATION, SPIKING NEURAL NETWORKS

Reference persons STEFANO DI CARLO, ALESSANDRO SAVINO

External reference persons CARPEGNA ALESSIO

Research Groups DAUIN - GR-24 - SMILIES - reSilient coMputer archItectures and LIfE Sci

Thesis type APPLIED, APPLIED RESEARCH, EXPERIMENTAL RESEARCH, HARDWARE DESIGN

Description The cloud-computing paradigm, in which a high-performance central processing system handles data collected by many small distributed devices, has several drawbacks: high power consumption, unpredictable communication latency, privacy violations caused by transmitting sensitive data to a remote machine, and so on.

A possible solution is to move the data processing, or part of it, directly onto the distributed devices. This is called edge computing, indicating the relocation of the computation towards the edge of the system, whose center is represented by the high-performance servers.

The main challenge with this kind of solution is the much smaller amount of resources available on edge devices, generally microcontrollers and mobile devices. The design of dedicated hardware accelerators, or co-processors, able to perform a specific task in a highly optimized way and to relieve the CPU or MCU of some of the computational load, can help in such a situation.

Neural networks are a perfect field of application for this kind of accelerator. They are generally highly parallelizable, complex computational models, poorly suited to execution on CPUs or MCUs, where parallelism is limited to a few cores. Hardware acceleration can therefore make a real difference in terms of performance and power consumption.

Spiking Neural Networks (SNNs) are a particular kind of neural network, directly inspired by the biological brain, in which neurons exchange information in the form of short current spikes. With an appropriate design they are particularly well suited to implementation on dedicated hardware circuits, with lower power consumption and area occupation than other neural networks.
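As a reference for the kind of neuron behaviour involved, below is a minimal Python sketch of a Leaky Integrate-and-Fire (LIF) update step, the model most commonly used in SNN hardware. The specific neuron model is not stated in this description, and all parameter values here are purely illustrative:

```python
# Minimal sketch of a Leaky Integrate-and-Fire (LIF) neuron update step.
# The model choice and all parameter values are illustrative assumptions.

def lif_step(v, spikes_in, weights, leak=0.9, v_thresh=1.0, v_reset=0.0):
    """Advance the membrane potential by one time step.

    v         : current membrane potential
    spikes_in : list of 0/1 input spikes, one per synapse
    weights   : synaptic weight per input
    Returns (new_potential, output_spike).
    """
    # Leak the stored potential, then integrate the weighted input spikes.
    v = leak * v + sum(w * s for w, s in zip(weights, spikes_in))
    if v >= v_thresh:   # fire and reset when the threshold is crossed
        return v_reset, 1
    return v, 0
```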

The goal of the thesis is to modify an existing hardware accelerator for Spiking Neural Networks so that it follows an event-driven approach, in which a neuron's state is updated only when it receives at least one input spike. If no input stimuli are present, the neuron does nothing, minimizing the power consumption. The reference accelerator, on the other hand, uses a clock-driven update, in which the neuron's state is recomputed at every clock cycle regardless of the input activity. This solution is computationally simpler but consumes much more power.
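To illustrate the difference between the two update policies, here is a small behavioural sketch in Python. This is not the accelerator's actual implementation: the LIF model, the lazy application of the leak between events, and all parameter values are assumptions. The number of state updates is counted as a rough proxy for switching activity, and hence dynamic power, in hardware:

```python
# Behavioural comparison of clock-driven vs event-driven updates for a
# single LIF neuron fed by a binary spike train (illustrative sketch).

def clock_driven(spike_train, leak=0.9, v_thresh=1.0):
    """Update the neuron state at every clock cycle, spike or not."""
    v, updates, out = 0.0, 0, []
    for s in spike_train:
        v = leak * v + s          # state is recomputed every cycle
        updates += 1
        if v >= v_thresh:
            out.append(1)
            v = 0.0               # fire and reset
        else:
            out.append(0)
    return out, updates

def event_driven(spike_train, leak=0.9, v_thresh=1.0):
    """Update the neuron state only when an input spike arrives."""
    v, updates, out, last_t = 0.0, 0, [], 0
    for t, s in enumerate(spike_train):
        if s == 0:                # idle cycle: no computation at all
            out.append(0)
            continue
        # Apply the leak accumulated since the last event in one step.
        v = v * leak ** (t - last_t) + s
        updates += 1
        last_t = t
        if v >= v_thresh:
            out.append(1)
            v = 0.0               # fire and reset
        else:
            out.append(0)
    return out, updates
```

On a sparse input train the event-driven version produces the same output spikes while performing far fewer state updates; quantifying this trade-off in actual hardware, against the cost of the extra control logic, is what the thesis aims at.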

The expected result is a complete comparison between the reference accelerator and the event-driven one, evaluating both the hardware overhead introduced by the event-driven solution, which is more complex than the clock-driven one, and the power savings, across different architectures and different input datasets.

Extra
If possible, it would also be interesting to design a hybrid event-/clock-driven solution. Given the higher complexity of the event-driven approach, the optimal choice is likely to depend on the application and on the chosen architecture.

Required skills Hardware design techniques, VHDL.


Deadline 25/10/2023




© Politecnico di Torino
Corso Duca degli Abruzzi, 24 - 10129 Torino, ITALY