PORTALE DELLA DIDATTICA


Development of a tool for Design Space Exploration (DSE) for spiking neural networks.

keywords ARTIFICIAL INTELLIGENCE, DESIGN SPACE EXPLORATION, NEURAL ARCHITECTURE SEARCH, NEURAL NETWORKS, NEUROMORPHIC COMPUTING, SPIKING NEURAL NETWORKS

Reference persons STEFANO DI CARLO, ALESSANDRO SAVINO

External reference persons ALESSIO CARPEGNA

Research Groups DAUIN - GR-24 - SMILIES - reSilient coMputer archItectures and LIfE Sci

Thesis type APPLIED RESEARCH, EXPERIMENTAL RESEARCH

Description Spiking Neural Networks (SNNs) are a particular kind of neural network directly inspired by the biological brain, in which neurons exchange information in the form of short current spikes.

The most common neural network models, such as Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs), are only loosely inspired by biological neural networks: they use mathematical models optimized to solve specific problems, generally the classification of input data, but quite far from the real behaviour of an animal brain. Spiking Neural Networks, on the other hand, start from the mathematical description of a biological neuron and aim to describe the temporal evolution of the neuron's state when it is stimulated with input spikes. A large number of neuron models is available in the literature, ranging from the most biologically plausible, and therefore the most complex, to the fastest and most computationally optimized ones.
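
To make these dynamics concrete, the following is a minimal sketch of a discrete-time Leaky Integrate-and-Fire (LIF) neuron, one of the simplest and most widely used SNN neuron models, written in plain NumPy. All parameter names and values (weight, beta, threshold) are illustrative and not part of the thesis specification.

    import numpy as np

    def lif_neuron(input_spikes, weight=0.5, beta=0.9, threshold=1.0):
        # Simulate a single Leaky Integrate-and-Fire neuron over a spike train.
        # input_spikes: 1-D array of 0/1 values, one per simulation time step.
        # beta: membrane leak factor (closer to 1 means slower decay).
        v = 0.0
        out_spikes, v_trace = [], []
        for s in input_spikes:
            v = beta * v + weight * s            # leak, then integrate the input spike
            spike = 1 if v >= threshold else 0   # fire when the threshold is crossed
            if spike:
                v = 0.0                          # hard reset of the membrane potential
            out_spikes.append(spike)
            v_trace.append(v)
        return np.array(out_spikes), np.array(v_trace)

    # Example: stimulate the neuron with a regular input spike train of 100 steps
    spikes_in = (np.arange(100) % 3 == 0).astype(int)
    spikes_out, membrane = lif_neuron(spikes_in)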

As mentioned above, SNNs work with short spikes, both as input and as output. However, data are generally represented in numerical form, unless they are collected with neuromorphic sensors. This implies the need to encode the input data as spikes and to decode the output spikes back into numbers. Here too, many different encoding and decoding methods are available.
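
As an example of one such scheme, the sketch below implements Poisson rate coding, where each numeric value in [0, 1] becomes a spike train whose firing probability per time step equals that value, and rate decoding, where the predicted class is the output neuron that fired most often. This is only one of many possible schemes (latency, delta and rank-order coding are other common choices), and the function names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def rate_encode(values, n_steps=100):
        # Poisson rate coding: each value in [0, 1] becomes a column of spikes
        # whose firing probability per time step equals the value itself.
        values = np.clip(values, 0.0, 1.0)
        return (rng.random((n_steps, values.size)) < values).astype(int)

    def rate_decode(output_spikes):
        # Rate decoding: the predicted class is the output neuron that emitted
        # the largest number of spikes over the simulation window.
        return int(output_spikes.sum(axis=0).argmax())

    # Example: encode a normalised 28x28 image (784 pixels) into 100 time steps
    pixels = rng.random(784)
    spike_train = rate_encode(pixels)      # shape: (100, 784)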

Finally, the structure of the network itself can be configured in different ways. Assuming a feed-forward fully connected neural network (FF-FC-NN), the number of layers and the number of neurons in each layer can be chosen freely.
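
As a sketch, a single point in this design space could be described by a small configuration object like the one below; the field names are hypothetical placeholders for the parameters the tool will actually expose.

    from dataclasses import dataclass, field

    @dataclass
    class SNNConfig:
        # One candidate point in the design space of a FF-FC SNN.
        hidden_layers: list = field(default_factory=lambda: [128, 64])  # neurons per hidden layer
        beta: float = 0.9          # membrane leak factor of the neuron model
        threshold: float = 1.0     # firing threshold
        n_steps: int = 100         # simulation time steps per input sample

        def total_neurons(self):
            return sum(self.hidden_layers)

    # Two candidate architectures for the same task
    small = SNNConfig(hidden_layers=[64])
    large = SNNConfig(hidden_layers=[256, 128, 64])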

The number of design choices to make when building an SNN is therefore huge. The goal of the thesis is to automate the search for the optimal architecture for a specific problem, in other words, to perform a complete Design Space Exploration (DSE) of the network under user-provided constraints: optimizing a target metric, for example the accuracy, while respecting constraints such as a maximum number of neurons.
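
Below is a minimal sketch of such a constrained exploration, using plain random search as the simplest possible strategy (grid search, evolutionary algorithms or Bayesian optimization, as commonly used in Neural Architecture Search, are natural alternatives). The train_and_evaluate callback is a placeholder for the training and evaluation pipeline that will be developed in the thesis.

    import random

    def random_search_dse(train_and_evaluate, max_neurons=1000, n_trials=50, seed=0):
        # Sample random FF-FC architectures, discard those violating the
        # neuron-count constraint, and keep the most accurate one.
        rng = random.Random(seed)
        best_layers, best_accuracy = None, 0.0
        for _ in range(n_trials):
            n_layers = rng.randint(1, 3)
            layers = [rng.choice([32, 64, 128, 256, 512]) for _ in range(n_layers)]
            if sum(layers) > max_neurons:            # user-provided constraint
                continue
            accuracy = train_and_evaluate(layers)    # placeholder: train the SNN, return accuracy
            if accuracy > best_accuracy:
                best_layers, best_accuracy = layers, accuracy
        return best_layers, best_accuracy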

The work will be incremental:
1. Choice of a reference dataset
2. Choice of a reference neuron model
3. DSE on the network structure, starting from an FF-FC architecture
4. Generalization to different datasets
5. DSE on different types of encoding/decoding on the same dataset
6. DSE on different neuron models (a sketch of the extended design space follows this list)
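
Purely as an illustration of how steps 4-6 enlarge the search space, the candidate configuration could grow from layer sizes alone to a combination of structure, input encoding and neuron model; the option names below are examples, not a fixed list.

    # Hypothetical extended design space: each axis is explored by the same
    # DSE loop, only the candidate configuration becomes richer.
    design_space = {
        "hidden_layers": [[64], [128, 64], [256, 128, 64]],   # step 3
        "encoding":      ["rate", "latency", "delta"],        # step 5
        "neuron_model":  ["lif", "izhikevich", "adex"],       # step 6
    }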

Extra
If the work above is completed in an acceptable time, it would be interesting to also study network structures different from the FF-FC one.

However, the workload is already quite demanding and custom network architectures are a rather new field, so this should be considered an additional, optional extension.

Required skills Mandatory: data analysis, neural networks, machine learning.
Optional: design space exploration; this can be learnt during the thesis work.


Deadline 25/10/2023



