Design of a hardware module for online learning on Spiking Neural Networks, with partial reconfiguration on FPGA.
keywords ARTIFICIAL INTELLIGENCE, EDGE COMPUTING, FPGA ACCELERATION, HARDWARE ACCELERATORS, NEURAL NETWORKS, NEUROMORPHIC ACCELERATORS, NEUROMORPHIC COMPUTING, ONLINE LEARNING, PARTIAL RECONFIGURATION, SPIKING NEURAL NETWORKS
External reference persons Alessio Carpegna
Thesis type APPLIED RESEARCH, EXPERIMENTAL, RESEARCH
Description Spiking Neural Networks (SNNs) are a particular kind of neural network, directly inspired by the biological brain, in which neurons exchange information in the form of short current spikes. With an appropriate design they are particularly suitable for implementation on dedicated hardware circuits, with lower power consumption and area occupation than other neural networks.
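To make the spike-based behavior concrete, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking neuron models; all parameter names and values here are illustrative choices, not part of the accelerator described in this proposal:

```python
# Minimal leaky integrate-and-fire (LIF) neuron (illustrative sketch).
# The membrane potential leaks toward rest, integrates the input current,
# and emits a spike (then resets) when it crosses a threshold.

def lif_step(v, input_current, leak=0.9, v_thresh=1.0, v_reset=0.0):
    """One discrete time step: leak, integrate, fire. Returns (v, spike)."""
    v = leak * v + input_current      # leaky integration
    if v >= v_thresh:                 # threshold crossed: emit a spike
        return v_reset, 1
    return v, 0

# Drive the neuron with a constant input current and collect output spikes.
v, spikes = 0.0, []
for _ in range(20):
    v, s = lif_step(v, 0.3)
    spikes.append(s)
print(sum(spikes))  # total spikes emitted over 20 time steps
```

The discrete-time formulation (one multiply, one add, one compare per step) is what makes such models attractive for compact hardware implementations.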
One of the advantages of SNNs is the capability of learning while in use, in a way similar to what is observed in biological brains. Such an approach, called online learning, is very different from the standard one, in which the network is trained offline and then used to perform inference in the field.
The advantages are many: first, training is performed directly on the hardware circuit, which is optimized for executing the network and therefore much faster than general-purpose hardware. Second, SNNs can be used in continuous-learning applications, where the network evolves, learns, and adapts to external stimuli during its operation. Such an approach would avoid the need to collect and manually label the huge amount of data required to train a machine learning model, allowing the network to learn in an unsupervised way through direct experience.
The goal of the thesis is to design a hardware module for online learning, targeting an already existing accelerator for SNNs. First, the work will consist in finding a training method for SNNs suitable for hardware implementation (e.g., Spike-Timing-Dependent Plasticity); the student will then be required to design a module implementing this method and to integrate it alongside the accelerator. Finally, one of the goals is to exploit the programmability of the FPGA to perform partial reconfiguration, removing the module when it is not needed in order to free resources. In other words, if the application does not require continuous learning, the learning module can be unloaded after the network has been trained on the accelerator.
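As a reference for the kind of learning rule mentioned above, Spike-Timing-Dependent Plasticity adjusts a synaptic weight according to the relative timing of pre- and post-synaptic spikes: the synapse is strengthened when the pre-synaptic spike precedes the post-synaptic one, and weakened otherwise. A minimal pair-based sketch in Python follows; the parameter values are illustrative, not those of any specific hardware design:

```python
import math

# Pair-based STDP rule (illustrative sketch; parameters are arbitrary).
# dt = t_post - t_pre: if the pre-synaptic spike precedes the
# post-synaptic one (dt > 0) the synapse is potentiated,
# otherwise it is depressed.

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike pair separated by dt (in ms)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # long-term potentiation
    return -a_minus * math.exp(dt / tau)      # long-term depression

w = 0.5
w += stdp_delta_w(5.0)    # pre before post: weight grows
w += stdp_delta_w(-5.0)   # post before pre: weight shrinks
```

The exponential windows are often approximated in hardware (e.g., with shift-based decays or lookup tables), which is precisely the kind of trade-off the thesis would explore.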
Generalization of the learning method and development of an automatic hardware configuration framework.
Many available methods are suitable for the described application; they are generally unsupervised and biologically inspired. Once the module is completed, it would be interesting to repeat the process with different types of training, compare them, and let the user choose the preferred one through a software interface, developed for example in Python.
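Such a software interface could, for instance, map each supported learning rule to the partial bitstream that implements it. The sketch below is purely hypothetical: the function and file names are assumptions for illustration, not an existing API:

```python
# Hypothetical interface for selecting which learning module to load on
# the accelerator via partial reconfiguration. All names (rules, file
# paths, function) are illustrative assumptions, not an existing API.

LEARNING_RULES = {
    "stdp": "bitstreams/stdp_module.bit",
    "none": None,  # inference only: learning region left unconfigured
}

def configure_accelerator(rule):
    """Return the partial bitstream path for the chosen learning rule."""
    if rule not in LEARNING_RULES:
        raise ValueError(f"unknown learning rule: {rule}")
    return LEARNING_RULES[rule]
```

Keeping the rule-to-bitstream mapping in one table would make it easy to add new training methods as they are implemented and compared.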
Required skills Hardware design techniques, VHDL.
Deadline 08/11/2023