Self-supervised Learning for Wearable-based Activity Recognition
External thesis at a company / Thesis abroad
Keywords DEEP LEARNING, LOW POWER, BIOSIGNALS, ENERGY EFFICIENCY, ARTIFICIAL INTELLIGENCE, MICROCONTROLLERS, PPG, CONVOLUTIONAL NEURAL NETWORKS, DEEP NEURAL NETWORKS, BIOMETRIC RECOGNITION, EMBEDDED SYSTEMS, WEARABLE SYSTEMS
Supervisor DANIELE JAHIER PAGLIARI
External supervisors Kasnesis Panagiotis (University of West Attica)
Alessio Burrello (Politecnico di Torino)
Research groups DAUIN - GR-06 - ELECTRONIC DESIGN AUTOMATION - EDA
Thesis type EXPERIMENTAL, SW DEVELOPMENT
Description Wearable human activity recognition (HAR) can be used to enhance well-being and health status, facilitate smart environments, and improve physical security in public spaces. In contrast to other HAR methods relying on sensors that raise privacy concerns (e.g., cameras), wearable activity monitoring is unobtrusive. Moreover, like computer vision, natural language processing, and speech recognition, wearable HAR has not remained unaffected by the rise of deep learning (DL). DL algorithms such as Convolutional Neural Networks (ConvNets or CNNs) have proven capable of automatically extracting features from nearly raw motion signals, or even fusing them with photoplethysmography (PPG) signals; these high-level features are then fed to fully connected (FC) layers or to Recurrent Neural Networks (RNNs) enhanced with the Long Short-Term Memory (LSTM) mechanism, which fuse the multimodal features and classify the incoming sensor channels into an activity.
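To make the pipeline above concrete, the following is a minimal NumPy sketch of per-modality convolutional feature extraction followed by late fusion and a classification head. All shapes (2-second windows at 50 Hz, 8 filters per branch, 5 activity classes) and the random weights are illustrative assumptions, not part of the proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """Valid 1D convolution + ReLU: x is (channels, time), kernels is (n_filters, channels, k)."""
    n_f, _, k = kernels.shape
    t_out = x.shape[1] - k + 1
    out = np.empty((n_f, t_out))
    for f in range(n_f):
        for t in range(t_out):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + k])
    return np.maximum(out, 0.0)

# Hypothetical 2-second windows at 50 Hz: 3-axis accelerometer and 1-channel PPG
acc = rng.standard_normal((3, 100))
ppg = rng.standard_normal((1, 100))

# Separate convolutional branches per modality (weights are random placeholders,
# not trained parameters)
acc_feat = conv1d_relu(acc, rng.standard_normal((8, 3, 5))).mean(axis=1)  # global avg pool -> (8,)
ppg_feat = conv1d_relu(ppg, rng.standard_normal((8, 1, 5))).mean(axis=1)  # (8,)

# Late fusion: concatenate the high-level multimodal features, then an FC head
fused = np.concatenate([acc_feat, ppg_feat])   # (16,)
logits = rng.standard_normal((5, 16)) @ fused  # 5 hypothetical activity classes
probs = np.exp(logits - logits.max())
probs /= probs.sum()                           # softmax over activities
predicted_activity = int(np.argmax(probs))
```

In a real model the FC head could be replaced by the LSTM-based RNN mentioned above, consuming the per-timestep feature maps instead of pooled vectors.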
Moreover, HAR DL algorithms outperform standard machine learning classifiers, such as Support Vector Machines (SVMs), which are fed with hand-crafted time- and frequency-domain features. Nevertheless, DL algorithms have a major drawback: they require huge volumes of labeled data to be trained effectively, while motion-signal annotation is a labor-intensive and time-consuming procedure.
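For contrast with the learned features, here is a sketch of the kind of hand-crafted time- and frequency-domain features typically fed to an SVM. The feature set and the 50 Hz sampling rate are illustrative choices.

```python
import numpy as np

def handcrafted_features(window, fs=50.0):
    """Classic time- and frequency-domain features for one sensor channel."""
    feats = {
        "mean": float(window.mean()),
        "std": float(window.std()),
        "rms": float(np.sqrt(np.mean(window ** 2))),
        "zero_crossings": int(np.sum(np.diff(np.sign(window)) != 0)),
    }
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    feats["dominant_freq_hz"] = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip DC bin
    feats["spectral_energy"] = float(spectrum.sum() / window.size)
    return feats

# A synthetic 2 Hz oscillation (roughly a walking cadence) sampled at 50 Hz for 2 s
t = np.arange(0, 2.0, 1.0 / 50.0)
window = np.sin(2 * np.pi * 2.0 * t)
feats = handcrafted_features(window)  # dominant_freq_hz comes out at 2.0 Hz
```

Designing and tuning such features for every new sensor and activity set is exactly the manual effort that end-to-end DL removes.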
Self-supervision is an antidote to this issue. Self-supervised learning has been shown to enable AI systems to recognize and understand generalizable patterns in natural language processing (NLP), computer vision, and speech recognition. Few works, however, have applied it to wearable-based activity/gesture recognition.
The objective of this thesis is to build a large unlabeled dataset using wearable sensors (accelerometer, PPG) and to pretrain a deep learning algorithm in a self-supervised manner, enabling knowledge transfer to a much smaller labeled dataset for wearable-based activity recognition.
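One common way such self-supervised pretraining can work is sketched below: a transformation-discrimination pretext task, where unlabeled windows are augmented and the pseudo-label is the identity of the applied transformation, so no manual annotation is needed. The proposal does not fix a specific pretext task; this particular one, its transformation set, and all shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretext task: predict which transformation was applied to a window.
TRANSFORMS = [
    lambda w: w,                          # 0: identity
    lambda w: w[:, ::-1],                 # 1: time reversal
    lambda w: -w,                         # 2: sign flip
    lambda w: w * rng.uniform(0.7, 1.3),  # 3: random magnitude scaling
]

def make_pretext_batch(unlabeled_windows):
    """Turn unlabeled windows into a labeled pretext-task batch for free."""
    xs, ys = [], []
    for w in unlabeled_windows:
        label = int(rng.integers(len(TRANSFORMS)))
        xs.append(TRANSFORMS[label](w))
        ys.append(label)
    return np.stack(xs), np.array(ys)

# Hypothetical unlabeled 3-axis accelerometer windows (2 s at 50 Hz)
unlabeled = [rng.standard_normal((3, 100)) for _ in range(32)]
x_batch, y_batch = make_pretext_batch(unlabeled)
# x_batch / y_batch would train the encoder on the large unlabeled dataset;
# the pretrained encoder is then fine-tuned on the small labeled HAR dataset.
```

The fine-tuning step typically reuses the pretrained encoder weights and replaces only the pretext head with an activity-classification head.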
Interested candidates should send an email to email@example.com, attaching their CV and a transcript of exams with grades.
Required skills Python programming and familiarity with machine/deep learning concepts and the corresponding models.
Notes Thesis in collaboration with the company "ThinGenious", the University of West Attica, the University of Bologna, and ETH Zurich. The thesis can be carried out either in Torino or in Athens, Greece.
Proposal valid until 26/09/2023