Self-supervised Learning for Wearable-based Activity Recognition
Thesis in external company / Thesis abroad
keywords ARTIFICIAL INTELLIGENCE, AUTOENCODERS, BIOMETRIC IDENTIFICATION, BIOSIGNAL ANALYSIS, CONVOLUTIONAL NEURAL NETWORKS, DEEP LEARNING, DEEP NEURAL NETWORKS, EMBEDDED SYSTEMS, ENERGY EFFICIENCY, LOW POWER, MICROCONTROLLERS, PPG, SELF-SUPERVISED LEARNING, WEARABLE AND IOT DEVICES, WEARABLE COMPUTING, WEARABLE DEVICES
Reference persons DANIELE JAHIER PAGLIARI
External reference persons Panagiotis Kasnesis (University of West Attica), Alessio Burrello (Politecnico di Torino)
Thesis type EXPERIMENTAL, SOFTWARE DEVELOPMENT
Description Wearable human activity recognition (HAR) can enhance well-being and health monitoring, facilitate smart environments, and improve physical security in public spaces. In contrast to HAR methods relying on sensors that raise privacy concerns (e.g., cameras), wearable activity monitoring is unobtrusive. Moreover, like computer vision, natural language processing, and speech recognition, wearable HAR has not remained unaffected by the rise of deep learning (DL). DL algorithms such as Convolutional Neural Networks (ConvNets or CNNs) have proven capable of automatically extracting features from nearly raw motion signals, or even of fusing them with photoplethysmography (PPG) signals. These high-level features are then fed to fully connected (FC) layers or to Recurrent Neural Networks (RNNs) equipped with the Long Short-Term Memory (LSTM) mechanism, which fuse the multimodal features and classify the incoming sensor channels into an activity.
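The CNN-plus-LSTM pipeline described above can be sketched in PyTorch. All layer sizes, channel counts, and the window length below are illustrative assumptions, not a prescribed architecture: convolutions extract features from raw multichannel windows (e.g., accelerometer plus PPG), an LSTM fuses them over time, and an FC layer outputs activity logits.

```python
import torch
import torch.nn as nn

class ConvLSTMHAR(nn.Module):
    """Illustrative HAR model: 1D CNN feature extractor + LSTM + FC classifier."""
    def __init__(self, in_channels=6, num_classes=8, hidden=64):
        super().__init__()
        # Convolutions operate directly on (nearly) raw sensor channels.
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM fuses the high-level features over the time axis.
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        f = self.features(x)              # (batch, 64, time // 2)
        f = f.transpose(1, 2)             # (batch, time // 2, 64)
        _, (h, _) = self.lstm(f)          # last hidden state summarizes the window
        return self.classifier(h[-1])     # (batch, num_classes)

model = ConvLSTMHAR()
# 4 windows, 6 channels (e.g., 3-axis accelerometer + PPG-derived), 128 samples.
logits = model(torch.randn(4, 6, 128))
print(logits.shape)  # torch.Size([4, 8])
```

In practice the window length and channel layout would follow the sampling rates of the chosen wearable sensors.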
Moreover, HAR DL algorithms outperform standard machine learning classifiers, such as Support Vector Machines (SVMs), which are fed with hand-crafted time- and frequency-domain features. Nevertheless, DL algorithms have a major drawback: they require huge volumes of labeled data to be trained effectively, while motion signal annotation is a labor-intensive and time-consuming procedure.
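For contrast, the classical baseline mentioned above can be sketched as follows. The specific features (means, standard deviations, spectral statistics) are common illustrative choices, not a fixed recipe:

```python
import numpy as np
from sklearn.svm import SVC

def handcrafted_features(window):
    """Simple time- and frequency-domain features for one (channels, time) window."""
    spectrum = np.abs(np.fft.rfft(window, axis=1))
    return np.concatenate([
        window.mean(axis=1),          # time domain: per-channel mean
        window.std(axis=1),           # time domain: per-channel std
        spectrum.mean(axis=1),        # frequency domain: mean magnitude
        spectrum.argmax(axis=1),      # frequency domain: dominant bin
    ])

# Placeholder data standing in for real labeled sensor windows.
rng = np.random.default_rng(0)
X = np.stack([handcrafted_features(rng.normal(size=(6, 128))) for _ in range(20)])
y = rng.integers(0, 2, size=20)       # placeholder activity labels
clf = SVC().fit(X, y)
preds = clf.predict(X[:2])
print(preds.shape)  # (2,)
```

The key point is that the feature engineering step is manual here, whereas the DL pipeline learns its features from the raw signals.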
Self-supervision is an antidote to this issue. Self-supervised learning has been proven to enable AI systems to recognize and understand generalizable patterns in natural language processing (NLP), computer vision, and speech recognition. Nevertheless, there are still few works on wearable-based activity/gesture recognition.
The objective of this thesis is to build a large unlabeled dataset using wearable sensors (accelerometer, PPG) and to pretrain a deep learning model in a self-supervised manner, enabling knowledge transfer to a comparatively small labeled dataset for wearable-based activity recognition.
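One possible instantiation of this pretrain-then-transfer scheme, using signal reconstruction as the pretext task (autoencoders appear among the keywords), is sketched below. The two-stage structure is the point; the layer sizes and the choice of a convolutional autoencoder are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Stage 1: self-supervised pretraining on unlabeled windows.
# Pretext task: reconstruct the input signal (no activity labels needed).
encoder = nn.Sequential(
    nn.Conv1d(6, 32, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv1d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose1d(64, 32, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
    nn.ConvTranspose1d(32, 6, 5, stride=2, padding=2, output_padding=1),
)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

unlabeled = torch.randn(16, 6, 128)   # placeholder for real unlabeled sensor windows
recon = decoder(encoder(unlabeled))
loss = nn.functional.mse_loss(recon, unlabeled)
loss.backward()
opt.step()

# Stage 2: transfer to the small labeled dataset.
# Reuse the pretrained encoder; train only a small classification head
# (or fine-tune the encoder with a low learning rate).
head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 8))
labeled = torch.randn(4, 6, 128)      # placeholder for labeled windows
logits = head(encoder(labeled))
print(logits.shape)  # torch.Size([4, 8])
```

Contrastive or masked-prediction pretext tasks would slot into the same two-stage structure in place of reconstruction.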
Interested candidates must send an email to firstname.lastname@example.org, attaching their CV and their transcript of exams with grades.
Required skills Python programming and familiarity with machine/deep learning concepts and the corresponding models.
Notes Thesis in collaboration with the company "ThinGenious", University of West Attica, University of Bologna and ETH Zurich. The thesis can be carried out either in Torino or in Athens, Greece.
Deadline 26/09/2024