AI-assisted multimodal inspection tools using deep learning models for non-contact sensing
Reference person GIAN PAOLO CIMELLARO
Description Existing vision-based inspection techniques rely mostly on RGB cameras because low-cost, high-resolution cameras are readily available. However, a traditional RGB camera projects 3D objects onto a 2D image plane, losing distance and scale information.
This research will exploit non-contact sensing devices such as LiDAR, depth, thermal, or hyperspectral cameras, which can provide vital information that traditional RGB cameras cannot capture. Additionally, recent augmented-reality (AR) devices such as the Microsoft HoloLens, which are equipped with multimodal sensors, will be used. The collected data will be used to develop multimodal deep-learning models that combine data from multiple sources for enhanced detection accuracy.
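As one illustration of the kind of model the proposal describes, combining modalities is often done by encoding each sensor stream separately and concatenating the resulting features before classification (feature-level, or "mid", fusion). The sketch below is a minimal NumPy toy, not the project's actual architecture: the random linear encoders stand in for trained per-modality backbones, and all dimensions are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical feature sizes per modality (e.g., pooled CNN features).
RGB_DIM, DEPTH_DIM, HIDDEN, CLASSES = 64, 32, 16, 3

# Random weights stand in for trained per-modality encoders and a fusion head.
W_rgb = rng.normal(size=(RGB_DIM, HIDDEN))
W_depth = rng.normal(size=(DEPTH_DIM, HIDDEN))
W_head = rng.normal(size=(2 * HIDDEN, CLASSES))

def fuse_and_classify(rgb_feat, depth_feat):
    """Feature-level fusion: encode each modality, concatenate, classify."""
    h_rgb = relu(rgb_feat @ W_rgb)        # RGB stream encoding
    h_depth = relu(depth_feat @ W_depth)  # depth/LiDAR stream encoding
    fused = np.concatenate([h_rgb, h_depth], axis=-1)
    return softmax(fused @ W_head)        # class probabilities per sample

# A batch of 4 synthetic samples, one feature vector per modality.
probs = fuse_and_classify(rng.normal(size=(4, RGB_DIM)),
                          rng.normal(size=(4, DEPTH_DIM)))
```

In practice each stream would be a deep network (e.g., a CNN for RGB and a point- or voxel-based encoder for LiDAR), and the fusion point, early, mid, or late, is itself a design choice the research would investigate.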
Deadline 17/11/2023 SUBMIT YOUR APPLICATION