Scalability in Machine Learning and Deep Learning Models: An In-Depth Analysis through Distributed Architectures
Research Group: DAUIN - GR-04 - Database and Data Mining Group (DBDMG)
Description: This thesis proposal explores the fundamental concept of scalability in the context of Machine Learning and Deep Learning. The focus will be on the analysis and optimisation of distributed and reconfigurable computing architectures, such as FPGAs and GPUs, intended to handle large volumes of data and the related computational effort. The research is driven by the growing demand for machine learning systems capable of dynamically adapting to the increasing complexity of data and models. The candidate will investigate and evaluate how performance is affected by the specific topology of hardware nodes within the distributed architecture, identifying the most suitable configurations, improving efficiency, and parallelising model training.
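To make the idea of parallelised training concrete, the following is a minimal sketch of synchronous data-parallel SGD, the most common strategy for distributing model training across hardware nodes. The batch is sharded across simulated workers, each worker computes a local gradient, and the gradients are averaged as a real all-reduce would do on a GPU cluster. The function names, the linear model, and the NumPy-based simulation are illustrative assumptions, not part of the proposal itself.

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of mean-squared-error loss for a linear model
    # on one worker's shard of the data.
    residual = X @ w - y
    return 2.0 * X.T @ residual / len(y)

def data_parallel_step(w, X, y, n_workers, lr=0.1):
    # One synchronous data-parallel SGD step:
    # 1) shard the batch across workers,
    # 2) compute per-worker gradients locally,
    # 3) average them (a simulated all-reduce),
    # 4) apply one update to the shared parameters.
    X_shards = np.array_split(X, n_workers)
    y_shards = np.array_split(y, n_workers)
    grads = [local_gradient(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
    # Weight each shard by its size so the averaged gradient
    # equals the full-batch gradient exactly.
    weights = [len(ys) / len(y) for ys in y_shards]
    g = sum(wt * gr for wt, gr in zip(weights, grads))
    return w - lr * g
```

Because the shard-weighted average reproduces the full-batch gradient, the update is mathematically identical regardless of the number of workers; in practice, communication cost and node topology (the focus of the proposal) determine how efficiently the all-reduce step scales.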
Deadline: 01/12/2024. Submit your application.