Application of Approximate Computing Techniques in Large Language Models (LLMs)
Keywords APPROXIMATE COMPUTING, EDGE COMPUTING, LARGE LANGUAGE MODELS
Reference persons STEFANO DI CARLO, ALESSANDRO SAVINO
Research Groups DAUIN - GR-24 - reSilient coMputer archItectures and LIfE Sci - SMILIES
Thesis type RESEARCH / EXPERIMENTAL
Description This thesis investigates the application of approximate computing techniques to Large Language Models (LLMs) such as GPT, BERT, and their variants. With the exponential rise in computational requirements and energy consumption associated with these models, approximate computing offers a feasible approach to improving efficiency without significantly compromising performance. The research will identify appropriate approximation methods, integrate them into LLMs, and assess their impact on model accuracy, computational efficiency, and resource utilization.
Motivation: LLMs have propelled natural language processing (NLP) to new heights, delivering state-of-the-art performance across diverse tasks. However, their large-scale nature demands substantial computational resources and energy, which hinders practical deployment. Approximate computing, which exploits reduced precision and other approximation methods to boost efficiency, offers a promising solution. This thesis will investigate how these techniques can be effectively applied to LLMs to strike a balance between computational efficiency and model accuracy. The ultimate goal is to evaluate the potential for deploying LLMs in Edge Computing scenarios.
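To make the "reduced precision" idea mentioned above concrete, the sketch below shows symmetric per-tensor int8 quantization of a weight matrix, one of the simplest approximate computing techniques applied to LLMs. This is an illustrative example, not part of the thesis specification; the matrix is random and stands in for an actual model weight tensor, and NumPy is assumed available.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
# Stand-in for an LLM weight tensor (hypothetical, for illustration only)
w = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than fp32; rounding error is bounded by scale/2
print(f"memory: {w.nbytes} -> {q.nbytes} bytes")
print(f"max absolute error: {np.max(np.abs(w - w_hat)):.5f}")
```

The trade-off the thesis would study is visible even here: memory shrinks by 4x, while the per-weight error stays bounded by half the quantization step. Evaluating how such errors propagate through a full transformer, and what they cost in task accuracy, is the core of the proposed work.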
Required skills C/C++ (and/or Rust)
Python
Fundamentals of Computer Architectures
Deadline 27/05/2025
SUBMIT YOUR APPLICATION