
Approximate computing for high-performance cyber-physical systems

Keywords DIGITAL SYSTEM DESIGN, TEST AND VERIFICATION

Reference persons STEFANO DI CARLO

External reference persons Alessandro Savino (alessandro.savino@polito.it), Alberto BOSIO

Research Groups TESTGROUP

Thesis type RESEARCH

Description Energy efficiency is one of the major driving forces of the current computer industry, relevant for supercomputers on one end of the spectrum and for small portable personal electronics and sensors on the other. The scale of the problem is illustrated by the fact that the electricity consumption of European data centers alone is expected to grow from 56 billion kWh in 2007 to 104 billion kWh in 2020. The situation is similar in the US, where data center electricity consumption grew from 61 billion kWh in 2006 to 91 billion kWh in 2013 and is expected to reach 140 billion kWh in 2020. Moreover, the power wall is a major hurdle to putting existing solutions into practice. For example, artificial intelligence is ready to provide solutions in many domains, but the resource and power demands of the underlying algorithms are far too high for the target applications. Closing this large gap between efficiency and performance needs requires completely new computing paradigms, and Approximate Computing (ApCo) appears to be a promising solution.

Approximate computing returns a possibly inaccurate result rather than a guaranteed accurate one, in situations where an approximate result is sufficient for the purpose. One example of such a situation is a search engine, where no exact answer may exist for a given query and hence many answers may be acceptable. Similarly, the occasional dropping of frames in a video application can go undetected due to the perceptual limitations of humans.

Approximate computing is based on the observation that, in many scenarios, performing an exact computation requires a large amount of resources, while allowing bounded approximation can provide disproportionate gains in performance and energy while still achieving acceptable result accuracy.
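To make this trade-off concrete, here is a minimal Python sketch of one well-known ApCo technique, loop perforation, applied to a simple mean computation. The example is purely illustrative and not part of the project framework; the function names and the choice of workload are assumptions.

```python
import random

def exact_mean(values):
    # Exact computation: visits every element.
    return sum(values) / len(values)

def perforated_mean(values, skip=2):
    # Loop perforation: visit only every skip-th element, trading a
    # bounded accuracy loss for roughly skip-times less work.
    sampled = values[::skip]
    return sum(sampled) / len(sampled)

random.seed(42)
data = [random.gauss(100.0, 15.0) for _ in range(1_000_000)]
exact = exact_mean(data)
approx = perforated_mean(data, skip=4)
print(f"exact={exact:.4f}  approx={approx:.4f}  "
      f"relative error={abs(exact - approx) / abs(exact):.2%}")
```

Here the perforated loop does a quarter of the work, while the relative error on the result stays within a small, statistically bounded margin.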

This thesis is part of a larger project that aims at developing a framework able to quickly and accurately determine how an error propagates through an application, in order to compute its impact on the result in a formal way, using different metrics defined in previous studies. The error propagation analysis has to be performed in both directions: from the lower layers up to the final application (error propagation) and, vice versa, from the final application down to the lower layers (error back-trace). Please note that this kind of framework is completely new to approximate computing. It requires the definition of models that take into account the different system layers, and of how these models interact, in order to analyze the error propagation.
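As an illustration of the forward direction (error propagation), the following hypothetical Python sketch injects a single bit flip at the data layer and measures its impact at the application layer with a relative-error metric. The fault model, the toy application, and the chosen metric are all assumptions made for this example; the project framework would formalize such models per layer.

```python
import struct

def bit_flip(x: float, bit: int) -> float:
    # Lower-layer fault model (an assumption for this example):
    # flip one bit of the 64-bit IEEE-754 representation of x.
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return flipped

def application(inputs):
    # Illustrative application layer: a two-tap moving-average filter
    # followed by a scalar reduction.
    smoothed = [(a + b) / 2.0 for a, b in zip(inputs, inputs[1:])]
    return sum(smoothed) / len(smoothed)

golden_inputs = [float(i) for i in range(1, 101)]
golden = application(golden_inputs)

# Forward propagation: corrupt one value at the data layer and measure
# the impact at the application layer with a relative-error metric.
for bit in (0, 20, 40, 52, 62):
    faulty_inputs = list(golden_inputs)
    faulty_inputs[50] = bit_flip(faulty_inputs[50], bit)
    error = abs(application(faulty_inputs) - golden) / abs(golden)
    print(f"bit {bit:2d} flipped -> relative error {error:.3e}")
```

Running the sketch shows the intuition the framework builds on: low mantissa bits barely perturb the final result, while exponent bits can dominate it, so the impact of a lower-layer error depends strongly on where it strikes.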

The analysis of the software to identify its critical portions is usually done with fault injection tools, which are time-consuming and become unfeasible when working with high-performance applications. The goal, therefore, is to reduce the usage of fault injection as much as possible. For example, a static analysis of the software, guided by dedicated metrics, machine learning algorithms, and formal execution methods, will help identify the non-critical portions of the software and avoid applying fault injection to them.
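The following sketch illustrates the idea of pruning a fault injection campaign with a cheap static screen. The screening heuristic used here (ranking injection sites by statically known weights) is a deliberately simple stand-in for the dedicated metrics, machine learning, and formal methods mentioned above, not the project's actual technique.

```python
import random
import struct

def bit_flip(x: float, bit: int) -> float:
    # Same single-bit-flip fault model as in the sketch above.
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return flipped

def weighted_sum(values, weights):
    return sum(v * w for v, w in zip(values, weights))

random.seed(0)
values = [random.uniform(0.0, 1.0) for _ in range(10_000)]
weights = [random.uniform(0.0, 1.0) for _ in range(10_000)]
golden = weighted_sum(values, weights)

# Static screen: the weights are known before execution, so sites with a
# tiny weight cannot perturb the result much. Restricting the expensive
# injection campaign to the top-weighted sites shrinks it ~10x here.
threshold = sorted(weights, reverse=True)[len(weights) // 10]
candidates = [i for i, w in enumerate(weights) if w >= threshold]
print(f"campaign reduced from {len(values)} to {len(candidates)} sites")

# Inject into a few of the screened sites only.
for site in random.sample(candidates, 5):
    faulty = list(values)
    faulty[site] = bit_flip(faulty[site], 51)  # flip a high mantissa bit
    error = abs(weighted_sum(faulty, weights) - golden) / abs(golden)
    print(f"site {site:5d} -> relative error {error:.3e}")
```

The point of the design is that the screen is computed once, statically, while each avoided injection saves a full program run, which is where the cost lies for high-performance applications.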

Required skills C/C++/Python programming

Notes The thesis is developed in collaboration with LIRMM, University of Montpellier, with the possibility of developing part of the thesis in France with a cost contribution of about 500€/month.


Deadline 27/04/2018



