Muhammad Shafique
Gigantic rates of data production in the era of Big Data, the Internet of Things (IoT) / Internet of Everything (IoE), and Cyber-Physical Systems (CPS) pose incessantly escalating demands for massive data processing, storage, and transmission, while such systems continuously interact with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, these systems need to support not only high-performance capabilities within a tight power/energy envelope, but also need to be intelligent/cognitive, self-learning, and robust.
The sharp increase in power densities of on-chip systems following the discontinuation of Dennard scaling forces us to rethink how computing systems are designed. As process technology shrinks and per-transistor performance/power efficiency no longer keeps pace, even with well-known power-reduction techniques (like DVFS and power gating) applied at various abstraction layers, continuing to support precise computing across the stack is most likely not sufficient to solve the rising energy-efficiency challenges. Approximate Computing (also known as Inexact Computing) relaxes the bounds of precise/exact computing to provide new opportunities for improving the area, power/energy, and performance efficiency of systems by orders of magnitude, at the cost of reduced output quality.
This talk will provide an introduction to the emerging trend of approximate computing, followed by our cross-layer approximate computing framework that covers various abstraction layers of the hardware/software stack, ranging from the circuit layer all the way up to the application layer. The talk provides a systematic understanding of how to generate and explore the design space of approximate components (adders and multipliers) and accelerators, as well as our corresponding open-source libraries, which enable a wide range of power/energy, performance, area, and output-quality tradeoffs, and a high degree of design flexibility to facilitate their design. Towards the end, the talk will discuss challenges and opportunities for building energy-efficient and adaptive architectures and hardware accelerators for machine learning, and how approximate computing can play an important role.
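To make the quality/efficiency tradeoff behind approximate adders concrete, the sketch below models one well-known approximation style in software: a lower-part OR adder, in which the low k bits are OR-ed instead of added so that no carry chain is needed in the lower part. This is a minimal illustrative example of the general technique, not the actual design of the components or libraries discussed in the talk; the function name and the choice of k are assumptions for illustration.

```python
def approx_add(a: int, b: int, k: int = 4) -> int:
    """Approximate unsigned addition in the lower-part-OR style:
    the low k bits are OR-ed (cheap, carry-free in hardware) and
    only the high bits are added exactly. Output quality degrades
    as k grows, in exchange for a shorter critical carry path."""
    low_mask = (1 << k) - 1
    low = (a & low_mask) | (b & low_mask)   # OR replaces the low-part add
    high = (a >> k) + (b >> k)              # exact add on the high part
    return (high << k) | low

# Quantify the quality loss against exact addition.
exact = 100 + 37                  # 137
approx = approx_add(100, 37)      # 133: the low-part carry is lost
error = exact - approx            # 4
```

With k = 0 the function degenerates to exact addition, so k directly exposes the kind of quality-versus-efficiency knob that a design-space exploration over approximate components can sweep.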