Student project: Designing Hardware Accelerator for Sparse Transformer Models

Research & development - Eindhoven | More than two weeks ago

This project aims to design an energy-efficient hardware accelerator for sparse Transformer models.

What you will do

In recent years, Transformer models have emerged as a powerful and versatile class of neural network architectures, revolutionizing natural language processing, image recognition, and various other tasks. However, the computational demands of these models, especially in large-scale applications, pose significant challenges. Real-time responses are crucial for many of these applications, yet they remain difficult to achieve, particularly for intricate models that process raw sensory data such as LiDAR point clouds and Dynamic Vision Sensor (DVS) streams. Sparse Transformer models have demonstrated the potential to address some of these challenges by leveraging sparsity patterns in the attention mechanism.


This master's thesis proposal aims to design a specialized hardware accelerator tailored for Sparse Transformer models, including activation sparsity, weight sparsity, and sparse self-attention, with a focus on optimizing performance, energy efficiency, and scalability. The proposed hardware accelerator will exploit the inherent sparsity present in the attention mechanism to reduce computational requirements and improve overall efficiency.
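To illustrate the kind of sparsity the accelerator would exploit, the sketch below shows a minimal masked self-attention computation in NumPy, where a boolean sparsity mask prunes disallowed query-key pairs before the softmax. The function name, the banded (local-window) pattern, and all parameters are illustrative assumptions, not part of the project specification; a hardware datapath would skip the masked-out work entirely rather than compute and discard it.

```python
import numpy as np

def sparse_attention(Q, K, V, mask):
    """Attention in which masked-out positions contribute nothing.

    Q, K, V: (seq_len, d) arrays; mask: boolean (seq_len, seq_len),
    True where a query may attend to a key. Illustrative sketch only.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)               # (seq_len, seq_len) logits
    scores = np.where(mask, scores, -np.inf)    # prune disallowed pairs
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)                    # masked entries become 0
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Example: a local (banded) sparsity pattern with window radius 2
n, d = 6, 4
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
mask = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) <= 2
out = sparse_attention(Q, K, V, mask)
print(out.shape)  # (6, 4)
```

With a window radius of 2, each query attends to at most 5 of the 6 keys; in longer sequences such structured patterns reduce the attention cost from quadratic to roughly linear in sequence length, which is the saving a sparsity-aware datapath would realize in hardware.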


Specific Goals:

  1. Design a dataflow architecture with a sparsity-aware datapath for Transformers in an event-driven paradigm.
  2. Enhance scheduling and control flow for dynamic adaptation to fluctuating sparsity patterns.
  3. Optimize memory management to effectively utilize on-chip memory.
  4. Extend the existing software stack (SDK) to accommodate and integrate your designed system, exploring and selecting appropriate improvements.

What we do for you

At imec’s Hardware Efficient AI lab at the Holst Centre, High Tech Campus Eindhoven, we drive advanced exploration of next-generation neuromorphic accelerators and deep-learning models targeting always-on, low-power Edge-AI applications, such as autonomous drones with DVS cameras and high-speed industrial vision. We are exploring several new hardware/software co-optimization techniques to push the limits of our in-house event-driven multicore neuromorphic processor, focusing on energy-efficient learning and inference with high accuracy at the far edge. The work of this project falls under the scope of a European project ("REBECCA"), and the outputs will be published in deliverables and high-impact journals/conferences (subject to the quality of the work). Imec NL provides the required equipment, access to lab facilities, a workplace in the Holst Centre at High Tech Campus, and a monthly allowance for running/living expenses during the internship.

Who you are

  • PhD or MSc student in electrical engineering or computer science.
  • Knowledge of and interest in deep learning and micro-architecture.
  • Available for a period of 9-12 months.
  • Experience with HDL (Verilog/VHDL) and scripting languages such as Python.
  • Structured way of reporting, both orally and in writing.
  • Understanding of Neuromorphic design is a plus.
  • Entitled to do an internship in the Netherlands. This means that you are either an EU/EEA citizen, or a non-EU/EEA citizen who is studying at a Dutch university and needs to do an internship as part of their studies.

Interested?

Does this project sound like an interesting next step in your career at imec? Don’t hesitate to submit your application by clicking on ‘APPLY NOW’.
Got some questions about the recruitment process? Martijn Kohl of the Talent Acquisition Team will be happy to assist you.

 

