Are LLMs Any Good for High-Level Synthesis?
Abstract
The increasing complexity and demand for faster, energy-efficient hardware designs necessitate innovative High-Level Synthesis (HLS) methodologies. This paper explores the potential of Large Language Models (LLMs) to streamline or replace the HLS process, leveraging their ability to understand natural language specifications and refactor code. We survey the current research and conduct experiments comparing Verilog designs generated by a standard HLS tool (Vitis HLS) with those produced by LLMs translating C code or natural language specifications. Our evaluation focuses on quantifying the impact on performance, power, and resource utilization, providing an assessment of the efficiency of LLM-based approaches. This study aims to illuminate the role of LLMs in HLS, identifying promising directions for optimized hardware design in applications such as AI acceleration, embedded systems, and high-performance computing.
- Publication: arXiv e-prints
- Pub Date: August 2024
- DOI: 10.48550/arXiv.2408.10428
- arXiv: arXiv:2408.10428
- Bibcode: 2024arXiv240810428L
- Keywords: Computer Science - Hardware Architecture; Computer Science - Artificial Intelligence
- E-Print: ICCAD '24 Special Session on AI4HLS: New Frontiers in High-Level Synthesis Augmented with Artificial Intelligence