Papers

Discover Innovation: Explore the Latest Scientific Breakthroughs from PrimisAI Minds

Our Papers

EDA-Aware RTL Generation with Large Language Models

Mubashir ul Islam¹, Humza Sami¹, Pierre-Emmanuel Gaillardon¹,², and Valerio Tenace¹

¹PrimisAI, Los Gatos, CA, USA

²University of Utah, Salt Lake City, UT, USA

Abstract:

Large Language Models (LLMs) have become increasingly popular for generating RTL code. However, producing error-free RTL code in a zero-shot setting remains highly challenging for even state-of-the-art LLMs, often leading to issues that require manual, iterative refinement. This additional debugging process can dramatically increase the verification workload, underscoring the need for robust, automated correction mechanisms to ensure code correctness from the start.

In this work, we introduce AIvril2, a self-verifying, LLM-agnostic agentic framework aimed at enhancing RTL code generation through iterative corrections of both syntax and functional errors. Our approach leverages a collaborative multi-agent system that incorporates feedback from error logs generated by EDA tools to automatically identify and resolve design flaws.
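At a high level, the feedback loop the abstract describes can be sketched as follows. This is a minimal illustration only: `generate_rtl` and `compile_rtl` are hypothetical stand-ins for an LLM call and an EDA tool invocation (e.g. a Verilog compiler producing an error log), not part of AIvril2's actual interface.

```python
def generate_rtl(prompt: str, error_log: str = "") -> str:
    """Stand-in for an LLM call; a real agent would resend the prompt
    together with the previous tool error log."""
    if "missing semicolon" in error_log:
        return "module top(); endmodule"   # "repaired" second attempt
    return "module top() endmodule"        # first attempt, syntax error

def compile_rtl(rtl: str) -> str:
    """Stand-in for an EDA tool; returns an error log, empty on success."""
    return "" if "();" in rtl else "syntax error: missing semicolon"

def refine(prompt: str, max_iters: int = 5) -> tuple[str, int]:
    """Iteratively regenerate RTL until the tool log comes back clean."""
    log = ""
    for i in range(1, max_iters + 1):
        rtl = generate_rtl(prompt, log)
        log = compile_rtl(rtl)
        if not log:
            return rtl, i   # clean compile: stop iterating
    raise RuntimeError("did not converge within the iteration budget")

rtl, iters = refine("4-bit counter")
```

With these stubs the loop converges on the second iteration; in the real framework the same structure would drive an actual LLM and EDA toolchain until both syntax and functional checks pass.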

AIVRIL: AI-DRIVEN RTL GENERATION WITH VERIFICATION IN-THE-LOOP

Mubashir ul Islam¹, Humza Sami¹, Pierre-Emmanuel Gaillardon¹,², and Valerio Tenace¹

¹PrimisAI, Los Gatos, CA, USA

²University of Utah, Salt Lake City, UT, USA

Abstract:

Large Language Models (LLMs) are computational models capable of performing complex natural language processing tasks. Leveraging these capabilities, LLMs hold the potential to transform the entire hardware design stack, with predictions suggesting that front-end and back-end tasks could be fully automated in the near future. Currently, LLMs show great promise in streamlining Register Transfer Level (RTL) generation, enhancing efficiency, and accelerating innovation. However, their probabilistic nature makes them prone to inaccuracies—a significant drawback in RTL design, where reliability and precision are essential.

To address these challenges, this paper introduces AIVRIL, an advanced framework designed to enhance the accuracy and reliability of RTL-aware LLMs. AIVRIL employs a multi-agent, LLM-agnostic system for automatic syntax correction and functional verification, significantly reducing—and in many cases, completely eliminating—instances of erroneous code generation. Experimental results conducted on the VerilogEval-Human dataset show that our framework improves code quality by nearly 2× when compared to previous works, while achieving an 88.46% success rate in meeting verification objectives. This represents a critical step toward automating and optimizing hardware design workflows, offering a more dependable methodology for AI-driven RTL design.
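The functional-verification half of such a loop amounts to simulating the generated design against a golden reference and turning any mismatches into a report for the next correction round. The sketch below illustrates only this idea; `simulate`, the golden model, and the lambda "designs" are hypothetical stand-ins, not part of AIVRIL's actual interface.

```python
def reference_and(a: int, b: int) -> int:
    """Golden model: bitwise AND (the behavior the RTL should implement)."""
    return a & b

def simulate(rtl_behavior, stimuli):
    """Stand-in for an RTL simulator: apply each stimulus, record outputs."""
    return [rtl_behavior(a, b) for a, b in stimuli]

def verify(rtl_behavior, stimuli) -> list[str]:
    """Compare simulated outputs to the golden model; return mismatches."""
    report = []
    for (a, b), got in zip(stimuli, simulate(rtl_behavior, stimuli)):
        want = reference_and(a, b)
        if got != want:
            report.append(f"inputs ({a},{b}): expected {want}, got {got}")
    return report   # an empty report means the verification objective is met

stimuli = [(0, 0), (1, 0), (1, 1), (3, 5)]
buggy = lambda a, b: a | b   # functionally wrong design (OR instead of AND)
fixed = lambda a, b: a & b   # corrected design
```

Here `verify(buggy, stimuli)` yields a non-empty mismatch report that an agent could feed back to the LLM, while `verify(fixed, stimuli)` returns an empty report and the loop terminates.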