# RTLLM-v1.1
---
dataset_info:
  features:
    - name: problem_id
      dtype: string
    - name: folder_path
      dtype: string
  splits:
    - name: train
      num_bytes: 1389
      num_examples: 28
  download_size: 2063
  dataset_size: 1389
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

## Removed samples
We have removed the samples `asyn_fifo` and `multi_pipe_4bit` due to incompatibilities with Verilator; take this into account. If you want the full dataset of 50 samples, go to the original repo.

Disclaimer: I am not the original author and uploaded this dataset here only for convenience. It contains only the basename of each folder and the relative paths, to avoid overhead. Please refer to the original repo for any further information: https://github.com/hkust-zhiyao/RTLLM

Notice that the `risc_cpu` experiment is missing, as it was not provided by the authors.
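Each row pairs a `problem_id` with a relative `folder_path`, so rows can be joined back to a local clone of the original RTLLM repo. A minimal sketch of that idea follows; the example rows, the `RTLLM` clone directory, and the `resolve` helper are illustrative assumptions, not part of this dataset:

```python
import os

# Samples removed from this upload due to Verilator incompatibilities.
REMOVED = {"asyn_fifo", "multi_pipe_4bit"}

# Illustrative rows mirroring the dataset schema (problem_id, folder_path);
# real rows come from the `train` split.
rows = [
    {"problem_id": "adder_8bit", "folder_path": "adder_8bit"},
    {"problem_id": "asyn_fifo", "folder_path": "asyn_fifo"},
]

def resolve(rows, repo_root="RTLLM"):
    """Skip removed samples and map each problem_id to a path inside a
    local clone of the original repo (repo_root is an assumed location)."""
    return {
        r["problem_id"]: os.path.join(repo_root, r["folder_path"])
        for r in rows
        if r["problem_id"] not in REMOVED
    }

paths = resolve(rows)
```

Note that `asyn_fifo` is dropped by the filter, matching the removed-samples list above.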

```bibtex
@inproceedings{lu2024rtllm,
  author={Lu, Yao and Liu, Shang and Zhang, Qijun and Xie, Zhiyao},
  booktitle={2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC)},
  title={RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model},
  year={2024},
  pages={722-727},
  organization={IEEE}
}

@inproceedings{liu2024openllm,
  title={OpenLLM-RTL: Open Dataset and Benchmark for LLM-Aided Design RTL Generation (Invited)},
  author={Liu, Shang and Lu, Yao and Fang, Wenji and Li, Mengming and Xie, Zhiyao},
  booktitle={Proceedings of the 2024 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)},
  year={2024},
  organization={ACM}
}
```

This dataset was uploaded to support reproducibility of the benchmarks implemented in TuRTLe, a framework to assess LLMs across key RTL generation tasks.