---
title: TuRTLe Leaderboard
emoji: 🐢
colorFrom: gray
colorTo: green
sdk: gradio
app_file: app.py
pinned: true
license: apache-2.0
short_description: A Unified Evaluation of LLMs for RTL Generation.
sdk_version: 5.19.0
---

## Quick Introduction

### Prerequisites

- **Python 3.11** or higher (required by the project)
- **[uv](https://docs.astral.sh/uv/getting-started/installation/)** for managing dependencies

#### Installing uv

On macOS and Linux:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Install the project dependencies:

```bash
uv sync
```

### Deploy Locally

```
$ uv run app.py
* Running on local URL: http://127.0.0.1:7860
* To create a public link, set `share=True` in `launch()`.
```

The leaderboard should now be running locally at http://127.0.0.1:7860.

### Add new models

If you are from outside of HPAI, you must directly modify the `results/results_icarus.json` and `results/results_verilator.json` files. If you are from HPAI, you can add your model to our shared `.csv` file of results and follow these steps:

1. Modify the `model_details` variable in `results/parse.py` to include a new entry for your model. For example, to include the classic GPT2 model, we would add the following metadata:

   ```python
   model_details = {
       ...
       "GPT2": (  # model name
           "https://huggingface.co/openai-community/gpt2",  # model url
           0.13,  # params (in B)
           "Coding",  # model type: `General`, `Coding`, `RTL-Specific`
           "V1",  # release of the TuRTLe Leaderboard
       ),
   }
   ```

2. Parse the CSV files into JSON, which is what the leaderboard takes as ground truth:

   ```
   $ uv run results/parse.py results/results_v3_mlcad_icarus.csv  # will generate results/results_v3_mlcad_icarus.json
   $ uv run results/parse.py results/results_v3_mlcad_verilator.csv  # will generate results/results_v3_mlcad_verilator.json
   ```

   The application is hardcoded to look for `results_icarus.json` and `results_verilator.json`. Rename the files you just created:

   ```
   $ mv results/results_v3_mlcad_icarus.json results/results_icarus.json
   $ mv results/results_v3_mlcad_verilator.json results/results_verilator.json
   ```

3. Compute the aggregated scores. This will generate the corresponding `aggregated_scores` files that the leaderboard uses for some of its views:

   ```
   $ uv run results/compute_agg_results.py results/results_v3_mlcad_icarus.csv
   $ uv run results/compute_agg_results.py results/results_v3_mlcad_verilator.csv
   ```

   This will create `aggregated_scores_v3_mlcad_icarus.csv` and `aggregated_scores_v3_mlcad_verilator.csv`. Rename them to what the application expects:

   ```
   $ mv results/aggregated_scores_v3_mlcad_icarus.csv results/aggregated_scores_icarus.csv
   $ mv results/aggregated_scores_v3_mlcad_verilator.csv results/aggregated_scores_verilator.csv
   ```

## License

This project is licensed under the Apache License 2.0. See the [LICENSE](./LICENSE) and [NOTICE](./NOTICE) files for more details.
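
## Creating a Public Link

The startup log in the "Deploy Locally" section mentions that a public link can be created by setting `share=True` in `launch()`. The snippet below is a minimal, generic Gradio sketch of how that flag is used; it is not the actual contents of this repo's `app.py`, and the demo contents are purely hypothetical.

```python
import gradio as gr

# Minimal, hypothetical Gradio app (not the repo's actual app.py).
with gr.Blocks() as demo:
    gr.Markdown("TuRTLe Leaderboard")

# share=True asks Gradio to create a temporary public URL
# in addition to the local http://127.0.0.1:7860 one.
demo.launch(share=True)
```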
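
## Sanity-Checking the Results Files

After following the steps in "Add new models", you can quickly confirm that the renamed JSON files parse and are non-empty before launching the app. This is a minimal sketch: it assumes only the file paths used above and that the files contain valid JSON. The exact schema emitted by `results/parse.py` is not documented here, so the snippet deliberately inspects the top-level structure rather than specific fields.

```python
import json
from pathlib import Path

# The two filenames the application is hardcoded to read.
for path in ("results/results_icarus.json", "results/results_verilator.json"):
    data = json.loads(Path(path).read_text())
    # Report the top-level container type and its size; whether your new
    # model shows up as a key or a list entry depends on parse.py's schema.
    size = len(data) if hasattr(data, "__len__") else "n/a"
    print(f"{path}: top-level {type(data).__name__} with {size} entries")
```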