RichardErkhov committed 6f0327a (verified · parent 2942ff3): uploaded readme

Files changed (1): README.md added (+130, −0)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

CodeV-CL-7B - GGUF
- Model creator: https://huggingface.co/yang-z/
- Original model: https://huggingface.co/yang-z/CodeV-CL-7B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CodeV-CL-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q2_K.gguf) | Q2_K | 2.36GB |
| [CodeV-CL-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [CodeV-CL-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [CodeV-CL-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [CodeV-CL-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [CodeV-CL-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q3_K.gguf) | Q3_K | 3.07GB |
| [CodeV-CL-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [CodeV-CL-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [CodeV-CL-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [CodeV-CL-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q4_0.gguf) | Q4_0 | 3.56GB |
| [CodeV-CL-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [CodeV-CL-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [CodeV-CL-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q4_K.gguf) | Q4_K | 3.8GB |
| [CodeV-CL-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [CodeV-CL-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q4_1.gguf) | Q4_1 | 3.95GB |
| [CodeV-CL-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q5_0.gguf) | Q5_0 | 4.33GB |
| [CodeV-CL-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [CodeV-CL-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q5_K.gguf) | Q5_K | 4.45GB |
| [CodeV-CL-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [CodeV-CL-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q5_1.gguf) | Q5_1 | 4.72GB |
| [CodeV-CL-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q6_K.gguf) | Q6_K | 5.15GB |
| [CodeV-CL-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/yang-z_-_CodeV-CL-7B-gguf/blob/main/CodeV-CL-7B.Q8_0.gguf) | Q8_0 | 6.67GB |
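
These files work with any GGUF-compatible runtime. As a minimal sketch of loading one of them, assuming the `llama-cpp-python` and `huggingface_hub` packages are installed (the repo id and filename come from the table above; the prompt is only illustrative):

```python
from llama_cpp import Llama

# Download one of the quantized files from this repo and load it.
# Q4_K_M is a common balance of size and quality; any filename from
# the table above works.
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/yang-z_-_CodeV-CL-7B-gguf",
    filename="CodeV-CL-7B.Q4_K_M.gguf",
    n_ctx=2048,  # context window
)

# Illustrative prompt; see the original model card below for usage.
output = llm("Write a Verilog module for a 4-bit up counter.", max_tokens=512)
print(output["choices"][0]["text"])
```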

Original model description:
---
license: llama2
library_name: transformers
pipeline_tag: text-generation
tags:
- code
---
<div align="center">
<img src="./assets/logo.png" style="zoom:25%;" />
</div>

# CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization

<img src="assets/overview.png" style="zoom:50%;" />

CodeV is a series of open-source, instruction-tuned large language models (LLMs) designed to generate high-quality Verilog code, addressing the challenges existing models face in this domain. **(This repo is under development.)**

## Models and Datasets

| Size | Base Model | CodeV |
| ---- | ---------- | ----- |
| 6.7B | [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | [yang-z/CodeV-DS-6.7B](https://huggingface.co/yang-z/CodeV-DS-6.7B) |
| 7B | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [yang-z/CodeV-CL-7B](https://huggingface.co/yang-z/CodeV-CL-7B) |
| 7B | [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) | [yang-z/CodeV-QW-7B](https://huggingface.co/yang-z/CodeV-QW-7B) |

## Test

To evaluate the Verilog generation capability of these models, install the [VerilogEval](https://github.com/NVlabs/verilog-eval) and [RTLLM](https://github.com/hkust-zhiyao/rtllm) benchmark environments.

## Quick Start

```python
from transformers import pipeline
import torch

# Load any CodeV variant from the table above; CodeV-CL-7B is used here.
generator = pipeline(
    task="text-generation",
    model="yang-z/CodeV-CL-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "FILL IN THE QUESTION"

# Greedy decoding: recent transformers releases reject temperature=0.0,
# so do_sample=False is the equivalent deterministic setting.
result = generator(prompt, max_length=2048, num_return_sequences=1, do_sample=False)
response = result[0]["generated_text"]
print("Response:", response)
```
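
The pipeline echoes the prompt back as part of `generated_text`. Continuing from the snippet above, the standard `transformers` text-generation options `return_full_text=False` and `max_new_tokens` return only the model's continuation (a small sketch, not specific to CodeV):

```python
# Return only the newly generated tokens, not the prompt itself.
result = generator(
    prompt,
    max_new_tokens=1024,
    do_sample=False,
    return_full_text=False,
)
print("Response:", result[0]["generated_text"])
```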

## Paper

**arXiv:** <https://arxiv.org/abs/2407.10424>

Please cite the paper if you use the models from CodeV.

```bibtex
@misc{yang-z,
  title={CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization},
  author={Yang Zhao and Di Huang and Chongxiao Li and Pengwei Jin and Ziyuan Nan and Tianyun Ma and Lei Qi and Yansong Pan and Zhenxing Zhang and Rui Zhang and Xishan Zhang and Zidong Du and Qi Guo and Xing Hu and Yunji Chen},
  year={2024},
  eprint={2407.10424},
  archivePrefix={arXiv},
  primaryClass={cs.PL},
  url={https://arxiv.org/abs/2407.10424},
}
```

## Acknowledgements

* [Magicoder](https://github.com/ise-uiuc/magicoder): Training code, original datasets, and data decontamination
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for CodeV-DeepSeek
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for CodeV-CodeLlama
* [CodeQwen](https://github.com/QwenLM/CodeQwen1.5): Base model for CodeV-CodeQwen