Update README.md
README.md CHANGED
@@ -390,7 +390,7 @@ pip install -r requirements.txt

<h3 id="2-2">2.2 Pretraining model weight acquisition and restoration</h3>

- ❗❗❗ Note that in terms of hardware, performing step `2.2`, which involves merging LLaMA-13B with ZhiXi-13B-Diff, requires approximately **100GB** of RAM, with no demand for VRAM (this is due to the memory overhead caused by our merging strategy).
+ ❗❗❗ Note that, in terms of hardware, step `2.2` (merging LLaMA-13B with ZhiXi-13B-Diff) requires approximately **100GB** of RAM and no VRAM, due to the memory overhead of our merging strategy. For your convenience, we have provided fp16 weights at https://huggingface.co/zjunlp/zhixi-13b-diff-fp16; **fp16 weights require less memory but may slightly impact performance**. We will improve our merging approach in future updates, and we are currently developing a 7B model as well, so stay tuned. For step `2.4`, inference with `ZhiXi`, a minimum of **26GB** of VRAM is required.
**1. Download LLaMA 13B and ZhiXi-13B-Diff**
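The weight restoration in step `2.2` amounts to adding the released diff tensors back onto the base LLaMA weights. A minimal sketch of that idea is below; the function name and the element-wise additive-diff assumption are hypothetical, and the repo's official merge script is the authoritative procedure:

```python
# Illustrative sketch only: restoring full weights from a base model plus
# released diff weights. `merge_diff` and the additive-diff assumption are
# hypothetical; use the repo's official conversion script in practice.
import torch


def merge_diff(base_state: dict, diff_state: dict) -> dict:
    """Restore full weights by adding diff tensors onto base tensors."""
    merged = {}
    for name, base_tensor in base_state.items():
        # Accumulate in fp32 to limit rounding error, then cast back to the
        # original dtype (fp16 diffs halve the RAM needed, at a small cost
        # in precision, which matches the README's fp16 caveat).
        merged[name] = (
            base_tensor.float() + diff_state[name].float()
        ).to(base_tensor.dtype)
    return merged
```

Holding both full state dicts (plus intermediates) in memory at once is what drives the ~100GB RAM figure for a 13B model, and why the merge needs no GPU.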