Corrupt download?

#1
by wise-time - opened

The download of the 8-bit version seems to stop at 31.7 GB. I have tried to complete it from two public Wi-Fi sites. I am on Linux Mint 20, using Waterfox and Firefox, all up to date. In other words, it won't complete the last ~500 MB. I have been trying to get this file for over 5 weeks, and it is a critical download that I will not be able to attempt again. I have paused and resumed multiple times; it just stalls and then fails, and retrying does exactly the same. I already have the 13B, and the 70B won't fit in 8-bit. For what it's worth, GPT-4 said, as a rough estimate, that a 30B model at 8-bit would be better than a 70B at 4-bit for my needs.


Worked for me. Might consider using wget for large files instead of a browser.
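
For anyone hitting the same wall: what `wget -c` does is send a ranged HTTP request so the transfer resumes from the bytes already on disk. Here is a minimal Python sketch of the same idea; the URL and local filename below are placeholders you would swap for the real ones:

```python
import os
import requests

url = "https://example.com/model.q8_0.bin"  # placeholder: direct link to the 8-bit file
dest = "model.q8_0.bin"                     # placeholder: local filename

# If a partial file is already on disk, ask the server to continue from that byte offset.
resume_from = os.path.getsize(dest) if os.path.exists(dest) else 0
headers = {"Range": f"bytes={resume_from}-"} if resume_from else {}

with requests.get(url, headers=headers, stream=True, timeout=60) as r:
    r.raise_for_status()
    # 206 Partial Content means the server honoured the Range header; append in that case.
    mode = "ab" if resume_from and r.status_code == 206 else "wb"
    with open(dest, mode) as f:
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            f.write(chunk)
```

Re-running this after a dropped connection picks up where it left off, the same way `wget -c <url>` would.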

Well, GPT-4 is pretty wrong here. First off, 30B is LLaMA 1 while 70B is Llama 2.

Llama 2 was trained on much more, and higher-quality, data than LLaMA 1.

Secondly, a quantized larger model will most likely perform better than an unquantized smaller model. GPTQ, AWQ, EXL2, and GGUF quants have also gotten higher quality by now.

You can look at this chart; it generally shows that a quantized high-parameter model is better than a low-parameter full-precision model.

[image.png: chart comparing quantized high-parameter models with full-precision low-parameter models; lower is better]

If you still really want to use the Platypus model, consider using the wget command or something similar, like nurb432 said.
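
Another option, since the file is hosted on Hugging Face, is the `huggingface_hub` library, whose downloader resumes interrupted transfers on its own. A small sketch, with the repo and filename as placeholders for the actual 8-bit file:

```python
from huggingface_hub import hf_hub_download

# Both repo_id and filename are placeholders -- substitute the repo and the
# exact 8-bit file you are trying to fetch.
local_path = hf_hub_download(
    repo_id="some-user/Platypus-30B-GGML",
    filename="platypus-30b.q8_0.bin",
)
print(local_path)
```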

Oh, OK, I was assuming it was incorrectly named and was actually Llama 2 based. In that case I won't be using it anyhow, and thanks for the tips. I think Platypus2 70B at 5/6-bit is best for 64 GB of RAM then. I'm guessing this is because there was no 30B Llama 2, which is hugely disappointing for users with 64 GB.
