https://huggingface.co/nicoboss/DeepSeek-R1-Distill-Llama-70B-Uncensored

#580
by nicoboss - opened

A finetune of DeepSeek-R1-Distill-Llama-70B to make it uncensored.

Because I finetuned the model on StormPeak, I thought it might be a good idea to already put the source GGUF at /tmp/DeepSeek-R1-Distill-Llama-70B-Uncensored.gguf and /tmp/quant/DeepSeek-R1-Distill-Llama-70B-Uncensored.gguf, so you don't need to convert to GGUF and transfer it to nico1 to compute the imatrix. Maybe the model needs to be queued to nico1 for this to work.

yes, it needs to be queued to nico1 for the quant to be used and cleaned up.

for imatrix, it would always work to just put it into /tmp, and (I think) rsync would then transfer over the existing file, saving bandwidth.

for quant and imatrix jobs, putting it into /tmp/quant is enough, as transferring it from nico1 to nico1 is optimized into hardlinking it, i think.
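A quick sketch of the hardlink optimization mentioned above: on the same filesystem, a "transfer" can be a hardlink instead of a byte copy, so both names point at the same inode and no data is duplicated. The file names here are placeholders, not the actual job paths.

```shell
# Hypothetical demo: hardlinking instead of copying (same-host "transfer").
src=/tmp/demo-src.gguf
dst=/tmp/quant/demo-src.gguf

mkdir -p /tmp/quant
echo "fake gguf payload" > "$src"

# ln creates a second name for the same inode: instant, zero extra disk use.
ln -f "$src" "$dst"

# Both paths now report the same inode and a link count of 2.
stat -c '%i %h' "$src" "$dst"
```

This only works within one filesystem, which is presumably why the file has to land in the right directory (/tmp/quant) for the optimization to kick in.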

mradermacher changed discussion status to closed
