patrickbdevaney/deepseek-r1-qwen-7b-q6-exl2
Tags: Text Generation · Transformers · Safetensors · qwen2 · conversational · text-generation-inference · Inference Endpoints · 6-bit · exl2
Paper: arXiv:2501.12948
License: MIT
Files and versions (branch: main)
1 contributor · History: 3 commits
Latest commit: ee4b4a9 (verified) by patrickbdevaney, "Update README.md", 11 days ago
File                           Size        Last commit message                     Updated
.gitattributes                 1.52 kB     initial commit                          11 days ago
LICENSE                        1.06 kB     Q6 should work on any rtx Nvidia gpu    11 days ago
README.md                      19.2 kB     Update README.md                        11 days ago
config.json                    997 Bytes   Q6 should work on any rtx Nvidia gpu    11 days ago
generation_config.json         181 Bytes   Q6 should work on any rtx Nvidia gpu    11 days ago
model.safetensors.index.json   28.1 kB     Q6 should work on any rtx Nvidia gpu    11 days ago
output.safetensors (LFS)       6.42 GB     Q6 should work on any rtx Nvidia gpu    11 days ago
tokenizer.json                 7.03 MB     Q6 should work on any rtx Nvidia gpu    11 days ago
tokenizer_config.json          3.06 kB     Q6 should work on any rtx Nvidia gpu    11 days ago

All files are flagged "Safe" by the repository's file scanner.
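
The file list above is everything an EXL2 runtime needs: the 6-bit quantized weights (output.safetensors), the Qwen2 tokenizer files, and the model and generation configs. Below is a minimal loading sketch, assuming the exllamav2 Python package and a local download of this repository; the local path, sampling settings, and prompt are placeholders, not part of the repo.

```python
# Minimal sketch: load this 6-bit EXL2 quant with the exllamav2 runtime.
# Assumes the repository has been downloaded locally; the model path,
# sampling settings, and prompt below are illustrative placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./deepseek-r1-qwen-7b-q6-exl2"  # local clone of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the KV cache as layers load
model.load_autosplit(cache)                # split layers across available GPU VRAM

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.6
settings.top_p = 0.95

prompt = "Explain the difference between supervised and reinforcement learning."
output = generator.generate_simple(prompt, settings, num_tokens=256)
print(output)
```

With roughly 6.4 GB of weights, the Q6 quant lines up with the commit note that it should run on RTX GPUs; in practice that likely means a card with at least 8 GB of VRAM once the KV cache and runtime overhead are included.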