MaziyarPanahi / Ultra-Smaug-125B-v0.1-GGUF
Tags: Text Generation · Transformers · Safetensors · mistral · quantized · 2-bit · 3-bit · 4-bit precision · 5-bit · 6-bit · 8-bit precision · GGUF · llama · Inference Endpoints · text-generation-inference
License: agpl-3.0
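The tags above indicate GGUF quantizations (2-bit through 8-bit) of a Mistral-architecture model, so a llama.cpp-compatible runtime should be able to load a merged quant file. Below is a minimal sketch using llama-cpp-python; the local file path, context size, and GPU settings are illustrative assumptions, not values documented by this repository.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the split parts
# have already been merged into a single local GGUF file (see the join example
# after the file listing below). Path and settings are hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./Ultra-Smaug-125B-v0.1.Q4_K_M.gguf",  # merged quant file (assumed local path)
    n_ctx=4096,       # context window; adjust to available memory
    n_gpu_layers=0,   # >0 offloads layers to GPU if the wheel was built with GPU support
)

out = llm("Summarize what a GGUF quantization is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```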
Files and versions (branch: main) · 1 contributor · History: 10 commits
Latest commit: 78894b9 (verified) by MaziyarPanahi, "Upload config.json", about 1 year ago
File                                          Size       LFS   Last commit message                         Updated
.gitattributes                                2.62 kB          Create .gitattributes                       about 1 year ago
README.md                                     11 kB            Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q3_K_L.gguf-split-ab    13.5 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q3_K_M.gguf-split-aa    32.2 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q3_K_M.gguf-split-ab    28.3 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q3_K_S.gguf-split-aa    32.2 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q3_K_S.gguf-split-ab    21.8 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q4_K_M.gguf-split-aa    32.2 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q4_K_M.gguf-split-ab    32.2 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q4_K_M.gguf-split-ac    10.9 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q4_K_S.gguf-split-aa    32.2 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q4_K_S.gguf-split-ab    32.2 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
Ultra-Smaug-125B-v0.1.Q4_K_S.gguf-split-ac    6.48 GB    LFS   Upload folder using huggingface_hub (#2)    about 1 year ago
config.json                                   31 Bytes         Upload config.json                          about 1 year ago
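The larger quants are stored as sequentially named parts (`-split-aa`, `-split-ab`, `-split-ac`). Judging by the naming, these look like raw byte splits that must be concatenated in order into a single `.gguf` file before loading; that merge step is an assumption based on the file names, so check the README above for the repository's own instructions. The sketch below downloads the Q4_K_M parts with huggingface_hub and joins them locally.

```python
# Minimal sketch, assuming the *-split-aa/-ab/-ac files are plain byte splits
# that must be concatenated in order before use. Filenames are taken from the
# listing above; the repo_id comes from the page header.
import shutil
from huggingface_hub import hf_hub_download

repo_id = "MaziyarPanahi/Ultra-Smaug-125B-v0.1-GGUF"
parts = [
    "Ultra-Smaug-125B-v0.1.Q4_K_M.gguf-split-aa",
    "Ultra-Smaug-125B-v0.1.Q4_K_M.gguf-split-ab",
    "Ultra-Smaug-125B-v0.1.Q4_K_M.gguf-split-ac",
]

# Download each part, then append them in order into one GGUF file.
with open("Ultra-Smaug-125B-v0.1.Q4_K_M.gguf", "wb") as merged:
    for name in parts:
        local_path = hf_hub_download(repo_id=repo_id, filename=name)
        with open(local_path, "rb") as part:
            shutil.copyfileobj(part, merged)
```

On a Unix shell, `cat Ultra-Smaug-125B-v0.1.Q4_K_M.gguf-split-* > Ultra-Smaug-125B-v0.1.Q4_K_M.gguf` would perform the same join, under the same assumption about how the parts were produced.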