# Mixtral-8x7B-Instruct-v0.1-FP8-v1
Weights and activations are per-tensor quantized to float8_e4m3.
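For context, per-tensor float8_e4m3 quantization keeps a single scale per tensor, chosen from the tensor's absolute maximum so that values fit within the format's roughly ±448 range. A minimal PyTorch sketch of the idea (function and variable names here are illustrative, not taken from this repo or from AutoFP8):

```python
import torch

def per_tensor_fp8_quantize(x: torch.Tensor):
    """Quantize a tensor to float8_e4m3 with a single (per-tensor) scale.

    Illustrative only; not the code used to produce this checkpoint.
    """
    finfo = torch.finfo(torch.float8_e4m3fn)                  # max representable value is 448
    scale = x.abs().max().float().clamp(min=1e-12) / finfo.max
    x_q = (x.float() / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return x_q, scale                                         # dequantize: x_q.float() * scale

# Example: quantize a BF16 weight matrix and reconstruct it.
w = torch.randn(4096, 4096, dtype=torch.bfloat16)
w_fp8, w_scale = per_tensor_fp8_quantize(w)
w_rec = (w_fp8.float() * w_scale).to(torch.bfloat16)
```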
Quantized with AutoFP8 using the following calibration setup (a sketch of the workflow is given after the list):

- Calibration dataset: Ultrachat (mgoin/ultrachat_2k)
- Samples: 1024
- Sequence length: 8192
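Below is a rough sketch of the AutoFP8 workflow described above, based on AutoFP8's documented `from_pretrained` / `quantize` / `save_quantized` API. The exact script, the dataset split name, the chat-template preprocessing, and the use of the static activation scheme are assumptions, not details taken from this card:

```python
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model = "mistralai/Mixtral-8x7B-Instruct-v0.1"
output_dir = "Mixtral-8x7B-Instruct-v0.1-FP8-v1"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model)
tokenizer.pad_token = tokenizer.eos_token

# 1024 calibration samples from mgoin/ultrachat_2k, formatted with the chat
# template and capped at 8192 tokens (split name and preprocessing are assumptions).
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft").select(range(1024))
texts = [tokenizer.apply_chat_template(msgs, tokenize=False) for msgs in ds["messages"]]
examples = tokenizer(
    texts, padding=True, truncation=True, max_length=8192, return_tensors="pt"
).to("cuda")

# Per-tensor FP8 (E4M3) for weights and activations; static activation scales
# are computed during the calibration pass.
quantize_config = BaseQuantizeConfig(quant_method="fp8", activation_scheme="static")

model = AutoFP8ForCausalLM.from_pretrained(pretrained_model, quantize_config=quantize_config)
model.quantize(examples)
model.save_quantized(output_dir)
```

A static activation scheme computes fixed activation scales from the calibration pass, which is why a calibration dataset, sample count, and sequence length are listed above.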
## Evaluation
TBA