jonasaise/mixtral-8x7b-lora-instruct-swe-v2
license: apache-2.0
datasets:
  - jeremyc/Alpaca-Lora-GPT4-Swedish
language:
  - sv

Fine-tuned LoRA adapters merged into Mixtral-8x7B-Instruct-v0.1, trained on Swedish instruction data.

You will likely need the tokenizer and tokenizer_config.json from the original model for this model to load properly.
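
The snippet below is a minimal loading sketch, not an official recipe from this repo: it assumes the standard Transformers `AutoModelForCausalLM`/`AutoTokenizer` API, pulls the tokenizer from mistralai/Mixtral-8x7B-Instruct-v0.1 as suggested above, and uses an illustrative Swedish prompt in the Mixtral-Instruct `[INST]` format. Adjust dtype, quantization, and generation settings to your hardware.

```python
# Minimal loading sketch (assumptions: standard Transformers layout; tokenizer
# taken from the original base model because it may be missing from this repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jonasaise/mixtral-8x7b-lora-instruct-swe-v2"
base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # original model, as noted above

# Load the tokenizer from the base model.
tokenizer = AutoTokenizer.from_pretrained(base_id)

# device_map="auto" spreads the weights across available GPUs/CPU memory.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Example Swedish instruction in the Mixtral-Instruct prompt format.
prompt = "[INST] Skriv en kort dikt om hösten. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```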