For those trying to shoehorn this large model onto your machine, every GB of saved memory counts when offloading to system RAM!

Here is the 22.2-billion-parameter model pruned down by 2 redundant layers to 21.5B parameters, with no apparent loss in quality.

https://huggingface.co/mistralai/Codestral-22B-v0.1
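A minimal sketch of the layer-pruning idea, assuming a Mistral-style decoder stack where layers live in an `nn.ModuleList` (the `TinyDecoderStack`, `prune_layers`, and the chosen indices here are illustrative stand-ins, not the actual procedure used for this model):

```python
import torch.nn as nn

# Hypothetical stand-in for a decoder stack; a real Mistral-style model
# exposes its blocks as model.model.layers (an nn.ModuleList).
class TinyDecoderStack(nn.Module):
    def __init__(self, num_layers=8, dim=16):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))

    def forward(self, x):
        for layer in self.layers:
            x = x + layer(x)  # residual connection, as in a transformer block
        return x

def prune_layers(model, drop_indices):
    """Drop the decoder layers at drop_indices, e.g. layers whose input and
    output activations are nearly identical and so contribute little."""
    drop = set(drop_indices)
    model.layers = nn.ModuleList(
        layer for i, layer in enumerate(model.layers) if i not in drop
    )
    return model

model = TinyDecoderStack(num_layers=8)
prune_layers(model, drop_indices=[5, 6])
print(len(model.layers))  # 6 layers remain
```

Because each block is residual, removing a block whose contribution is near zero leaves the forward pass largely unchanged, which is why dropping two such layers can shave memory without a visible quality hit.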

Safetensors · Model size: 21.5B params · Tensor type: FP16

Model tree for TroyDoesAI/Codestral-21B-Pruned: 2 quantized models available.