Model Card for LLaMA2-7B_2-by-4_Sparse
This repo contains a 2:4 sparse version of the LLaMA2-7B model, trained with the methods from the AAAI 2025 paper Pruning Large Language Models with Semi-Structural Adaptive Sparse Training.
Model Description
The model has the same architecture as LLaMA2-7B, but the weights of the linear layers conform to the 2:4 sparsity pattern: in every contiguous group of four weights, at most two are non-zero.
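The 2:4 pattern can be verified directly on the downloaded weights. Below is a minimal sketch, assuming the `torch` and `transformers` packages; the repo id comes from this card, and `check_2_to_4` is a hypothetical helper name used only for illustration.

```python
# Minimal sketch: load the model and verify the 2:4 sparsity pattern
# in every linear layer. Assumes torch + transformers are installed;
# check_2_to_4 is an illustrative helper, not part of the repo.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Yellowtree/LLaMA2-7B_2-by-4_Sparse", torch_dtype=torch.float16
)

def check_2_to_4(weight: torch.Tensor) -> bool:
    # Reshape the weight into groups of four consecutive values and
    # count non-zeros per group; 2:4 sparsity means <= 2 per group.
    groups = weight.reshape(-1, 4)
    return bool((groups != 0).sum(dim=1).le(2).all())

for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        print(name, check_2_to_4(module.weight))
```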
Base model
meta-llama/Llama-2-7b-hf