A 2:4 sparse version of the LLaMA2-7B model, trained using the method from the AAAI 2025 paper "Pruning Large Language Models with Semi-Structural Adaptive Sparse Training".
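
As a minimal sketch, assuming the weights are released as a standard Hugging Face checkpoint (with the pruned positions stored as zeros in otherwise dense tensors), the model could be loaded like any other LLaMA2-7B checkpoint via the `transformers` library. The repository id below is a placeholder, not the actual model path.

```python
# Sketch only: "your-org/llama2-7b-2to4-sparse" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/llama2-7b-2to4-sparse"  # replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that loading this way treats the model as dense; realizing the speed and memory benefits of 2:4 sparsity requires hardware and kernels with semi-structured sparse support (e.g. NVIDIA sparse tensor cores).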