Shears Model Card: shears-mpt-7b-50-base

The sparsified MPT-7B model with 50% unstructured sparsity, used as the base model in Shears.
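As a usage sketch (not an official snippet from the Shears repository), the base model can presumably be loaded with the Hugging Face transformers API; MPT-based checkpoints rely on custom modeling code, so trust_remote_code=True is assumed, and the prompt below is purely illustrative.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IntelLabs/shears-mpt-7b-50-base"

# Load tokenizer and model; the checkpoint is stored as FP16 safetensors.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

# Simple generation check.
inputs = tokenizer("Shears searches for low-rank adapters on top of a sparsified base model.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))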

Model Sources

Repository: https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears

Paper: Shears: Unstructured Sparsity with Neural Low-rank Adapter Search (https://aclanthology.org/2024.naacl-industry.34)

Citation

@inproceedings{munoz-etal-2024-shears,
    title = "Shears: Unstructured Sparsity with Neural Low-rank Adapter Search",
    author = "Mu{\~n}oz, J. Pablo  and
      Yuan, Jinjie  and
      Jain, Nilesh",
    editor = "Yang, Yi  and
      Davani, Aida  and
      Sil, Avi  and
      Kumar, Anoop",
    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.naacl-industry.34",
    doi = "10.18653/v1/2024.naacl-industry.34",
    pages = "395--405",
}

Acknowledgement

Thanks to Wanda (paper, code), which provides a simple but effective pruning approach.

License

Apache-2.0

Model size: 6.65B params
Tensor type: FP16 (Safetensors)
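A minimal sketch, again assuming the standard transformers API, for checking the parameter count and the fraction of zero-valued weights. Note that the 50% sparsity target is typically applied to the transformer weight matrices rather than to embeddings, so the overall zero fraction across all parameters may come out somewhat lower.

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "IntelLabs/shears-mpt-7b-50-base",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

# Count total parameters and those zeroed out by pruning.
total = 0
zeros = 0
for name, p in model.named_parameters():
    total += p.numel()
    zeros += (p == 0).sum().item()

print(f"parameters: {total / 1e9:.2f}B")
print(f"zero-valued weights: {zeros / total:.2%}")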