---
base_model: ibm-granite/granite-embedding-30m-english
language:
- en
library_name: model2vec
license: mit
model_name: granite-embedding-english
tags:
- embeddings
- static-embeddings
- sentence-transformers
---

# granite-embedding-english Model Card

This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the [ibm-granite/granite-embedding-30m-english](https://huggingface.co/ibm-granite/granite-embedding-30m-english) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical.

Model2Vec models are the smallest, fastest, and most performant static embedders available. The distilled models are up to 50 times smaller and 500 times faster than traditional Sentence Transformers.

## Installation

Install model2vec using pip:

```bash
pip install model2vec
```

## Usage

### Using Model2Vec

The [Model2Vec library](https://github.com/MinishLab/model2vec) is the fastest and most lightweight way to run Model2Vec models. Load this model using the `from_pretrained` method:

```python
from model2vec import StaticModel

# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("cnmoro/granite-30m-distilled")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```

### Using Sentence Transformers

You can also use the [Sentence Transformers library](https://github.com/UKPLab/sentence-transformers) to load and use the model:

```python
from sentence_transformers import SentenceTransformer

# Load a pretrained Sentence Transformer model
model = SentenceTransformer("cnmoro/granite-30m-distilled")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
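Because static embeddings are plain fixed-size vectors, downstream tasks such as semantic search reduce to simple vector math. The sketch below (an illustration, not part of this model's API: it uses NumPy only, with toy vectors standing in for the output of `model.encode`) ranks candidate documents by cosine similarity to a query embedding:

```python
import numpy as np

# Toy stand-ins for model.encode(...) output; real embeddings from this
# model would be float vectors of a fixed, larger dimension.
query_emb = np.array([0.1, 0.9, 0.2])
doc_embs = np.array([
    [0.1, 0.8, 0.3],  # close to the query
    [0.9, 0.1, 0.0],  # far from the query
])

def cosine_sim(query, docs):
    # Cosine similarity = dot product of L2-normalized vectors
    query = query / np.linalg.norm(query)
    docs = docs / np.linalg.norm(docs, axis=-1, keepdims=True)
    return docs @ query

scores = cosine_sim(query_emb, doc_embs)
best = int(np.argmax(scores))  # index of the most similar document
```

In a real pipeline, `query_emb` and `doc_embs` would come from `model.encode(...)` calls on the query string and the document collection, respectively.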