---
base_model: ibm-granite/granite-embedding-30m-english
language:
- en
library_name: model2vec
license: mit
model_name: granite-embedding-english
tags:
- embeddings
- static-embeddings
- sentence-transformers
---

# granite-embedding-english Model Card

This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the [ibm-granite/granite-embedding-30m-english](https://huggingface.co/ibm-granite/granite-embedding-30m-english) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. Model2Vec models are the smallest, fastest, and most performant static embedders available. The distilled models are up to 50 times smaller and 500 times faster than traditional Sentence Transformers.
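The speed comes from the static design: each token has a fixed, precomputed vector, and a sentence embedding is essentially an average of those vectors rather than a transformer forward pass. A minimal conceptual sketch of that idea, using hypothetical toy vectors (not this model's actual vocabulary or dimensionality):

```python
import numpy as np

# Hypothetical 4-dimensional token vectors for illustration only.
token_vectors = {
    "example": np.array([0.1, 0.3, -0.2, 0.5]),
    "sentence": np.array([0.4, -0.1, 0.2, 0.0]),
}

def embed(tokens: list[str]) -> np.ndarray:
    # A static embedder performs no forward pass: it just averages
    # the precomputed vectors of the tokens in the input.
    return np.mean([token_vectors[t] for t in tokens], axis=0)

print(embed(["example", "sentence"]))
```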


## Installation

Install model2vec using pip:
```bash
pip install model2vec
```

## Usage

### Using Model2Vec

The [Model2Vec library](https://github.com/MinishLab/model2vec) is the fastest and most lightweight way to run Model2Vec models.

Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel

# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("cnmoro/granite-30m-distilled")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
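The returned embeddings can be used directly for similarity search. A short follow-up sketch, assuming `encode` returns a NumPy array (as it does in current Model2Vec releases) and using plain cosine similarity:

```python
import numpy as np
from model2vec import StaticModel

model = StaticModel.from_pretrained("cnmoro/granite-30m-distilled")

sentences = [
    "The cat sits on the mat",
    "Felines rest on rugs",
    "Stocks fell sharply today",
]
embeddings = model.encode(sentences)

# Cosine similarity: L2-normalize the rows, then take pairwise dot products.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
print(normed @ normed.T)
```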

### Using Sentence Transformers

You can also use the [Sentence Transformers library](https://github.com/UKPLab/sentence-transformers) to load and use the model:

```python
from sentence_transformers import SentenceTransformer

# Load a pretrained Sentence Transformer model
model = SentenceTransformer("cnmoro/granite-30m-distilled")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
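
As with any Sentence Transformers model, the embeddings can be compared with the library's cosine-similarity helper. A brief example, assuming a sentence-transformers version that provides `util.cos_sim`:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("cnmoro/granite-30m-distilled")

queries = ["How fast are static embeddings?"]
documents = [
    "Static embeddings skip the transformer forward pass.",
    "The weather is sunny today.",
]

query_emb = model.encode(queries)
doc_emb = model.encode(documents)

# Cosine similarity between each query and each document.
print(util.cos_sim(query_emb, doc_emb))
```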