---
library_name: transformers.js
base_model: jmtzt/ijepa_vith16_1k
---

This repository contains https://huggingface.co/jmtzt/ijepa_vith16_1k with ONNX weights, making it compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```

**Example:** Image feature extraction with `onnx-community/ijepa_vith16_1k`.

```js
import { pipeline, cos_sim } from "@huggingface/transformers";

// Create an image feature extraction pipeline
const extractor = await pipeline(
  "image-feature-extraction",
  "onnx-community/ijepa_vith16_1k",
  { dtype: "q8" },
);

// Compute image embeddings
const url_1 = "http://images.cocodataset.org/val2017/000000039769.jpg";
const url_2 = "http://images.cocodataset.org/val2017/000000219578.jpg";
const output = await extractor([url_1, url_2]);
const pooled_output = output.mean(1); // Apply mean pooling

// Compute cosine similarity
const similarity = cos_sim(pooled_output[0].data, pooled_output[1].data);
console.log(similarity); // 0.5334921616321957
```

---

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
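For reference, a minimal sketch of such a conversion using the 🤗 Optimum CLI (the exact output layout, and whether extra export options are needed for your model, may vary):

```bash
# Install Optimum with its ONNX exporter extras
pip install "optimum[exporters]"

# Export the PyTorch checkpoint to ONNX; move the resulting .onnx files
# into an `onnx/` subfolder of your model repo before uploading
optimum-cli export onnx --model jmtzt/ijepa_vith16_1k ijepa_vith16_1k_onnx/
```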