By default, Transformers.js uses hosted pretrained models and precompiled WASM binaries, which should work out-of-the-box. You can customize this as follows:
import { env } from '@huggingface/transformers';
// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';
// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;
// Set location of .wasm files. Defaults to use a CDN.
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
For a full list of available settings, check out the API Reference.
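For instance, a fully self-hosted setup can combine these settings before any pipelines are created. Below is a minimal sketch; the paths and the model name 'my-model' are placeholders for illustration, not defaults:

import { env, pipeline } from '@huggingface/transformers';

// Apply the settings before creating any pipelines, so model files and
// .wasm binaries are resolved from the self-hosted locations.
env.allowRemoteModels = false;
env.localModelPath = '/models/';
env.backends.onnx.wasm.wasmPaths = '/wasm/';

// 'my-model' is a placeholder; its files are expected under /models/my-model/.
const extractor = await pipeline('feature-extraction', 'my-model');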
We recommend using our conversion script to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses 🤗 Optimum to perform conversion and quantization of your model.
python -m scripts.convert --quantize --model_id <model_name_or_path>
For example, convert and quantize bert-base-uncased using:
python -m scripts.convert --quantize --model_id bert-base-uncased
This will save the following files to ./models/:
bert-base-uncased/
├── config.json
├── tokenizer.json
├── tokenizer_config.json
└── onnx/
    ├── model.onnx
    └── model_quantized.onnx
For the full list of supported architectures, see the Optimum documentation.
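Once converted, the model can be loaded locally by its folder name. A minimal sketch, assuming the output above was copied into the application's ./models/ directory and remote loading is disabled:

import { env, pipeline } from '@huggingface/transformers';

// Assumes the conversion output shown above lives under ./models/.
env.localModelPath = './models/';
env.allowRemoteModels = false;

// The folder name ('bert-base-uncased') doubles as the model id.
const unmasker = await pipeline('fill-mask', 'bert-base-uncased');
const output = await unmasker('The goal of life is [MASK].');
console.log(output);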