Handler file for choosing the correct version of ONNX Runtime, based on the environment. Ideally, we could import the `onnxruntime-web` and `onnxruntime-node` packages only when needed, but dynamic imports don't seem to work with the current webpack version and/or configuration. This is possibly due to the experimental nature of top-level await statements. So, we just import both packages, and use the appropriate one based on the environment:

- When running in node, we use `onnxruntime-node`.
- When running in the browser, we use `onnxruntime-web` (`onnxruntime-node` is not bundled).

This module is not directly exported, but can be accessed through the environment variables:

```javascript
import { env } from '@xenova/transformers';
console.log(env.backends.onnx);
```
- `.deviceToExecutionProviders([device])` ⇒ `*`
- `.createInferenceSession(buffer, session_options)` ⇒ `*`
- `.isONNXTensor(x)` ⇒ `boolean`
- `.isONNXProxy()` ⇒ `boolean`
Map a device to the execution providers to use for the given device.

Kind: static method of `backends/onnx`
Returns: `*` - The execution providers to use for the given device.

Param | Type | Default | Description |
---|---|---|---|
[device] | `*` | | (Optional) The device to run the inference on. |
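The mapping above can be sketched in plain JavaScript. This is a hypothetical illustration rather than the library's actual implementation: the device names and provider lists below are assumptions chosen for the sketch.

```javascript
// Hypothetical sketch of a device → execution-provider mapping.
// The device names and provider lists are illustrative assumptions,
// not the library's actual tables.
const supportedDevices = ['cpu', 'wasm', 'webgpu'];

function deviceToExecutionProviders(device = null) {
  // No device specified: fall back to the default (WASM) provider.
  if (device === null) return ['wasm'];
  if (!supportedDevices.includes(device)) {
    throw new Error(`Unsupported device: ${device}`);
  }
  // Prefer the requested provider, keeping 'wasm' as a fallback.
  return device === 'webgpu' ? ['webgpu', 'wasm'] : ['wasm'];
}
```

Returning a list (rather than a single provider) lets the runtime fall back to the next entry if the preferred backend fails to initialize.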
Create an ONNX inference session.

Kind: static method of `backends/onnx`
Returns: `*` - The ONNX inference session.

Param | Type | Description |
---|---|---|
buffer | `Uint8Array` | The ONNX model buffer. |
session_options | `Object` | ONNX inference session options. |
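A minimal sketch of how such a helper might forward the model buffer and options to a session factory. `InferenceSession.create(buffer, options)` is the real onnxruntime API shape, but here it is stubbed so the example runs without the `onnxruntime-node`/`onnxruntime-web` packages; the default provider choice is an assumption.

```javascript
// Sketch of an inference-session factory. A real implementation would call
// InferenceSession.create() from onnxruntime-node or onnxruntime-web;
// here a stub stands in so the example is self-contained.
const InferenceSession = {
  async create(buffer, options = {}) {
    if (!(buffer instanceof Uint8Array)) {
      throw new TypeError('Expected a Uint8Array model buffer');
    }
    return { byteLength: buffer.byteLength, options }; // stand-in session
  },
};

async function createInferenceSession(buffer, session_options = {}) {
  // Default to the WASM execution provider unless the caller overrides it.
  const executionProviders = session_options.executionProviders ?? ['wasm'];
  return InferenceSession.create(buffer, { ...session_options, executionProviders });
}
```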
Check if an object is an ONNX tensor.

Kind: static method of `backends/onnx`
Returns: `boolean` - Whether the object is an ONNX tensor.

Param | Type | Description |
---|---|---|
x | `any` | The object to check. |
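The check can be sketched as a duck-type test on the fields an ONNX Runtime tensor carries (`dims`, `type`, `data`). The actual library may instead use an `instanceof` check against the runtime's `Tensor` class; this standalone version is an approximation.

```javascript
// Duck-typed sketch: treat x as an ONNX tensor if it carries the
// fields an onnxruntime Tensor exposes (dims, type, data).
// The real check may be `x instanceof Tensor`; this is an approximation.
function isONNXTensor(x) {
  return (
    x !== null &&
    typeof x === 'object' &&
    Array.isArray(x.dims) &&
    typeof x.type === 'string' &&
    x.data !== undefined
  );
}
```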
Check if ONNX's WASM backend is being proxied.

Kind: static method of `backends/onnx`
Returns: `boolean` - Whether ONNX's WASM backend is being proxied.
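Since the module is reachable through `env.backends.onnx`, a check like this one plausibly reads the WASM backend's proxy flag. The exact flag location, and taking `env` as a parameter, are assumptions made so the sketch is self-contained (the real method takes no arguments).

```javascript
// Hypothetical sketch: report whether the WASM backend is proxied
// (i.e. offloaded to a worker) by reading env.backends.onnx.wasm.proxy.
// The flag's exact location is an assumption for illustration.
function isONNXProxy(env) {
  return env?.backends?.onnx?.wasm?.proxy === true;
}
```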