backends/onnx

Handler file for choosing the correct version of ONNX Runtime based on the environment. Ideally, we could import the onnxruntime-web and onnxruntime-node packages only when needed, but dynamic imports don’t seem to work with the current webpack version and/or configuration. This is possibly due to the experimental nature of top-level await statements. So, we just import both packages and use the appropriate one based on the environment.

This module is not directly exported, but can be accessed through the environment variable `env.backends.onnx`:

```js
import { env } from '@xenova/transformers';
console.log(env.backends.onnx);
```

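The gist of that approach is sketched below with illustrative names (`ONNX_NODE`, `ONNX_WEB`, and `isNode` are assumptions for illustration, not the module’s actual internals):

```js
// A minimal sketch of the environment-based selection described above.
import * as ONNX_NODE from 'onnxruntime-node';
import * as ONNX_WEB from 'onnxruntime-web';

// Treat a runtime whose process.release.name is 'node' as Node.js;
// anything else falls back to the web (WASM/WebGPU) backend.
const isNode = typeof process !== 'undefined' && process?.release?.name === 'node';
const ONNX = isNode ? ONNX_NODE : ONNX_WEB;

export { ONNX };
```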
backends/onnx.deviceToExecutionProviders([device]) ⇒ <code> * </code>

Map a device to the execution providers to use when running inference on that device.

Kind: static method of backends/onnx
Returns: * - The execution providers to use for the given device.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [device] | `*` |  | (Optional) The device to run the inference on. |

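To illustrate, a mapping along these lines would translate a device string into a list of ONNX Runtime execution provider names. `'wasm'`, `'webgpu'`, and `'cpu'` are standard execution provider names, but the exact mapping below is an assumption, not the module’s actual logic:

```js
// Hypothetical sketch of a device → execution-provider mapping.
function deviceToExecutionProviders(device = null) {
  switch (device) {
    case 'webgpu':
      return ['webgpu']; // GPU in the browser via the WebGPU execution provider
    case 'cpu':
      return ['cpu'];    // native CPU execution provider (Node.js)
    default:
      return ['wasm'];   // assumed fallback: the WASM execution provider
  }
}

console.log(deviceToExecutionProviders('webgpu')); // ['webgpu']
```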

backends/onnx.createInferenceSession(buffer, session_options) ⇒ <code> * </code>

Create an ONNX inference session.

Kind: static method of backends/onnx
Returns: * - The ONNX inference session.

| Param | Type | Description |
| --- | --- | --- |
| buffer | `Uint8Array` | The ONNX model buffer. |
| session_options | `Object` | ONNX inference session options. |

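Since this is an internal helper, the sketch below shows the equivalent call made directly against onnxruntime-web; the model URL is a placeholder:

```js
import * as ort from 'onnxruntime-web';

// Fetch a model into a Uint8Array (placeholder URL).
const response = await fetch('https://example.com/model.onnx');
const buffer = new Uint8Array(await response.arrayBuffer());

// Create the inference session; `executionProviders` is a standard session option.
const session = await ort.InferenceSession.create(buffer, {
  executionProviders: ['wasm'],
});
console.log(session.inputNames);
```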

backends/onnx.isONNXTensor(x) ⇒ <code> boolean </code>

Check if an object is an ONNX tensor.

Kind: static method of backends/onnx
Returns: boolean - Whether the object is an ONNX tensor.

| Param | Type | Description |
| --- | --- | --- |
| x | `any` | The object to check. |

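A plausible implementation is a simple `instanceof` check against the runtime’s Tensor class; the sketch below assumes onnxruntime-web is the active backend:

```js
import * as ort from 'onnxruntime-web';

// Sketch: an ONNX tensor is an instance of the runtime's Tensor class.
const isONNXTensor = (x) => x instanceof ort.Tensor;

const t = new ort.Tensor('float32', new Float32Array([1, 2, 3, 4]), [2, 2]);
console.log(isONNXTensor(t));         // true
console.log(isONNXTensor([1, 2, 3])); // false
```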

backends/onnx.isONNXProxy() ⇒ <code> boolean </code>

Check if ONNX’s WASM backend is being proxied.

Kind: static method of backends/onnx
Returns: boolean - Whether ONNX’s WASM backend is being proxied.

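Proxying means onnxruntime-web runs its WASM backend in a worker thread rather than on the main thread. The flag this method reports can be toggled through the `env` object:

```js
import { env } from '@xenova/transformers';

// Ask onnxruntime-web to run its WASM backend in a worker thread.
env.backends.onnx.wasm.proxy = true;
```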

backends/onnx~defaultExecutionProviders : <code> * </code>

Kind: inner property of backends/onnx


backends/onnx~supportedExecutionProviders : <code> * </code>

Kind: inner constant of backends/onnx

