# backends/onnx

Handler file for choosing the correct version of ONNX Runtime, based on the environment.
Ideally, we could import the onnxruntime-web and onnxruntime-node packages only when needed,
but dynamic imports don’t seem to work with the current webpack version and/or configuration.
This is possibly due to the experimental nature of top-level await statements.
So, we just import both packages, and use the appropriate one based on the environment:
* When running in node, we use `onnxruntime-node`.
* When running in the browser, we use `onnxruntime-web` (`onnxruntime-node` is not bundled).

This module is not directly exported, but can be accessed through the environment variables:

```javascript
import { env } from '@huggingface/transformers';
console.log(env.backends.onnx);
```

* backends/onnx
    * _static_
        * .deviceToExecutionProviders([device]) ⇒ `Array.<ONNXExecutionProviders>`
        * .createInferenceSession(buffer, session_options, session_config) ⇒ `*`
        * .isONNXTensor(x) ⇒ `boolean`
        * .isONNXProxy() ⇒ `boolean`
    * _inner_
        * ~defaultDevices : `Array.<ONNXExecutionProviders>`
        * ~wasmInitPromise : `Promise<any> | null`
        * ~DEVICE_TO_EXECUTION_PROVIDER_MAPPING : `*`
        * ~supportedDevices : `*`
        * ~ONNX_ENV : `*`
        * ~ONNXExecutionProviders : `*`

### backends/onnx.deviceToExecutionProviders([device]) ⇒ `Array.<ONNXExecutionProviders>`
Map a device to the execution providers to use for the given device.
**Kind**: static method of `backends/onnx`
**Returns**: `Array.<ONNXExecutionProviders>` - The execution providers to use for the given device.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| [device] | `*` | | (Optional) The device to run the inference on. |
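The lookup can be sketched as a plain mapping table with a fallback to the default devices. Note that the device and provider names below are illustrative assumptions, not the library's actual tables:

```javascript
// Illustrative device → execution-provider mapping.
// These entries are assumptions for the sketch, not the module's real values.
const DEVICE_TO_EXECUTION_PROVIDER_MAPPING = {
    cpu: ['cpu'],
    wasm: ['wasm'],
    webgpu: ['webgpu'],
};

// Assumed fallback used when no device is specified.
const defaultDevices = ['wasm'];

function deviceToExecutionProviders(device = null) {
    // No device specified: fall back to the defaults.
    if (device === null) return defaultDevices;

    const providers = DEVICE_TO_EXECUTION_PROVIDER_MAPPING[device];
    if (!providers) {
        throw new Error(`Unsupported device: "${device}"`);
    }
    return providers;
}
```

For example, `deviceToExecutionProviders('cpu')` returns `['cpu']`, while calling it with no argument falls back to the default device list.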
### backends/onnx.createInferenceSession(buffer, session_options, session_config) ⇒ `*`
Create an ONNX inference session.

**Kind**: static method of `backends/onnx`
**Returns**: `*` - The ONNX inference session.

| Param | Type | Description |
| --- | --- | --- |
| buffer | `Uint8Array` | The ONNX model buffer. |
| session_options | `*` | ONNX inference session options. |
| session_config | `Object` | ONNX inference session configuration. |
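A minimal sketch of what such a wrapper can look like, assuming the selected runtime (web or node) exposes the standard `InferenceSession.create` factory. The `ort` object below is a stand-in stub so the sketch is self-contained; it is not the library's actual code:

```javascript
// Stand-in for whichever ONNX Runtime build was selected (web or node).
// In the real module this is the imported onnxruntime package.
const ort = {
    InferenceSession: {
        // Stub: the real factory parses the model buffer and builds a session.
        create: async (buffer, options) => ({ buffer, options }),
    },
};

async function createInferenceSession(buffer, session_options, session_config = {}) {
    const session = await ort.InferenceSession.create(buffer, session_options);
    // Attach the caller-supplied configuration so downstream code can inspect it.
    session.config = session_config;
    return session;
}
```

Keeping `session_options` (passed to the runtime) separate from `session_config` (kept for the caller) mirrors the two-parameter signature documented above.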
### backends/onnx.isONNXTensor(x) ⇒ `boolean`
Check if an object is an ONNX tensor.

**Kind**: static method of `backends/onnx`
**Returns**: `boolean` - Whether the object is an ONNX tensor.

| Param | Type | Description |
| --- | --- | --- |
| x | `any` | The object to check. |
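The real module presumably checks against the selected runtime's `Tensor` class; a self-contained duck-typed approximation (an assumption, not the library's implementation) looks like this:

```javascript
// Duck-typed sketch: treat any object exposing the core ONNX Runtime
// tensor fields (dims, type, data) as a tensor. The actual module can
// instead compare against the runtime's own Tensor class.
function isONNXTensor(x) {
    return x !== null
        && typeof x === 'object'
        && 'dims' in x
        && 'type' in x
        && 'data' in x;
}
```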
### backends/onnx.isONNXProxy() ⇒ `boolean`
Check if ONNX’s WASM backend is being proxied.

**Kind**: static method of `backends/onnx`
**Returns**: `boolean` - Whether ONNX’s WASM backend is being proxied.
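In onnxruntime-web, `env.wasm.proxy` controls whether WASM inference is offloaded to a worker thread, so the check can be sketched as a simple flag read. `ONNX_ENV` below is a local stand-in for the runtime's environment object (`env.backends.onnx` in the real module):

```javascript
// Stand-in for the runtime environment object; `wasm.proxy` is the
// onnxruntime-web flag that moves inference into a worker thread.
const ONNX_ENV = { wasm: { proxy: false } };

function isONNXProxy() {
    // Optional chaining guards against builds without a WASM backend.
    return ONNX_ENV?.wasm?.proxy === true;
}
```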
### backends/onnx~defaultDevices : `Array.<ONNXExecutionProviders>`
**Kind**: inner property of `backends/onnx`
### backends/onnx~wasmInitPromise : `Promise<any> | null`
To prevent multiple calls to `initWasm()`, we store the first call in a Promise
that is resolved when the first InferenceSession is created. Subsequent calls
will wait for this Promise to resolve before creating their own InferenceSession.

**Kind**: inner property of `backends/onnx`
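This once-only initialization pattern can be sketched as follows; the session object and the simulated init work are placeholders, not the library's actual code:

```javascript
// The first caller kicks off initialization and stores its Promise;
// every later caller awaits that same Promise instead of re-initializing.
let wasmInitPromise = null;
let initCount = 0; // only here to demonstrate init runs exactly once

async function createInferenceSession(buffer) {
    if (wasmInitPromise === null) {
        // First call: start (simulated) WASM initialization exactly once.
        wasmInitPromise = (async () => {
            initCount += 1;
        })();
    }
    // First and subsequent calls all wait on the same Promise.
    await wasmInitPromise;
    return { buffer }; // stand-in for the real InferenceSession
}
```

Even when several sessions are created concurrently, the guard ensures the initialization body runs only once.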
### backends/onnx~DEVICE_TO_EXECUTION_PROVIDER_MAPPING : `*`
**Kind**: inner constant of `backends/onnx`
### backends/onnx~supportedDevices : `*`
The list of supported devices, sorted by priority/performance.

**Kind**: inner constant of `backends/onnx`
### backends/onnx~ONNX_ENV : `*`
**Kind**: inner constant of `backends/onnx`
### backends/onnx~ONNXExecutionProviders : `*`
**Kind**: inner typedef of `backends/onnx`