Engine & Accelerator

class Engine(value)

Software inference engines supported by edgeIQ

DNN: str = 'DNN'

OpenCV’s DNN backend.

Supports:

DNN_CUDA: str = 'DNN_CUDA'

OpenCV’s CUDA Inference Engine backend.

Supports:

TENSOR_RT: str = 'TENSOR_RT'

NVIDIA TensorRT engine.

Supports:

HAILO_RT: str = 'HAILO_RT'

HailoRT engine.

Supports:

QAIC_RT: str = 'QAIC_RT'

Qualcomm QAIC runtime engine.

Supports:

ONNX_RT: str = 'ONNX_RT'

ONNX Runtime engine.

Supports:
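Each Engine member is a plain string constant, so it compares equal to its string value. A minimal self-contained sketch of the enum (mirroring the constants listed above, not the actual edgeIQ source) is:

```python
from enum import Enum

class Engine(str, Enum):
    """Software inference engines (mirrors the constants listed above)."""
    DNN = "DNN"
    DNN_CUDA = "DNN_CUDA"
    TENSOR_RT = "TENSOR_RT"
    HAILO_RT = "HAILO_RT"
    QAIC_RT = "QAIC_RT"
    ONNX_RT = "ONNX_RT"

# Because the enum subclasses str, members compare equal to raw strings,
# and the enum can be constructed back from a string value:
print(Engine.TENSOR_RT == "TENSOR_RT")  # True
print(Engine("DNN") is Engine.DNN)      # True
```

This string-valued design lets application code accept either the enum member or its string form, e.g. from a config file.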

class Accelerator(value)

Hardware accelerators supported by edgeIQ

DEFAULT: str = 'DEFAULT'

Selects the default accelerator for the given Engine.

CPU: str = 'CPU'

Run the Engine on the CPU.

GPU: str = 'GPU'

Run the Engine on the GPU.

NVIDIA: str = 'NVIDIA'

Run the Engine on an NVIDIA GPU accelerator.

NVIDIA_FP16: str = 'NVIDIA_FP16'

Run the Engine on an NVIDIA GPU accelerator with the model compressed to FP16 (16-bit floating point) precision.

NVIDIA_DLA_0: str = 'NVIDIA_DLA_0'

Run the Engine on NVIDIA DLA core 0.

NVIDIA_DLA_1: str = 'NVIDIA_DLA_1'

Run the Engine on NVIDIA DLA core 1.

HAILO: str = 'HAILO'

Run the Engine on a Hailo accelerator.

QAIC: str = 'QAIC'

Run the Engine on a Qualcomm QAIC accelerator.
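To illustrate how DEFAULT can resolve to a concrete accelerator per engine, here is a self-contained sketch. The Accelerator enum mirrors the constants above, but the engine-to-accelerator mapping and the `resolve` helper are illustrative assumptions, not edgeIQ's actual selection table; in application code these values are typically passed together when loading a model (consult the edgeIQ model-loading docs for your version).

```python
from enum import Enum

class Accelerator(str, Enum):
    """Hardware accelerators (mirrors the constants listed above)."""
    DEFAULT = "DEFAULT"
    CPU = "CPU"
    GPU = "GPU"
    NVIDIA = "NVIDIA"
    NVIDIA_FP16 = "NVIDIA_FP16"
    NVIDIA_DLA_0 = "NVIDIA_DLA_0"
    NVIDIA_DLA_1 = "NVIDIA_DLA_1"
    HAILO = "HAILO"
    QAIC = "QAIC"

# Hypothetical mapping of engine name -> accelerator chosen by DEFAULT.
# These pairings are assumptions for illustration only.
_DEFAULTS = {
    "DNN": Accelerator.CPU,
    "DNN_CUDA": Accelerator.NVIDIA,
    "TENSOR_RT": Accelerator.NVIDIA,
    "HAILO_RT": Accelerator.HAILO,
    "QAIC_RT": Accelerator.QAIC,
    "ONNX_RT": Accelerator.CPU,
}

def resolve(engine: str, accelerator: Accelerator) -> Accelerator:
    """Replace DEFAULT with a concrete accelerator for the given engine."""
    if accelerator is Accelerator.DEFAULT:
        return _DEFAULTS[engine]
    return accelerator

print(resolve("TENSOR_RT", Accelerator.DEFAULT))  # Accelerator.NVIDIA
print(resolve("DNN", Accelerator.GPU))            # Accelerator.GPU
```

An explicit accelerator always wins; DEFAULT simply defers the choice to a per-engine table, which is the behavior the DEFAULT docstring above describes.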