Changing the Engine and Accelerator

Any class that runs a computer vision model needs an engine and an accelerator. The engine is the software backend that runs the model, and the accelerator is the hardware target the engine runs on. Specify both in the load() function; if you provide only the engine, that engine's default accelerator is used. Our edgeIQ library currently supports the following engine/accelerator combinations:

1. OpenCV’s DNN backend engine edgeiq.Engine.DNN, with the accelerator edgeiq.Accelerator.GPU (the default), which attempts to run the model on the GPU and falls back to the CPU if no GPU is available, or with edgeiq.Accelerator.CPU, which runs the model on the CPU.

To use this option, the arguments would be:

obj_detect.load(engine=edgeiq.Engine.DNN, accelerator=edgeiq.Accelerator.GPU)

Or:

obj_detect.load(engine=edgeiq.Engine.DNN, accelerator=edgeiq.Accelerator.CPU)

2. OpenCV’s OpenVINO inference engine backend edgeiq.Engine.DNN_OPENVINO, with the accelerator edgeiq.Accelerator.MYRIAD (the default), which runs the model on a Myriad device such as the Intel NCS1 or NCS2. To use this option, the arguments would be:

obj_detect.load(engine=edgeiq.Engine.DNN_OPENVINO, accelerator=edgeiq.Accelerator.MYRIAD)

3. OpenCV’s CUDA backend engine edgeiq.Engine.DNN_CUDA, with the accelerator edgeiq.Accelerator.NVIDIA, which runs the model on NVIDIA GPUs. To run a model on an NVIDIA Jetson Nano, you would use the following:

obj_detect.load(engine=edgeiq.Engine.DNN_CUDA, accelerator=edgeiq.Accelerator.NVIDIA)

4. NVIDIA’s TensorRT backend engine edgeiq.Engine.TENSOR_RT. Because load() is called here with only the engine, the default accelerator for that engine is used. To run a model on an NVIDIA Jetson Nano using the TensorRT engine, you would use the following:

pose_estimator.load(engine=edgeiq.Engine.TENSOR_RT)
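The engine-to-default-accelerator behavior described above can be sketched in plain Python. This is an illustration of the documented defaults, not edgeIQ's internal implementation; the TENSOR_RT default is an assumption (the text above does not name it), and the string keys stand in for the edgeiq.Engine and edgeiq.Accelerator enum members.

```python
# Default accelerator for each engine, per the combinations listed above.
# NOTE: this is a sketch of the documented behavior, not edgeIQ source code.
DEFAULT_ACCELERATOR = {
    "DNN": "GPU",              # falls back to CPU at runtime if no GPU is available
    "DNN_OPENVINO": "MYRIAD",
    "DNN_CUDA": "NVIDIA",
    "TENSOR_RT": "NVIDIA",     # assumption: TensorRT targets NVIDIA hardware
}

def resolve_accelerator(engine, accelerator=None):
    """Return the accelerator to use: the explicit choice if given,
    otherwise the default accelerator for the engine."""
    return accelerator or DEFAULT_ACCELERATOR[engine]

print(resolve_accelerator("DNN"))              # GPU
print(resolve_accelerator("DNN", "CPU"))       # CPU
print(resolve_accelerator("DNN_OPENVINO"))     # MYRIAD
```

This mirrors what happens when you call load() with only an engine argument, as in the TensorRT example above.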