In this tutorial, we will show you how to boost the performance of your edge device using a hardware accelerator supported by alwaysAI, such as Intel's Neural Compute Stick 2. You can find more information on supported hardware accelerators in our documentation.
1. Begin with a real-time detector starter app and model set
For this example, we will use the alwaysAI real-time object detector starter app, which utilizes the MobileNet SSD model.
2. Change the object detection engine
To use the accelerator, change the engine your app loads the model with from the default DNN engine to the DNN_OPENVINO engine, which runs inference on the accelerator through Intel's OpenVINO toolkit.
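In code, this is a one-line change in the app. Below is a minimal sketch assuming alwaysAI's edgeiq Python API and the starter app's MobileNet SSD model id; your app's model id and surrounding code may differ, and the import guard is only there so the sketch can run outside an alwaysAI environment:

```python
# Sketch of the engine switch (assumes alwaysAI's edgeiq package,
# which is installed by the alwaysAI CLI, not plain pip).
try:
    import edgeiq
except ImportError:
    edgeiq = None  # not inside an alwaysAI app environment

if edgeiq is not None:
    # Model id from the real-time object detector starter app;
    # yours may differ.
    obj_detect = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd")

    # Default CPU engine:
    #   obj_detect.load(engine=edgeiq.Engine.DNN)
    # Switch to OpenVINO so inference runs on the Neural Compute Stick 2:
    obj_detect.load(engine=edgeiq.Engine.DNN_OPENVINO)
```

The rest of the app (frame capture, detection loop, Streamer output) stays unchanged; only the engine passed to the load call differs.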
3. Re-deploy and run the start command
Then re-deploy the app using the alwaysai app deploy command and restart it using alwaysai app start.
4. Check your new inference time
Check the inference time reported by your object detector app in the Streamer in your browser.
You can see that after switching to Intel's Neural Compute Stick 2, the inference time has dropped to 0.094 seconds. That's a significant improvement for this edge device.
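To put that number in perspective, the inference time shown in the Streamer can be converted to a rough upper bound on throughput. The short sketch below uses the 0.094-second figure from this tutorial; note it counts model inference only and ignores frame capture and drawing overhead:

```python
# Convert a per-frame inference time into a rough throughput ceiling.
inference_time_s = 0.094  # value observed in this tutorial with the NCS2

max_inferences_per_s = 1.0 / inference_time_s
print(f"at most ~{max_inferences_per_s:.1f} inferences per second")
```

In practice the end-to-end frame rate will be somewhat lower, since the camera pipeline and the Streamer also take time each frame.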