How to Boost Performance on an Edge Device

By Steve Griset Oct 09, 2019

In this tutorial, we will show you the steps needed to boost inference performance on your edge device. You will need a hardware accelerator that is supported by alwaysAI – such as Intel’s Neural Compute Stick 2. You can read more about supported edge devices, and about how to set the engine and accelerator using the edgeiq API, in our documentation.

 
 

1. Begin with a real-time detector starter app and model set

For this example, we will use the alwaysAI real-time object detector starter app, which utilizes the MobileNet SSD model. If you haven't downloaded the alwaysAI starter applications, do so now by entering 'aai get-starter-apps' from your terminal. This will download the starter applications into the current working directory. Navigate into the 'realtime_object_detector' directory.
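For reference, the core of the realtime_object_detector starter app roughly follows the sketch below: it loads MobileNet SSD with the default DNN (CPU) engine, detects objects on webcam frames, and reports the per-frame inference time through the Streamer. Treat this as an abbreviated sketch rather than the exact file; variable names and the confidence threshold may differ slightly in your copy of the app.

```python
import time
import edgeiq


def main():
    # Load MobileNet SSD with the default CPU-based DNN engine
    obj_detect = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd")
    obj_detect.load(engine=edgeiq.Engine.DNN)

    with edgeiq.WebcamVideoStream(cam=0) as video_stream, \
            edgeiq.Streamer() as streamer:
        time.sleep(2.0)  # let the camera warm up

        while True:
            frame = video_stream.read()
            results = obj_detect.detect_objects(frame, confidence_level=0.5)
            frame = edgeiq.markup_image(
                    frame, results.predictions, colors=obj_detect.colors)

            # results.duration is the inference time displayed in the Streamer
            text = ["Inference time: {:1.3f} s".format(results.duration)]
            streamer.send_data(frame, text)

            if streamer.check_exit():
                break


if __name__ == "__main__":
    main()
```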

 
Here, our object detector app is detecting a potted plant in our office. Note that the inference time on the device without the accelerator is 0.710 seconds.

 

2. Change the object detection engine

To use the accelerator, change the inference engine in the app from edgeiq.Engine.DNN to edgeiq.Engine.DNN_OPENVINO.
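A minimal sketch of that change, assuming your app.py uses the starter app's load() call (the MYRIAD accelerator argument shown here targets the Neural Compute Stick 2 and may be left to its default if edgeiq selects it automatically):

```python
import edgeiq

obj_detect = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd")

# Before: inference runs on the CPU
# obj_detect.load(engine=edgeiq.Engine.DNN)

# After: inference runs on the Neural Compute Stick 2 via OpenVINO
obj_detect.load(engine=edgeiq.Engine.DNN_OPENVINO,
                accelerator=edgeiq.Accelerator.MYRIAD)
```

No other part of the detection loop needs to change; selecting the engine and accelerator is confined to this one call.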

 

3. Re-deploy and run the start command

Then re-deploy the app using 'alwaysai app deploy' and restart it using the 'alwaysai app start' command.

 

4. Check your new inference time

Double-check the inference time reported by your object detector app in the Streamer in your browser.
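The inference time shown in the Streamer comes from the per-frame results.duration value in the sketch from step 1. If you also want a quick console check that the accelerator was picked up, printing the loaded engine and accelerator (using the same obj_detect object as above) is enough:

```python
# Confirm which engine and accelerator the model was loaded with
print("Engine: {}".format(obj_detect.engine))
print("Accelerator: {}".format(obj_detect.accelerator))
```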

 

You can see that after switching to Intel’s Neural Compute Stick 2, the inference time has dropped to 0.094 seconds, roughly a 7.5x speedup over the 0.710 seconds measured without the accelerator. That's a significant improvement for this edge device.
