Finding Things in an Image in Real Time on the Edge

By Jason Koo • Oct 04, 2019

Recent advances in technology have greatly broadened the scope of object detection and related computer vision (CV) services. Hardware with advanced features paired with smarter neural networks has attracted developers and data scientists from numerous industries to start leveraging computer vision to solve complex business challenges. Combined with the rising popularity of embedded devices capturing data on the edge, computer vision on a grand scale has been exploding with seemingly endless potential to revolutionize the way the world collects and analyzes real-world data.


What is Object Detection?

Object detection is the process of identifying objects within images, often done in real time. For example, object detection can identify and isolate instances of cars, humans, bikes, and buses from a real-time video feed of a busy street. It isolates objects of interest from the image through recognition, localization, and classification. The primary goal of object detection is simply to identify and label the presence of objects.
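In practice, a detector's output for a single frame is a list of labeled, localized objects with confidence scores. The sketch below is illustrative only and does not use any particular detection library; the `Detection` class and `filter_detections` helper are hypothetical names for what most frameworks return in some form.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # class name, e.g. "car"
    confidence: float  # model confidence, 0.0 to 1.0
    box: tuple         # bounding box as (x_min, y_min, x_max, y_max) in pixels

def filter_detections(detections, threshold=0.5):
    """Keep only detections the model is reasonably confident about."""
    return [d for d in detections if d.confidence >= threshold]

# Results a detector might produce for one frame of a busy street:
frame_detections = [
    Detection("car", 0.94, (12, 40, 210, 180)),
    Detection("person", 0.81, (230, 60, 280, 200)),
    Detection("bike", 0.35, (300, 90, 340, 170)),  # low confidence, likely noise
]

confident = filter_detections(frame_detections, threshold=0.5)
for d in confident:
    print(d.label, d.confidence, d.box)
```

Filtering by a confidence threshold is the usual first step before tracking or counting the detected objects.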

Object detection allows developers to employ additional core computer vision functions, including object tracking and counting, image classification, and more, to equip their devices with machine learning capabilities. For example, a security camera in a retail shop can be trained to detect the presence of objects in the store, and then further trained to classify one of those objects, such as a person, more narrowly by gender, age range, or other identifying characteristics.
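Object counting, one of the functions mentioned above, reduces to tallying the class labels a detector emits for a frame. A minimal sketch (the labels shown are invented for illustration):

```python
from collections import Counter

def count_objects(labels):
    """Tally how many instances of each class appear in a frame's detections."""
    return Counter(labels)

# Labels a store camera's detector might emit for a single frame:
frame_labels = ["person", "person", "shopping_cart", "person"]

counts = count_objects(frame_labels)
print(counts["person"])         # 3
print(counts["shopping_cart"])  # 1
```

A secondary classifier would then run only on the cropped regions labeled "person", which keeps the more expensive model off irrelevant parts of the frame.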

Working with Embedded Devices

Working with embedded devices on the edge means building applications for deployment in system-on-chip (SoC) environments, where data processing happens directly on the device rather than on a cloud server. Utilizing embedded devices can resolve a number of issues caused by relying on the cloud, including the latency and high bandwidth requirements typically involved in handling image data.

For example, a simple dual-core ARM chip lacking a GPU can successfully support machine learning applications with memory to spare. However, working with edge devices is a strategic decision. You must consider constraints such as the processing power and storage capabilities of the device.

These device-side considerations affect your development decisions, including the size of the model that you use in your application. A larger model may offer better accuracy, but it also demands more processing power and can slow inference on your device. If you know your intended application will be running in a resource-constrained environment, you may choose a smaller model, or optimize a larger model by quantizing and pruning it to make it more efficient.
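Quantization, for instance, trades a little numeric precision for a large reduction in model size. The sketch below shows the core idea of post-training int8 quantization on a stand-in weight array; the `quantize_int8` helper is illustrative and not taken from any particular framework, which would handle this (and per-layer details) for you.

```python
import numpy as np

def quantize_int8(weights):
    """Linearly map float32 weights into int8, returning the quantized values
    plus the scale factor needed to approximately recover the originals."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)  # stand-in for one layer's weights

q, scale = quantize_int8(weights)
restored = q.astype(np.float32) * scale

print(weights.nbytes)  # 4000 bytes at float32
print(q.nbytes)        # 1000 bytes at int8, a 4x reduction
print(float(np.max(np.abs(weights - restored))))  # small reconstruction error
```

The 4x storage saving is why int8 quantization is a common first optimization for resource-constrained devices; pruning, which removes low-magnitude weights entirely, can shrink the model further.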

Advantages of Edge Computer Vision

Edge devices do not require an internet connection to function, as the computing for object detection is done entirely on the device itself. This real-time processing is crucial when safety is a concern; a self-driving car, for instance, must make decisions without latency, which it can do because it does not depend on a cloud-connected analytics pipeline. This is a major advantage of embedded systems: they do not need to relay their data, await processing, and then respond to the results, as cloud-tethered computer vision solutions must. Consequently, capturing and processing real-time data at the source is quickly becoming essential for today’s businesses.
