ObjectTracking

class CorrelationTracker(max_objects=None)

Track objects based on a correlation tracking algorithm.

Typical usage:

obj_detect = edgeiq.ObjectDetection(
        'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
obj_detect.load(engine=edgeiq.Engine.DNN)

tracker = edgeiq.CorrelationTracker(max_objects=5)

frame_idx = 0
detect_period = 30

while True:
    <get video frame>

    # Perform detection once every detect_period frames
    if frame_idx % detect_period == 0:
        results = obj_detect.detect_objects(frame, confidence_level=.5)

        # Stop tracking old objects
        if tracker.count:
            tracker.stop_all()

        for prediction in results.predictions:
            tracker.start(frame, prediction)
        predictions = results.predictions

    else:
        if tracker.count:
            predictions = tracker.update(frame)

    frame = edgeiq.markup_image(
                frame, predictions, colors=obj_detect.colors)
    frame_idx += 1
Parameters

max_objects (integer) – The maximum number of objects to track.
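The usage example above alternates a full detection pass with cheaper tracker updates. That cadence can be sketched in plain Python (no edgeiq dependency; `detect_period` is an illustrative constant, not a library parameter):

```python
def should_detect(frame_idx, detect_period):
    """Return True when a full detection pass should run.

    The detector runs on frames 0, N, 2N, ...; the correlation
    tracker fills in the frames in between.
    """
    return frame_idx % detect_period == 0

# With detect_period=4, frames 0 and 4 trigger detection;
# the rest are tracker updates.
schedule = [should_detect(i, detect_period=4) for i in range(8)]
```

Smaller `detect_period` values trade throughput for accuracy: the tracker drifts less between detections but the expensive model runs more often.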

start(image, prediction)

Start tracking an object.

Parameters
  • image (numpy array of an image or video frame) – The image or frame containing the object to be tracked.

  • prediction (ObjectDetectionPrediction) – The object to track.

property count

Get the number of objects being tracked.

Type

integer

update(image)

Update the tracker with a new image.

Parameters

image (numpy array of an image or video frame) – The image or frame in which to update the tracked objects.

Returns

list of ObjectDetectionPrediction – The bounding boxes updated with new location and tracker confidence.
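Since update() reports a tracker confidence on each prediction, one common pattern is to trigger an early re-detection when any track drifts. A minimal sketch of that check (the `Track` dataclass and threshold are illustrative stand-ins, not edgeiq API):

```python
from dataclasses import dataclass

@dataclass
class Track:
    label: str
    confidence: float  # tracker confidence reported by update()

def needs_redetect(tracks, threshold=0.5):
    """Re-run the detector if any track has drifted below threshold."""
    return any(t.confidence < threshold for t in tracks)

tracks = [Track('person', 0.9), Track('car', 0.3)]
needs_redetect(tracks)  # True: the 'car' track has drifted
```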

stop_all()

Stop tracking all objects currently being tracked.

class CentroidTracker(deregister_frames=50, max_distance=50)

Associate a bounding box with an object ID based on distances from previous detections.
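The distance-based association can be sketched with a greedy nearest-centroid match. This is a simplified illustration of the idea, not the library's actual implementation:

```python
import math

def centroid(box):
    # box = (x1, y1, x2, y2)
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def associate(prev, new_boxes, max_distance=50):
    """Match each new box to the nearest previous centroid.

    prev: dict of object_id -> centroid from the last frame.
    Returns a dict of object_id -> box; boxes farther than
    max_distance from every unclaimed centroid get fresh IDs.
    """
    next_id = max(prev, default=-1) + 1
    out = {}
    unclaimed = dict(prev)
    for box in new_boxes:
        c = centroid(box)
        best_id, best_d = None, max_distance
        for oid, pc in unclaimed.items():
            d = math.dist(c, pc)
            if d <= best_d:
                best_id, best_d = oid, d
        if best_id is None:
            best_id = next_id
            next_id += 1
        else:
            del unclaimed[best_id]  # each old centroid matches once
        out[best_id] = box
    return out
```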

Typical usage:

obj_detect = edgeiq.ObjectDetection(
        'alwaysai/res10_300x300_ssd_iter_140000')
obj_detect.load(engine=edgeiq.Engine.DNN)

centroid_tracker = edgeiq.CentroidTracker(
        deregister_frames=20, max_distance=50)

while True:
    <get video frame>
    results = obj_detect.detect_objects(frame, confidence_level=.5)

    objects = centroid_tracker.update(results.predictions)

    # Use the object dictionary to create a new prediction list
    predictions = []
    text = []
    for (object_id, prediction) in objects.items():
        new_label = 'face {}'.format(object_id)
        prediction.label = new_label
        text.append(new_label)
        predictions.append(prediction)

    frame = edgeiq.markup_image(frame, predictions)
Parameters
  • deregister_frames (integer) – The number of frames before deregistering an object that can no longer be found.

  • max_distance (integer) – The maximum distance between two centroids to associate an object.
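The deregister_frames bookkeeping can be illustrated with a small counter-based sketch (an assumption that mirrors the documented behavior, not the library's internals):

```python
def prune(missing_counts, seen_ids, deregister_frames=50):
    """Track consecutive misses per object and drop stale IDs.

    missing_counts: dict of object_id -> consecutive frames missed.
    seen_ids: IDs matched in the current frame; their counters reset.
    Returns the updated dict with deregistered IDs removed.
    """
    updated = {}
    for oid, misses in missing_counts.items():
        misses = 0 if oid in seen_ids else misses + 1
        if misses < deregister_frames:
            updated[oid] = misses
    return updated
```

An object that briefly leaves the frame (or is missed by the detector) keeps its ID as long as it reappears within deregister_frames frames.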

update(predictions)

Update object centroids based on a new set of bounding boxes.

Parameters

predictions (list of ObjectDetectionPrediction) – The list of bounding boxes to track.

Returns

A dictionary with the object ID as the key and the ObjectDetectionPrediction as the value.