ObjectDetection
class ObjectDetectionPrediction(box, confidence, label, index)
    A single prediction from ObjectDetection.
    property label
        The label describing this prediction result.
        - Type: string
    property index
        The index of this result in the master label list.
        - Type: integer
    property box
        The bounding box around the object.
        - Type: BoundingBox
    property confidence
        The confidence of this prediction.
        - Type: float
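A minimal sketch of reading the documented fields from a single prediction; it assumes `results` was returned by ObjectDetection.detect_objects(), as in the typical usage shown further below:

    # Each entry in results.predictions is an ObjectDetectionPrediction.
    for prediction in results.predictions:
        print("label:      {}".format(prediction.label))
        print("index:      {}".format(prediction.index))
        print("confidence: {:.2f}".format(prediction.confidence))
        print("box:        {}".format(prediction.box))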
class ObjectDetectionResults(predictions, duration, image, **kwargs)
    All the results of object detection from ObjectDetection.
    Predictions are stored in descending order of confidence.
    property duration
        The duration of the inference in seconds.
        - Type: float
    property predictions
        The list of predictions.
        - Type: list of ObjectDetectionPrediction
    property image
        The image the results were processed on.
        Note: The image is not available when results are obtained from EyeCloud cameras.
        - Type: numpy array – The image in BGR format
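Because predictions are stored in descending order of confidence, the first element is the strongest detection. A short sketch, again assuming `results` came from detect_objects():

    # Highest-confidence detection first, plus the inference time.
    if results.predictions:
        best = results.predictions[0]
        print("Top detection: {} ({:.1f}%)".format(
            best.label, best.confidence * 100))
    print("Inference took {:.3f} s".format(results.duration))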
class ObjectDetection(model_id)
    Analyze and discover objects within an image.

    Typical usage:

        import edgeiq

        obj_detect = edgeiq.ObjectDetection(
            'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
        obj_detect.load(engine=edgeiq.Engine.DNN)

        # <get image>
        results = obj_detect.detect_objects(image, confidence_level=.5)
        image = edgeiq.markup_image(
            image, results.predictions, colors=obj_detect.colors)

        # Collect a display string for each detection.
        text = []
        for prediction in results.predictions:
            text.append("{}: {:2.2f}%".format(
                prediction.label, prediction.confidence * 100))
    - Parameters:
        model_id (string) – The ID of the model you want to use for object detection.
    property accelerator
        The accelerator being used.
        - Type: string
    property colors
        The auto-generated colors for the loaded model.
        Note: Initialized to None when the model doesn't have any labels.
        Note: To update, the new colors list must be the same length as the label list.
        - Type: list of (B, G, R) tuples.
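    For example, a sketch of overriding the auto-generated colors, assuming the model has labels and the property accepts assignment; the single green color below is purely illustrative:

        # Hypothetical override: draw every class in green (B, G, R).
        if obj_detect.labels is not None:
            obj_detect.colors = [(0, 255, 0)] * len(obj_detect.labels)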
    detect_objects(image, confidence_level=0.3, overlap_threshold=0.3)
        Perform Object Detection on an image.
        - Parameters:
            image (numpy array of image in BGR format) – The image to analyze.
            confidence_level (float in range [0.0, 1.0]) – The minimum confidence level required to accept a detection.
            overlap_threshold (float in range [0.0, 1.0]) – The minimum IoU (intersection over union) at which a detection is rejected by non-maximal suppression when using YOLO models. A higher value results in a greater number of overlapping bounding boxes being returned.
        - Returns: ObjectDetectionResults
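    As a sketch, assuming `image` is a BGR numpy array (e.g. read with cv2.imread) and the model has already been loaded; the "person" label and input file name are illustrative:

        import cv2

        image = cv2.imread("input.jpg")  # hypothetical input image, BGR format

        # Drop detections below 60% confidence.
        results = obj_detect.detect_objects(image, confidence_level=0.6)

        # Post-filter the returned predictions by label.
        people = [p for p in results.predictions if p.label == "person"]
        print("Found {} people in {:.3f} s".format(
            len(people), results.duration))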
    property engine
        The engine being used.
        - Type: string
    property labels
        The labels for the loaded model.
        Note: Initialized to None when the model doesn't have any labels.
        - Type: list of strings.
    load(engine=Engine.DNN, accelerator=Accelerator.DEFAULT)
        Load the model to the specified engine and accelerator.
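    A small sketch of loading a model with the defaults shown in the signature and then inspecting the properties that become available:

        import edgeiq

        obj_detect = edgeiq.ObjectDetection(
            'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
        obj_detect.load(engine=edgeiq.Engine.DNN,
                        accelerator=edgeiq.Accelerator.DEFAULT)

        print("Engine:      {}".format(obj_detect.engine))
        print("Accelerator: {}".format(obj_detect.accelerator))
        print("Model ID:    {}".format(obj_detect.model_id))
        if obj_detect.labels is not None:
            print("Labels:      {} classes".format(len(obj_detect.labels)))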
    property model_id
        The ID of the loaded model.
        - Type: string
    property model_purpose
        The purpose of the model being used.
        - Type: string
    publish_analytics(results, tag=None)
        Publish Object Detection results to the alwaysAI Analytics Service.
        - Parameters:
            results (ObjectDetectionResults) – The results to publish.
            tag (JSON-serializable object) – Additional information to assist in querying and visualizations.
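    A hedged sketch of publishing results with an optional tag; the tag contents below are purely illustrative and assume an alwaysAI deployment with analytics enabled:

        # Assumes `results` came from detect_objects().
        obj_detect.publish_analytics(
            results, tag={"camera": "front-door", "frame": 1042})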