ObjectDetection

class ObjectDetectionPrediction(box, confidence, label, index)

A single prediction from ObjectDetection.

- Parameters:
  - box (BoundingBox) – The bounding box around the detected object.
  - confidence (float) – The confidence of this prediction.
  - label (str) – The label describing this prediction result.
  - index (int) – The index of this result in the master label list.

- property label
  The label describing this prediction result.
  - Return type: str

- property index
  The index of this result in the master label list.
  - Return type: int

- property box
  The bounding box around the object.
  - Return type: BoundingBox

- property confidence
  The confidence of this prediction.
  - Return type: float
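A minimal sketch of reading these fields from a prediction; it assumes a results object returned by a detect_objects() call, described below:

    # `results` is an ObjectDetectionResults from detect_objects()
    for prediction in results.predictions:
        print("{} (index {}): {:.2f}".format(
            prediction.label, prediction.index, prediction.confidence))
        box = prediction.box  # BoundingBox around the detected object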
class ObjectDetectionResults(predictions, duration, image, **kwargs)

All the results of object detection from ObjectDetection. Predictions are stored in descending order of confidence.

- Parameters:
  - predictions (List[ObjectDetectionPrediction]) – The predictions for each detected object.
  - duration (float) – The duration of the inference.
  - image (ndarray) – The image that the inference was performed on.

- property duration
  The duration of the inference in seconds.
  - Return type: float

- property predictions
  The list of predictions.
  - Return type: List[ObjectDetectionPrediction]

- property image
  The image the results were processed on.
  Note: The image is not available when results are obtained from EyeCloud Cameras.
  - Return type: ndarray
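For illustration, a short sketch that inspects a results object, assuming obj_detect is a loaded ObjectDetection instance and image is an ndarray (e.g. read with cv2.imread):

    results = obj_detect.detect_objects(image, confidence_level=0.5)
    print("Inference took {:.3f}s".format(results.duration))
    # Predictions are sorted by descending confidence, so the first
    # entry (if any) is the most confident detection.
    if results.predictions:
        top = results.predictions[0]
        print("Top result: {} at {:.0%}".format(top.label, top.confidence))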
class ObjectDetection(model_id, model_config=None)

Analyze and discover objects within an image.

Typical usage:

    obj_detect = edgeiq.ObjectDetection(
        'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
    obj_detect.load(engine=edgeiq.Engine.DNN)

    <get image>

    results = obj_detect.detect_objects(image, confidence_level=.5)
    image = edgeiq.markup_image(
        image, results.predictions, colors=obj_detect.colors)

    text = []
    for prediction in results.predictions:
        text.append("{}: {:2.2f}%".format(
            prediction.label, prediction.confidence * 100))

- Parameters:
  - model_id (str) – The ID of the model you want to use for object detection.
  - model_config (Optional[ModelConfig]) – The model configuration to load. model_id is ignored when model_config is set.
- detect_objects(image, confidence_level=0.3, overlap_threshold=0.3)
  Perform object detection on an image.
  - Parameters:
    - image (ndarray) – The image to analyze.
    - confidence_level (float) – The minimum confidence level required to accept a detection.
    - overlap_threshold (float) – The IOU threshold used to reject overlapping detections with Non-maximal Suppression during object detection using YOLO models. A higher value results in a greater number of overlapping bounding boxes being returned.
  - Return type: ObjectDetectionResults
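A sketch of tuning Non-maximal Suppression on a YOLO model; the model ID here is illustrative, so substitute one from the alwaysAI model catalog:

    import edgeiq

    obj_detect = edgeiq.ObjectDetection('alwaysai/yolo_v3')  # illustrative ID
    obj_detect.load(engine=edgeiq.Engine.DNN)
    # A lower overlap_threshold suppresses more overlapping boxes;
    # a higher one returns more of them. `image` is an ndarray as above.
    results = obj_detect.detect_objects(
        image, confidence_level=0.4, overlap_threshold=0.2)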
- detect_objects_batch(images, confidence_level=0.3, overlap_threshold=0.3)
  Perform object detection on a list of images.
  - Parameters:
    - images (List[ndarray]) – The list of images to analyze.
    - confidence_level (float) – The minimum confidence level required to accept a detection.
    - overlap_threshold (float) – The IOU threshold used to reject overlapping detections with Non-maximal Suppression during object detection using YOLO models. A higher value results in a greater number of overlapping bounding boxes being returned.
  - Return type: List[ObjectDetectionResults]
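A minimal sketch of batched inference, assuming images is a list of ndarray frames gathered elsewhere:

    batch_results = obj_detect.detect_objects_batch(
        images, confidence_level=0.5)
    # One ObjectDetectionResults per input image, in the same order.
    for frame, results in zip(images, batch_results):
        print("{} objects in {:.3f}s".format(
            len(results.predictions), results.duration))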
- publish_analytics(results, tag=None)
  Publish object detection results to the alwaysAI Analytics Service.
  - Parameters:
    - results (ObjectDetectionResults) – The results to publish.
    - tag (Optional[Any]) – Additional information to assist in querying and visualizations.
  - Raises:
    - ConnectionBlockedError – when using a connection to the alwaysAI Device Agent and resources are at capacity.
    - PacketRateError – when the publish rate exceeds the current limit.
    - PacketSizeError – when the packet size exceeds the current limit. Packet publish size and rate limits are provided in the error message.
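A sketch of guarding the publish call against these errors; the tag value is arbitrary, and exposing the error classes on the edgeiq package is an assumption here:

    try:
        obj_detect.publish_analytics(results, tag={"camera": "entrance"})
    except edgeiq.PacketRateError:
        # Assumed import path; back off when the publish rate limit is hit.
        pass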
- property accelerator
  The accelerator being used.
  - Return type: Optional[Accelerator]
- property colors
  The auto-generated colors for the loaded model.
  Note: Initialized to None when the model doesn’t have any labels.
  Note: To update, the new colors list must be the same length as the label list.
  - Return type: Optional[ndarray]
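A sketch of overriding the auto-generated colors; the single green color and the (B, G, R) channel ordering are assumptions for illustration:

    import numpy as np

    # One color per label, per the note above.
    if obj_detect.labels is not None:
        obj_detect.colors = np.array(
            [(0, 255, 0)] * len(obj_detect.labels))  # assumed (B, G, R)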
- property labels
  The labels for the loaded model.
  Note: Initialized to None when the model doesn’t have any labels.
  - Return type: Optional[List[str]]
- load(engine=<Engine.DNN: 'DNN'>, accelerator=<Accelerator.DEFAULT: 'DEFAULT'>)
  Load the model to an engine and accelerator.
  - Parameters:
    - engine (Engine) – The engine to load the model to.
    - accelerator (Accelerator) – The accelerator to load the model to.
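A brief sketch with the defaults spelled out, followed by inspecting what was loaded:

    obj_detect = edgeiq.ObjectDetection(
        'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
    obj_detect.load(
        engine=edgeiq.Engine.DNN,
        accelerator=edgeiq.Accelerator.DEFAULT)
    print(obj_detect.model_id)     # ID of the loaded model
    print(obj_detect.accelerator)  # accelerator actually in use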
- property model_config
  The configuration of the model that was loaded.
  - Return type: ModelConfig

- property model_id
  The ID of the loaded model.
  - Return type: str

- property model_purpose
  The purpose of the model being used.
  - Return type: str
- markup_image(image, predictions, show_labels=True, show_confidences=True, colors=None, line_thickness=2, font_size=0.5, font_thickness=2)
  Draw boxes, labels, and confidences on the specified image.
  - Parameters:
    - image (ndarray) – The image to draw on.
    - predictions (List[ObjectDetectionPrediction]) – The list of prediction results.
    - show_labels (bool) – Indicates whether to show the label of the prediction.
    - show_confidences (bool) – Indicates whether to show the confidence of the prediction.
    - colors (Optional[List[Tuple[int, int, int]]]) – A custom color list to use for the bounding boxes. The index of the color will be matched with a label index.
    - line_thickness (int) – The thickness of the lines that make up the bounding box.
    - font_size (float) – The scale factor for the text.
    - font_thickness (int) – The thickness of the lines used to draw the text.
  - Return type: ndarray
  - Returns: The marked-up image.
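An illustrative call with the styling parameters spelled out; the specific values are arbitrary:

    marked = edgeiq.markup_image(
        image, results.predictions,
        show_labels=True, show_confidences=False,
        line_thickness=3, font_size=0.7, font_thickness=2)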
- overlay_transparent_boxes(image, predictions, alpha=0.5, colors=None, show_labels=False, show_confidences=False)
  Overlay area(s) of interest within an image. This utility is designed to work with object detection to display colored bounding boxes on the original image.
  - Parameters:
    - image (ndarray) – The image to manipulate.
    - predictions (List[ObjectDetectionPrediction]) – The list of prediction results.
    - alpha (float) – Transparency of the overlay. The closer alpha is to 1.0, the more opaque the overlay will be; the closer alpha is to 0.0, the more transparent it will appear.
    - colors (Optional[List[Tuple[int, int, int]]]) – A custom color list to use for the bounding boxes or the object-class pixel map.
    - show_labels (bool) – Indicates whether to show the label of the prediction.
    - show_confidences (bool) – Indicates whether to show the confidence of the prediction.
  - Returns: The overlaid image.
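A minimal usage sketch; the alpha value is arbitrary:

    overlaid = edgeiq.overlay_transparent_boxes(
        image, results.predictions, alpha=0.3, show_labels=True)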
- blur_objects(image, predictions)
  Blur objects detected in an image.
  - Parameters:
    - image (ndarray) – The image to draw on.
    - predictions (List[ObjectDetectionPrediction]) – A list of predictions for the objects to blur.
  - Return type: ndarray
  - Returns: The image with objects blurred.
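A sketch of a common privacy pattern, blurring only 'person' detections by combining this with filter_predictions_by_label (documented below):

    people = edgeiq.filter_predictions_by_label(
        results.predictions, ['person'])
    blurred = edgeiq.blur_objects(image, people)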
- filter_predictions_by_label(predictions, label_list)
  Filter a prediction list by label.

  Typical usage:

      people_and_apples = edgeiq.filter_predictions_by_label(
          predictions, ['person', 'apple'])

  - Parameters:
    - predictions (List[ObjectDetectionPrediction]) – A list of predictions to filter.
    - label_list (List[str]) – The list of labels to keep in the filtered output.
  - Return type: List[ObjectDetectionPrediction]
  - Returns: The filtered predictions.
- filter_predictions_by_area(predictions, min_area_thresh)
  Filter a prediction list by bounding box area.

  Typical usage:

      larger_boxes = edgeiq.filter_predictions_by_area(predictions, 450)

  - Parameters:
    - predictions (List[ObjectDetectionPrediction]) – A list of predictions to filter.
    - min_area_thresh (float) – The minimum bounding box area to keep in the filtered output.
  - Return type: List[ObjectDetectionPrediction]
  - Returns: The filtered predictions.