Tools¶
Performance¶
- class FPS¶
Monitor the frames per second (FPS) of video streams for performance tracking.
Typical usage:
```python
fps = edgeiq.FPS().start()
while True:
    <main processing loop>
    fps.update()

# Get the elapsed time and FPS
fps.stop()
print("Elapsed seconds: {}".format(fps.get_elapsed_seconds()))
print("FPS: {}".format(fps.compute_fps()))
```
compute_fps() may also be called in the main processing loop to compute an instantaneous estimate of the FPS.
- start()¶
Start tracking FPS.
- stop()¶
Stop tracking FPS.
- update()¶
Increment the total number of frames examined during the start and end intervals.
- Raises
RuntimeError
- get_elapsed_seconds()¶
Return the total number of seconds between the start and end intervals.
- Returns
float – The elapsed time in seconds between start and end, or since start if stop() has not been called.
- compute_fps()¶
Compute the (approximate) frames per second.
- Returns
float – The approximate frames per second.
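The FPS interface above can be sketched in pure Python. This is a minimal stand-in, not the edgeiq implementation: `SimpleFPS` is a hypothetical name, and it assumes `time.monotonic()` is a suitable clock source.

```python
import time

class SimpleFPS:
    """A minimal FPS tracker mirroring the interface described above."""

    def __init__(self):
        self._start = None
        self._end = None
        self._frames = 0

    def start(self):
        self._start = time.monotonic()
        return self  # returning self allows SimpleFPS().start() chaining

    def stop(self):
        self._end = time.monotonic()

    def update(self):
        # Count one processed frame
        self._frames += 1

    def get_elapsed_seconds(self):
        # Fall back to "now" if stop() has not been called yet,
        # matching the documented behavior of get_elapsed_seconds()
        end = self._end if self._end is not None else time.monotonic()
        return end - self._start

    def compute_fps(self):
        return self._frames / self.get_elapsed_seconds()
```

Because `get_elapsed_seconds()` falls back to the current time, `compute_fps()` also works mid-loop for an instantaneous estimate, as noted above.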
Image Manipulation¶

- translate(image, x, y)¶
Translate an image.
- Parameters
image (numpy array of image) – The image to manipulate.
x (integer) – Translate image on X axis by this amount.
y (integer) – Translate image on Y axis by this amount.
- Returns
numpy array – The translated image.
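The translation above can be sketched with plain NumPy. This is a hypothetical stand-in, assuming positive x shifts right, positive y shifts down, and vacated pixels are zero-filled; the library's sign and fill conventions may differ.

```python
import numpy as np

def translate(image, x, y):
    """Shift an image by (x, y) pixels, filling vacated regions with zeros.
    Assumes positive x shifts right and positive y shifts down."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    # Clip source and destination windows so the shift never indexes
    # outside the image bounds
    src_y0, src_y1 = max(0, -y), min(h, h - y)
    src_x0, src_x1 = max(0, -x), min(w, w - x)
    dst_y0, dst_x0 = max(0, y), max(0, x)
    out[dst_y0:dst_y0 + (src_y1 - src_y0),
        dst_x0:dst_x0 + (src_x1 - src_x0)] = image[src_y0:src_y1, src_x0:src_x1]
    return out
```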
- rotate(image, angle)¶
Rotate an image by a specified angle.
- Parameters
image (numpy array of image) – The image to manipulate.
angle (integer) – The angle to rotate the image by (degrees).
- Returns
numpy array – The rotated image.
- resize(image, width=None, height=None, keep_scale=True, inter=3)¶
Resize an image to the specified height and width.
When both a width and height are given and keep_scale is True, they are treated as the maximum width and height.
- Parameters
image (numpy array of image) – The image to manipulate.
height (integer) – The new height of image.
width (integer) – The new width of image.
keep_scale (boolean) – Maintain the original scale of the image.
inter (integer) – The interpolation method (One of OpenCV InterpolationFlags).
- Returns
numpy array – The resized image.
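The keep_scale size math described above can be sketched as follows. `fit_within` is a hypothetical helper, not an edgeiq function; it computes only the output size, not the resampling itself.

```python
def fit_within(orig_w, orig_h, width=None, height=None):
    """Compute the output size for an aspect-preserving resize where
    width and height act as maxima (the keep_scale=True case above)."""
    if width is None and height is None:
        return orig_w, orig_h
    scales = []
    if width is not None:
        scales.append(width / orig_w)
    if height is not None:
        scales.append(height / orig_h)
    # The tighter constraint wins, so the result fits inside both maxima
    scale = min(scales)
    return max(1, round(orig_w * scale)), max(1, round(orig_h * scale))
```

The default `inter=3` matches OpenCV's `cv2.INTER_AREA` interpolation flag.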
- markup_image(image, predictions, show_labels=True, show_confidences=True, colors=None, line_thickness=2, font_size=0.5, font_thickness=2)¶
Draw boxes, labels, and confidences on the specified image.
- Parameters
image (numpy array of image in BGR format) – The image to draw on.
predictions (list of ObjectDetectionPrediction) – The list of prediction results.
show_labels (boolean) – Indicates whether to show the label of the prediction.
show_confidences (boolean) – Indicates whether to show the confidence of the prediction.
colors (list of tuples in the format (B, G, R)) – A custom color list to use for the bounding boxes. The index of the color will be matched with a label index.
line_thickness (float) – The thickness of the lines that make up the bounding box.
font_size (float) – The scale factor for the text.
font_thickness (float) – The thickness of the lines used to draw the text.
- Returns
numpy array – The marked-up image.
- transparent_overlay_boxes(image, predictions, alpha=0.5, colors=None, show_labels=False, show_confidences=False)¶
Overlay area(s) of interest within an image. This utility is designed to work with object detection to display colored bounding boxes on the original image.
- Parameters
image (numpy array of image in BGR format) – The image to manipulate.
predictions (list of ObjectDetectionPrediction) – The list of prediction results.
alpha (float in range [0.0, 1.0]) – Transparency of the overlay. The closer alpha is to 1.0, the more opaque the overlay will be; the closer alpha is to 0.0, the more transparent the overlay will appear.
colors (list of tuples in the format (B, G, R)) – A custom color list to use for the bounding boxes or object classes pixel map.
show_labels (boolean) – Indicates whether to show the label of the prediction.
show_confidences (boolean) – Indicates whether to show the confidence of the prediction.
- Returns
numpy array – The overlayed image.
- pad_to_aspect_ratio(image, a_ratio)¶
Pad an image to a certain aspect ratio. Padding is added to the bottom and right of the image.
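The padding described above can be sketched with NumPy. This is a hypothetical stand-in that assumes a_ratio means width divided by height and that padding pixels are zeros; the entry above does not define either, so both are assumptions.

```python
import numpy as np

def pad_to_aspect_ratio(image, a_ratio):
    """Zero-pad the bottom and right of an image until its
    width / height ratio equals a_ratio (assumed to be w / h)."""
    h, w = image.shape[:2]
    if w / h < a_ratio:
        # Too tall: grow the width
        new_w, new_h = int(round(h * a_ratio)), h
    else:
        # Too wide (or already matching): grow the height
        new_w, new_h = w, int(round(w / a_ratio))
    # Pad only on the bottom and right; extra axes (e.g. color) untouched
    pad_spec = [(0, new_h - h), (0, new_w - w)] + [(0, 0)] * (image.ndim - 2)
    return np.pad(image, pad_spec)
```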
- cutout_image(image, box)¶
Cut out the portion of an image outlined by a bounding box.
- Parameters
image (numpy array of image) – The image to cut out from.
box (BoundingBox) – The bounding box outlining the section of the image to cut out.
- Returns
numpy array – The segment of the image outlined by the bounding box. The returned segment is independent of the original image.
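The cutout amounts to a slice plus a copy. In this sketch, `box` is a hypothetical (start_x, start_y, end_x, end_y) tuple standing in for the BoundingBox type, whose actual attributes are not shown here.

```python
import numpy as np

def cutout_image(image, box):
    """Return an independent copy of the region inside the box.
    box is a hypothetical (start_x, start_y, end_x, end_y) tuple."""
    start_x, start_y, end_x, end_y = box
    # .copy() makes the cutout independent of the original image,
    # so edits to one do not affect the other
    return image[start_y:end_y, start_x:end_x].copy()
```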
- blur_objects(image, predictions)¶
Blur objects detected in an image.
- Parameters
image (numpy array of image) – The image to draw on.
predictions (list of ObjectDetectionPrediction) – A list of prediction objects to blur.
- Returns
numpy array – The image with objects blurred.
- perform_histogram_equalization(image, color_space='GS', adaptive=False, clip_limit=2.0, tile_grid_size=(8, 8))¶
Perform histogram equalization on the input image and return the equalized image.
Histogram equalization is a basic image processing technique that adjusts the global contrast of an image by updating the pixel intensity distribution of the image histogram. This enables areas of low contrast to obtain higher contrast in the output image. This function includes implementations of both basic and adaptive histogram equalization. Basic histogram equalization spreads pixels into intensity "buckets" that don't have as many pixels binned to them; mathematically, this means the function applies a linear trend to the image's cumulative distribution function (CDF). Adaptive histogram equalization divides the input image into an M x N grid, then applies equalization to each cell in the grid, resulting in a higher quality output image.
- Parameters
image (numpy array of image (grayscale or in BGR format)) – The image on which to perform histogram equalization.
color_space (string) – The color space in which to perform histogram equalization. Supported values: ["GS", "YCrCb", "YUV", "HSV", "LAB"]. If color_space is "GS", the output image will be in grayscale format (2D array); otherwise the output image will be in BGR format (3D array).
adaptive (boolean) – Whether to enable adaptive histogram equalization.
clip_limit (float/integer) – The threshold for contrast limiting in adaptive histogram equalization; used only if adaptive is True. Values in the range 2-5 are typically advised; the allowed range is 0-40. Larger values result in more local contrast and more noise, so keep clip_limit as low as possible.
tile_grid_size (list/tuple/array of length 2 (integer values only)) – The number of grid cells to divide the image into for adaptive histogram equalization; used only if adaptive is True.
- Returns
numpy array – The image after histogram equalization (grayscale or in BGR format).
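The basic (non-adaptive) path described above, mapping each intensity through the normalized CDF, can be sketched with NumPy. This is an illustration of the technique, not the library's implementation, and it assumes a uint8 grayscale input.

```python
import numpy as np

def equalize_hist(gray):
    """Basic histogram equalization on a uint8 grayscale image:
    map each intensity through the normalized cumulative
    distribution function (CDF) of the image histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Anchor the lowest occupied bin at 0 and the highest at 255
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    # Apply the lookup table to every pixel
    return lut[gray]
```

For the adaptive variant, each cell of the M x N grid would get its own lookup table, with the clip limit capping each histogram bin before the CDF is computed.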
- perform_gamma_correction(image, gamma_value=0.8, color=False)¶
Perform gamma correction on the input image and return the corrected image.
Gamma correction is used to control a camera sensor's color and luminance. It is also known as the Power Law Transform:

O = I ^ (1 / G)

where I is the input image, O is the output scaled back to the range [0, 255], and G is the gamma value, which should be greater than 0. Gamma values < 1 shift the image toward the darker end of the spectrum, gamma values > 1 make the image appear lighter, and a gamma value of 1 has no effect.
- Parameters
image (numpy array of image (2D or 3D array)) – The image on which to perform gamma correction.
color (boolean) – True performs gamma correction on a BGR image; False on a grayscale image. If color is True, the output image will be in BGR format (3D array); if False, the output will be in grayscale format (2D array).
gamma_value (float/integer) – The gamma value for gamma correction.
- Returns
numpy array – The image after gamma correction (2D or 3D array).
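The Power Law Transform above can be sketched directly with NumPy: normalize to [0, 1], raise to 1/G, and scale back to [0, 255]. A sketch for uint8 images, not the library's implementation.

```python
import numpy as np

def gamma_correct(image, gamma_value=0.8):
    """Power Law Transform: O = I ^ (1 / G), computed on the
    image normalized to [0, 1] and rescaled to [0, 255]."""
    inv = 1.0 / gamma_value
    out = np.power(image / 255.0, inv) * 255.0
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```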
- blend_images(foreground_image, background_image, alpha)¶
Blend a foreground image with a background image. The foreground and background images must have the same dimensions and the same color format (RGB/BGR).
- Parameters
foreground_image (numpy array of image) – The image to be scaled by alpha in the blend.
background_image (numpy array of image) – The image to be scaled by 1 - alpha in the blend.
alpha (float in range [0.0, 1.0]) – The ratio of foreground to background image in the blend.
- Returns
numpy array – The blended image.
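The weighted blend described by the parameters above (alpha scales the foreground, 1 - alpha the background) can be sketched as follows; this assumes uint8 inputs of identical shape and is not the library's implementation.

```python
import numpy as np

def blend_images(foreground_image, background_image, alpha):
    """Weighted blend: alpha * foreground + (1 - alpha) * background.
    Assumes both images are uint8 with identical shapes."""
    # Compute in float to avoid uint8 overflow during the weighted sum
    blended = (alpha * foreground_image.astype(np.float64)
               + (1.0 - alpha) * background_image.astype(np.float64))
    return np.clip(np.round(blended), 0, 255).astype(np.uint8)
```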
- overlay_image(foreground_image, background_image, foreground_mask)¶
Overlay a foreground image on a background image according to the foreground mask.
This function will mask both the foreground and background images, then combine them into the output image.
- Parameters
foreground_image (numpy array of image in BGR format) – The image to be overlaid on the background.
background_image (numpy array of image in BGR format) – The image for the foreground to be overlaid on.
foreground_mask (numpy array of image in BGR format) – A mask with white indicating foreground and black indicating background. Shades in between will blend the foreground and background accordingly.
- Returns
numpy array – The overlaid image.
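The mask-driven combination described above, with white selecting foreground, black selecting background, and intermediate shades blending proportionally, can be sketched as follows. This assumes all three arrays share the same shape and uint8 dtype, and is an illustration rather than the library's implementation.

```python
import numpy as np

def overlay_image(foreground_image, background_image, foreground_mask):
    """Per-pixel blend of foreground over background, weighted by the
    mask: 255 selects foreground, 0 selects background, values in
    between blend proportionally."""
    # Normalize the mask to per-pixel weights in [0, 1]
    weight = foreground_mask.astype(np.float64) / 255.0
    out = weight * foreground_image + (1.0 - weight) * background_image
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```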
Filesystem¶

- list_images(base_path, contains=None)¶
List all images in the specified path.
Finds images with the following extensions:
.jpg
.jpeg
.png
.bmp
.tif
.tiff
- Parameters
base_path (string) – The base path of the folder where the images are located.
contains (string) – Select only filenames that contain this string.
- Returns
list of strings – The valid image file paths.
- list_files(base_path, valid_exts=('.jpg', '.jpeg', '.png', '.bmp', '.tif', '.tiff'), contains=None)¶
List all files in the specified path.
- Parameters
base_path (string) – The base path of the folder where the files are located.
valid_exts (list of strings) – The list of valid extensions to filter for.
contains (string) – Select only filenames that contain this string.
- Returns
list of strings – The valid file paths.
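A stdlib sketch of the listing described above. It assumes a recursive walk and case-insensitive extension matching; the library's behavior on those points is not stated here and may differ.

```python
import os

def list_files(base_path,
               valid_exts=('.jpg', '.jpeg', '.png', '.bmp', '.tif', '.tiff'),
               contains=None):
    """Recursively list files under base_path whose names end in one of
    valid_exts (case-insensitive), optionally filtered by substring."""
    matches = []
    for root, _dirs, files in os.walk(base_path):
        for name in sorted(files):
            # Optional substring filter on the filename
            if contains is not None and contains not in name:
                continue
            if name.lower().endswith(tuple(valid_exts)):
                matches.append(os.path.join(root, name))
    return matches
```

list_images above is then just this with the image extensions fixed.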
Results Filtering¶

- filter_predictions_by_label(predictions, label_list)¶
Filter a prediction list by label.
Typical usage:
people_and_apples = edgeiq.filter_predictions_by_label(predictions, ['person', 'apple'])
- Parameters
predictions (list of ObjectDetectionPrediction) – A list of predictions to filter.
label_list (list of strings) – The list of labels to keep in the filtered output.
- Returns
list of ObjectDetectionPrediction – The filtered predictions.
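The label filter amounts to a membership test over the prediction list. In this sketch, `Prediction` is a hypothetical lightweight stand-in for ObjectDetectionPrediction, which is assumed to expose a `label` attribute.

```python
from collections import namedtuple

# Hypothetical stand-in for ObjectDetectionPrediction
Prediction = namedtuple("Prediction", ["label", "confidence"])

def filter_predictions_by_label(predictions, label_list):
    """Keep only predictions whose label appears in label_list."""
    keep = set(label_list)  # set membership is O(1) per prediction
    return [p for p in predictions if p.label in keep]
```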
- filter_predictions_by_area(predictions, min_area_thresh)¶
Filter a prediction list by bounding box area.
Typical usage:
larger_boxes = edgeiq.filter_predictions_by_area(predictions, 450)
- Parameters
predictions (list of ObjectDetectionPrediction) – A list of predictions to filter.
min_area_thresh (integer) – The minimum bounding box area to keep in the filtered output.
- Returns
list of ObjectDetectionPrediction – The filtered predictions.
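The area filter compares each bounding box's area against the threshold. Here `Box` and `Pred` are hypothetical stand-ins for the library's BoundingBox and ObjectDetectionPrediction types; area as width times height is the assumption this sketch makes.

```python
from collections import namedtuple

# Hypothetical stand-ins for BoundingBox and ObjectDetectionPrediction
Box = namedtuple("Box", ["width", "height"])
Pred = namedtuple("Pred", ["label", "box"])

def filter_predictions_by_area(predictions, min_area_thresh):
    """Keep only predictions whose bounding-box area (width * height)
    meets the minimum threshold."""
    return [p for p in predictions
            if p.box.width * p.box.height >= min_area_thresh]
```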
Results Serialization¶

- to_json_serializable(results_input)¶
Takes in core Computer Vision service results, such as ObjectDetectionResults, ClassificationResults, HumanPoseResult, or results returned by calling the update() method on any of the tracking classes, such as CentroidTracker, and returns them in a JSON-serializable format.
Typical usage:

```python
...
results = obj_detect.detect_objects(frame, confidence_level=.5)
results = edgeiq.to_json_serializable(results)
```
- Parameters
results_input (A core Computer Vision service result object.) – The object to serialize.
- Returns
A JSON-serializable object.
HW Discovery¶

- find_usb_device(id_vendor, id_product)¶
Check if a USB device is connected.
- Parameters
id_vendor (integer) – The vendor ID.
id_product (integer) – The product ID.
- Returns
True if the device is found, otherwise False.
- find_ncs2()¶
Check if an NCS2 is connected.
Note that once a connection to the NCS2 is opened, the device’s VID and PID will change and this function will no longer find it.
- Returns
True if an NCS2 is found, otherwise False.

- is_jetson()¶
Determine if running on an NVIDIA Jetson device.
- Returns
True if running on an NVIDIA Jetson device, otherwise False.

- is_jetson_nano()¶
Determine if running on an NVIDIA Jetson Nano.
- Returns
True if running on an NVIDIA Jetson Nano, otherwise False.

- is_jetson_xavier_nx()¶
Determine if running on an NVIDIA Jetson Xavier NX.
- Returns
True if running on an NVIDIA Jetson Xavier NX, otherwise False.

- is_jetson_agx_xavier()¶
Determine if running on an NVIDIA Jetson AGX Xavier.
- Returns
True if running on an NVIDIA Jetson AGX Xavier, otherwise False.

Analytics¶

- load_analytics_results(filepath)¶
Load results from a file published by the alwaysAI Analytics Service.
- Parameters
filepath (string) – The full path to the file to load.
- Returns
A list of the deserialized results. Each deserialized result will include a tag property.
Typical usage:
```python
deserialized_results = edgeiq.load_analytics_results('logs/analytics.txt')
left_camera_results = [result for result in deserialized_results if 'left' in result.tag]
right_camera_results = [result for result in deserialized_results if 'right' in result.tag]
```
- publish_analytics(results, tag=None)¶
Publish data to the alwaysAI Analytics Service.
- Parameters
results (JSON-serializable object.) – The results to publish.
tag (JSON-serializable object) – Additional information to assist in querying and visualizations.