SemanticSegmentation

class SemanticSegmentationResults(class_map, start, end)

The results of semantic segmentation from SemanticSegmentation.

property duration

The duration of the inference in seconds.

Type

float

property class_map

The class label with the highest probability for each and every (x, y)-coordinate in the image.

Type

numpy array

class SemanticSegmentation(model_id)

Classify every pixel in an image.

The output of build_legend() is useful when used with the Streamer.

Typical usage:

semantic_segmentation = edgeiq.SemanticSegmentation(
        'alwaysai/enet')
semantic_segmentation.load(engine=edgeiq.Engine.DNN)

with edgeiq.Streamer() as streamer:
    # <get image>
    results = semantic_segmentation.segment_image(image)

    text = ['Inference time: {:1.3f} s'.format(results.duration)]
    text.append('Legend:')
    text.append(semantic_segmentation.build_legend())

    mask = semantic_segmentation.build_image_mask(results.class_map)
    blended = edgeiq.blend_images(image, mask, alpha=0.5)

    streamer.send_data(blended, text)
Parameters

model_id (string) – The ID of the model you want to use for semantic segmentation.

segment_image(image)

Classify every pixel within the specified image.

Parameters

image (numpy array of image) – The image to analyze.

Returns

SemanticSegmentationResults
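
A minimal sketch of calling segment_image() on a single frame; the cv2.imread() call and the 'frame.jpg' path are placeholders for however the application acquires its image:

import cv2
import edgeiq

semantic_segmentation = edgeiq.SemanticSegmentation('alwaysai/enet')
semantic_segmentation.load(engine=edgeiq.Engine.DNN)

# Placeholder: any BGR numpy array works as the input image
image = cv2.imread('frame.jpg')

results = semantic_segmentation.segment_image(image)

print('Inference time: {:1.3f} s'.format(results.duration))
# One class entry per (x, y)-coordinate of the input image
print('Class map shape: {}'.format(results.class_map.shape))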

build_image_mask(class_map)

Create an image mask by mapping colors to the class map. Colors can be set by the colors attribute.

Parameters

class_map (numpy array) – The class label with the highest probability for each and every (x, y)-coordinate in the image.

Returns

numpy array – Class color visualization for each pixel in the original image.
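
Continuing the sketch above, the returned mask can be overlaid on the original frame; blend_images() and the 0.5 alpha follow the typical usage example, and the output path is a placeholder:

# Color each pixel according to its predicted class
mask = semantic_segmentation.build_image_mask(results.class_map)

# Overlay the mask on the original frame
blended = edgeiq.blend_images(image, mask, alpha=0.5)
cv2.imwrite('segmented.jpg', blended)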

build_legend()

Create a class legend that associates a color with each class label.

Returns

string – An HTML table with class labels and colors that can be used with the streamer.
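
Since the legend is an HTML table, it is intended for the Streamer rather than a terminal. A sketch continuing the example above:

with edgeiq.Streamer() as streamer:
    text = ['Inference time: {:1.3f} s'.format(results.duration)]
    text.append('Legend:')
    text.append(semantic_segmentation.build_legend())  # HTML table of labels and colors
    streamer.send_data(blended, text)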

build_object_map(class_map, class_list)

Create an object map by isolating classes within the class map.

Parameters
  • class_map (numpy array of integers) – The class label with the highest probability for each and every (x, y)-coordinate in the image.

  • class_list (list of strings) – The list of labels to include in the object map.

Returns

numpy array of integers – The specific classes from the class list for each and every (x, y)-coordinate in the original image. Other classes not in the specified class list are rendered as non-labeled or background.
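
A sketch that isolates a single class and visualizes it; the 'Person' label is a placeholder and must match one of the loaded model's labels. Passing the object map back through build_image_mask() is one way to color only the kept classes:

# Keep only the 'Person' class; all other classes become background
object_map = semantic_segmentation.build_object_map(
    results.class_map, ['Person'])

object_mask = semantic_segmentation.build_image_mask(object_map)
person_only = edgeiq.blend_images(image, object_mask, alpha=0.5)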

property accelerator

The accelerator being used.

Type

string

property colors

The auto-generated colors for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Note: To update, the new colors list must be the same length as the label list.

Type

list of (B, G, R) tuples.
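
A sketch of overriding the auto-generated colors; the all-green palette is only for illustration, and the replacement list must have one (B, G, R) tuple per label:

labels = semantic_segmentation.labels  # None if the model ships no labels

if labels is not None:
    # Replacement list must match the label list in length
    semantic_segmentation.colors = [(0, 255, 0) for _ in labels]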

property engine

The engine being used.

Type

string

property labels

The labels for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Type

list of strings.

load(engine=<Engine.DNN: 'DNN'>, accelerator=<Accelerator.DEFAULT: 'DEFAULT'>)

Initialize the inference engine and accelerator.

Parameters
  • engine (Engine) – The inference engine to use.

  • accelerator (Accelerator) – The hardware accelerator on which to run the inference engine.
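
A sketch of the two-step initialization; Engine.DNN and Accelerator.DEFAULT are the defaults shown in the signature, and other engine/accelerator combinations depend on the target device:

semantic_segmentation = edgeiq.SemanticSegmentation('alwaysai/enet')

# Explicitly pass the default engine and accelerator
semantic_segmentation.load(
    engine=edgeiq.Engine.DNN,
    accelerator=edgeiq.Accelerator.DEFAULT)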

property model_id

The ID of the loaded model.

Type

string
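
After load(), the read-only properties above can be used to log the active configuration, for example:

print('Model: {}'.format(semantic_segmentation.model_id))
print('Engine: {}'.format(semantic_segmentation.engine))
print('Accelerator: {}'.format(semantic_segmentation.accelerator))
print('Labels: {}'.format(semantic_segmentation.labels))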