PoseEstimation

class PoseEstimation(model_id)

Find poses within an image.

Typical usage:

pose_estimator = edgeiq.PoseEstimation("alwaysai/human-pose")
pose_estimator.load(
    engine=edgeiq.Engine.DNN_OPENVINO,
    accelerator=edgeiq.Accelerator.MYRIAD)

<get image>
results = pose_estimator.estimate(image)

for ind, pose in enumerate(results.poses):
    print('Person {}'.format(ind))
    print('-' * 10)
    print('Key Points:')
    for key_point in pose.key_points:
        print(str(key_point))

image = results.draw_poses(image)
Parameters

model_id (string) – The ID of the model you want to use for pose estimation.

estimate(image)

Estimate poses within the specified image.

Parameters

image (numpy array of image) – The image to analyze.

Returns

HumanPoseResult

property accelerator

The accelerator being used.

Type

string

property colors

The auto-generated colors for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Note: To update, the new colors list must be the same length as the label list.

Type

list of (B, G, R) tuples.
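Since a replacement colors list must match the label list in length, one way to build such a list is to generate one distinct BGR tuple per label. A minimal sketch; the label names here are hypothetical stand-ins for the loaded model's labels:

```python
import colorsys

# Hypothetical label list; the real one comes from the loaded model's labels property.
labels = ["Nose", "Neck", "Right Shoulder", "Right Elbow"]

# Build one distinct (B, G, R) tuple per label so the new colors list
# is the same length as the label list, as the note above requires.
new_colors = []
for i in range(len(labels)):
    r, g, b = colorsys.hsv_to_rgb(i / len(labels), 1.0, 1.0)
    new_colors.append((int(b * 255), int(g * 255), int(r * 255)))
```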

property engine

The engine being used.

Type

string

property labels

The labels for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Type

list of strings.

load(engine=<Engine.DNN: 'DNN'>, accelerator=<Accelerator.DEFAULT: 'DEFAULT'>)

Initialize the inference engine and accelerator.

Parameters
  • engine (Engine) – The inference engine to use.

  • accelerator (Accelerator) – The hardware accelerator on which to run the inference engine.

property model_id

The ID of the loaded model.

Type

string

HumanPose

class Pose(key_points, score)

property key_points

Key Points corresponding to body parts.

Body Part Names

Nose

Neck

Right Shoulder

Right Elbow

Right Wrist

Left Shoulder

Left Elbow

Left Wrist

Right Hip

Right Knee

Right Ankle

Left Hip

Left Knee

Left Ankle

Right Eye

Left Eye

Right Ear

Left Ear

Returns

dict of coordinate tuples mapped to their corresponding body part names
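A sketch of working with the returned dict. The coordinates below are made up, and since a body part may not appear in a given detection, a guarded lookup is safer than direct indexing:

```python
# Hypothetical key_points dict; real coordinates come from Pose.key_points.
key_points = {
    "Nose": (120, 45),
    "Neck": (118, 80),
    "Right Shoulder": (95, 82),
}

# Look up one body part; .get() avoids a KeyError if the part is absent.
nose = key_points.get("Nose")
if nose is not None:
    x, y = nose
```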

property score

The confidence level associated with the pose.

Type

float in range [0.0, 1.0]
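Because each pose carries a score in [0.0, 1.0], a common pattern is to keep only confident detections. A minimal sketch, in which plain dicts stand in for Pose objects and the score values and 0.5 threshold are illustrative:

```python
# Stand-ins for Pose objects; only the score field matters here.
poses = [{"score": 0.92}, {"score": 0.31}, {"score": 0.77}]

# Keep poses at or above an illustrative confidence threshold.
threshold = 0.5
confident = [p for p in poses if p["score"] >= threshold]
```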

class HumanPoseResult(net, image, output_layer_names, network_height)

The results of pose estimation from PoseEstimation.

property image

The image the results were processed on.

Type

numpy array – The image in BGR format

property poses

Poses found in image.

Type

list of Pose

property duration

The duration of the inference in seconds.

Type

float

draw_poses_background(color)

Draw poses found on image on a background color.

Parameters

color (tuple of B, G, R color channel values) – The color of the background on which the poses will be drawn.

Returns

image: numpy array of image in BGR format
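Conceptually, the background this method draws on is a solid-color BGR image. A sketch of constructing such an image with NumPy; the dimensions here are illustrative, not values the method requires:

```python
import numpy as np

# A solid (B, G, R) background of illustrative size; draw_poses_background
# renders the detected poses onto an image like this instead of the input frame.
height, width = 480, 640
color = (0, 0, 0)  # black, in BGR order
background = np.full((height, width, 3), color, dtype=np.uint8)
```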

draw_poses(image=None)

Draws poses found on image.

Parameters

image (numpy.array) – The image on which to draw the poses found.

Returns

image: numpy array of image in BGR format

property raw_results

The raw results returned from model.

Type

list of 2 numpy arrays shaped (1, 38, 32, 57) and (1, 19, 32, 57) respectively, data type float32

property feature_maps

The feature map returned by the model.

Type

numpy array of shape (18, 32, 57), data type float

property partial_affinity_fields

The partial affinity fields returned by the model.

Type

numpy array of shape (38, 32, 57), data type float

draw_aliens()

Draws poses found on image as aliens.

Returns

image: numpy array of image in BGR format

class TRTPoseResult(result_data, duration, image, network_height)

The results of pose estimation from PoseEstimation when using TENSOR_RT.