Pose Estimation

The Pose Estimation service takes an image of a human and assigns 18 key points to features in the image, each corresponding to a specific body part. Together, these key points describe how the body parts are positioned. Pose Estimation has many use cases, including activity recognition and augmented reality.

Pose Estimation can be performed on an image using the PoseEstimation class. The first step is to instantiate a PoseEstimation object with the ID of the model to use. For example:

pose_estimator = edgeiq.PoseEstimation("alwaysai/human-pose")

Next, call the object’s load() function to initialize the inference engine and accelerator.

pose_estimator.load(engine=edgeiq.Engine.DNN)

Unless an accelerator is directly specified, the default accelerator for the provided Engine will be used.
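To target a specific device instead, an accelerator can be passed to load() directly. Below is a minimal sketch, assuming an Intel Myriad-based device (such as an NCS2) and that your version of edgeIQ provides Engine.DNN_OPENVINO and Accelerator.MYRIAD:

# Assumption: OpenVINO engine paired with a Myriad accelerator
pose_estimator.load(
    engine=edgeiq.Engine.DNN_OPENVINO,
    accelerator=edgeiq.Accelerator.MYRIAD)

The pose estimator is now ready. To run inference, pass an image to the object's estimate() function; here, image is assumed to be an OpenCV-style NumPy array loaded elsewhere:

results = pose_estimator.estimate(image)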

The returned results object is of type HumanPoseResult and contains an array of key points, one per body part. The index of each body part in the array is as follows:

Body Part        Index
---------        -----
Nose             0
Neck             1
Right Shoulder   2
Right Elbow      3
Right Wrist      4
Left Shoulder    5
Left Elbow       6
Left Wrist       7
Right Hip        8
Right Knee       9
Right Ankle      10
Left Hip         11
Left Knee        12
Left Ankle       13
Right Eye        14
Left Eye         15
Right Ear        16
Left Ear         17
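Putting the steps together, the sketch below runs the estimator on a single image, prints the key points of each detected person, and saves an annotated copy of the image. The results.poses list, the key_points attribute of each pose, and the draw_poses() helper follow the alwaysAI starter applications; treat these names as assumptions if your edgeIQ version differs. "person.jpg" is a placeholder path.

import cv2
import edgeiq

pose_estimator = edgeiq.PoseEstimation("alwaysai/human-pose")
pose_estimator.load(engine=edgeiq.Engine.DNN)

# "person.jpg" is a placeholder; substitute any image containing people
image = cv2.imread("person.jpg")
results = pose_estimator.estimate(image)

# Each entry in results.poses corresponds to one detected person
for index, pose in enumerate(results.poses):
    print("Person {}".format(index))
    print(pose.key_points)

# Overlay the detected key points on the original image and save it
image = results.draw_poses(image)
cv2.imwrite("person_with_poses.jpg", image)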