Build Your Own Posture Corrector with Pose Estimation

By Lila Mullany Jul 09, 2020

Many of us spend most of our days hunched over a desk, leaning forward looking at a computer screen, or slumped down in our chair. If you’re like me, you’re only reminded of your bad posture when your neck or shoulders hurt hours later, or you have a splitting migraine. Wouldn’t it be great if someone could remind you to sit up straight? The good news is, you can remind yourself! In this tutorial, we’ll build a posture corrector app using a pose estimation model available from alwaysAI.

To complete the tutorial, you must have:

  1. An alwaysAI account (it’s free!)
  2. alwaysAI set up on your machine (also free)
  3. A text editor such as Sublime Text or an IDE such as PyCharm (both offer free versions), or whatever else you prefer to code in

Please see the alwaysAI blog for more background on computer vision, developing models, how to change models, and more. You can find step-by-step tutorials on development machine set up for Mac and Windows on the blog as well.

All of the code from this tutorial is available on GitHub.

Let’s get started! 

After you have your free account and have set up your developer environment, download the starter apps using this link before proceeding with the rest of the tutorial. We’ll build the posture corrector by modifying the ‘realtime_pose_detector’ starter app. You may want to copy the contents into a new directory so you retain the original code.

There will be three main parts to this tutorial:

  1. The configuration file
  2. The main application
  3. The utility class for detecting poor posture

Creation of the Configuration File

Create this file as specified in this tutorial. For this example app we need just one configuration variable (you can add more if you like): scale, an int used to tune the sensitivity of the posture functions.
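
As a sketch, the file and the code that reads it can be as small as the following (the filename config.json and the parsing details are assumptions; match them to your starter app's conventions):

```python
import json

# Contents you would save as config.json; "scale" is the one
# variable this app needs (assumed filename and layout).
CONFIG_TEXT = '{"scale": 1}'

# In app.py you would open the file instead; parsing an inline
# string here keeps the sketch self-contained.
config = json.loads(CONFIG_TEXT)
scale = config["scale"]
print(scale)  # -> 1
```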

Now the configuration is all set up!

Creation of the App

Add the following import statements to the top of your app.py file:

import os
import json
from posture import CheckPosture

We need 'json' to parse the configuration file, and ‘CheckPosture’ is the utility class for detecting poor posture, which we’ll define later in this tutorial.

NOTE: You can change the engine and the accelerator you use in this app depending on your deployment environment. Since I am developing on a Mac, I chose the engine to be ‘DNN’, and so I changed the engine parameter to be ‘edgeiq.Engine.DNN’. I also changed the accelerator to be ‘CPU’. You can read more about the accelerator options here, and more about the engine options here.

Next, remove the following lines from app.py:

text.append("Key Points:")
for key_point in pose.key_points:
    text.append(str(key_point))

Add the following lines to replace the ones you just removed (right under the ‘text.append’ statements):

# update the instance key_points to check the posture
posture.set_key_points(pose.key_points)

# play a reminder if you are not sitting up straight
correct_posture = posture.correct_posture()
if not correct_posture:
    text.append(posture.build_message())

    # make a sound to alert the user to improper posture
    print("\a")

We used an unknown object type just there and called some functions on it that we haven’t defined yet. We’ll do that in the last section!

Move the following lines to directly follow the end of the above code (directly after the 'for' loop, and right before the 'finally'):

streamer.send_data(results.draw_poses(frame), text)

fps.update()

if streamer.check_exit():
    break

Creating the Posture Utility Class

Create a new file called ‘posture.py’. Define the class using the line:

class CheckPosture:

Create the constructor for the class. We’ll have three instance variables: key_points, scale, and message.

def __init__(self, scale=1, key_points=None):
    # default to None rather than {} so instances don't share
    # one mutable dictionary
    self.key_points = key_points if key_points is not None else {}
    self.scale = scale
    self.message = ""

We used defaults for scale and key_points in case the user doesn’t provide them. The message variable is initialized to an empty string and will store feedback the user can act on to correct their posture. You already saw key_points get set in the app.py section; this variable lets the functions in posture.py make determinations about the user’s posture. Finally, scale makes the calculations in posture.py more or less sensitive: decreasing it tightens the thresholds, and increasing it loosens them.

Now we need to write some functions for posture.py. 

Create getters and setters for the key_points, message, and scale variables:

def set_key_points(self, key_points):
    self.key_points = key_points

def get_key_points(self):
    return self.key_points

def set_message(self, message):
    self.message = message

def get_message(self):
    return self.message

def set_scale(self, scale):
    self.scale = scale

def get_scale(self):
    return self.scale

Now we need functions to actually check the posture. My bad posture habits include leaning forward toward my computer screen, slouching down in my chair, and tilting my head down to look at notes, so I defined methods for detecting these use cases. You can use the same principle of coordinate comparison to define your own custom methods, if you prefer.

First, we’ll define the method to detect leaning forward, as shown in the image below. This method compares an ear and a shoulder on the same side of the body. It first checks that the ear and shoulder are both visible (i.e. the coordinate we want to use is not -1) on either the left or right side, and then checks whether the shoulder’s x-coordinate exceeds the ear’s x-coordinate by more than a scaled threshold.

[Image: lean_combined — leaning forward comparison]

def check_lean_forward(self):
    if self.key_points['Left Shoulder'].x != -1 and self.key_points['Left Ear'].x != -1 \
            and self.key_points['Left Shoulder'].x >= (self.key_points['Left Ear'].x + (self.scale * 150)):
        return False

    if self.key_points['Right Shoulder'].x != -1 and self.key_points['Right Ear'].x != -1 \
            and self.key_points['Right Shoulder'].x >= (self.key_points['Right Ear'].x + (self.scale * 160)):
        return False

    return True

NOTE: the coordinate origin for ‘alwaysai/human-pose’ is (0, 0) at the upper left corner. Also, the frame size will differ depending on whether you are using a Streamer input video or images, and this will also impact the coordinates. I developed using a Streamer object, and the frame size was (720, 1280). For all of these functions, you’ll most likely need to play around with the coordinate differences, or modify the scale, as every person will have a different posture baseline. The principle of coordinate arithmetic remains the same, however, and can be used to change app behavior in other pose estimation use cases! You could also use angles or a percentage of the frame, so as not to be tied to absolute numbers. Feel free to re-work these methods and submit a pull request to the GitHub repo!
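
For example, the angle-based variant suggested above could look something like this. KeyPoint is a stand-in for the keypoint objects edgeiq returns, and the 20-degree threshold is purely an assumption to illustrate the idea:

```python
import math
from collections import namedtuple

# Stand-in for edgeiq's keypoint objects; only x and y are needed here.
KeyPoint = namedtuple("KeyPoint", ["x", "y"])

def lean_angle(ear, shoulder):
    """Angle (degrees) of the ear away from vertical above the shoulder."""
    dx = ear.x - shoulder.x
    dy = shoulder.y - ear.y  # y grows downward, so flip the sign
    return math.degrees(math.atan2(dx, dy))

def check_lean_forward_by_angle(ear, shoulder, max_angle=20):
    # Good posture as long as the ear stays within max_angle of
    # vertical; insensitive to frame size, unlike a pixel offset.
    return abs(lean_angle(ear, shoulder)) <= max_angle

# Illustrative coordinates within a (720, 1280) frame.
upright = check_lean_forward_by_angle(KeyPoint(400, 200), KeyPoint(410, 400))
leaning = check_lean_forward_by_angle(KeyPoint(300, 250), KeyPoint(420, 400))
print(upright, leaning)  # -> True False
```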

Next, we’ll define the method for slouching down in a chair, such as in the image below.

[Image: slump_comparison — slumping down in a chair]

In this method, we’ll use the y-coordinates of the neck and nose keypoints to detect when the nose gets too close to the neck, which happens when someone is hunched down in their chair. For me, about 150 pixels was the minimum distance I wanted to allow: if my nose is less than 150 pixels from my neck, I want to be notified. Again, these hardcoded values can be scaled with the ‘scale’ factor or modified as suggested in the note above.

def check_slump(self):
    if self.key_points['Neck'].y != -1 and self.key_points['Nose'].y != -1 \
            and (self.key_points['Nose'].y >= self.key_points['Neck'].y - (self.scale * 150)):
        return False

    return True
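
To sanity-check this logic off-device, you can mirror it in a standalone function and feed it mocked keypoints (KeyPoint is a stand-in for edgeiq's keypoint objects, and the pixel values are illustrative only):

```python
from collections import namedtuple

KeyPoint = namedtuple("KeyPoint", ["x", "y"])

def check_slump(key_points, scale=1):
    # Mirrors CheckPosture.check_slump: slumping when the nose sinks
    # to within (scale * 150) pixels of the neck.
    if key_points['Neck'].y != -1 and key_points['Nose'].y != -1 \
            and key_points['Nose'].y >= key_points['Neck'].y - (scale * 150):
        return False
    return True

# Nose 200 px above the neck: fine. Nose only 100 px above: slumping.
upright_pts = {'Nose': KeyPoint(640, 200), 'Neck': KeyPoint(640, 400)}
slumped_pts = {'Nose': KeyPoint(640, 300), 'Neck': KeyPoint(640, 400)}
print(check_slump(upright_pts), check_slump(slumped_pts))  # -> True False
```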

Now, we’ll define the method to detect when a head is tilted down, as shown in the image below. This method uses the ear and eye keypoints to detect when the y-coordinate of a given eye drops too far below the ear on the same side of the body (remember that y grows toward the bottom of the image).

[Image: head_drop — head tilted down]

def check_head_drop(self):
    if self.key_points['Left Eye'].y != -1 and self.key_points['Left Ear'].y != -1 \
            and self.key_points['Left Eye'].y > (self.key_points['Left Ear'].y + (self.scale * 15)):
        return False

    if self.key_points['Right Eye'].y != -1 and self.key_points['Right Ear'].y != -1 \
            and self.key_points['Right Eye'].y > (self.key_points['Right Ear'].y + (self.scale * 15)):
        return False

    return True
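
The same check can be sanity-checked off-device with mocked keypoints; the loop-based sketch below is equivalent to the two-branch version above (KeyPoint and the pixel values are stand-ins, not edgeiq objects):

```python
from collections import namedtuple

KeyPoint = namedtuple("KeyPoint", ["x", "y"])

def check_head_drop(key_points, scale=1):
    # Flag a dropped head when either eye sits more than
    # (scale * 15) pixels below the ear on the same side.
    for side in ("Left", "Right"):
        eye, ear = key_points[f"{side} Eye"], key_points[f"{side} Ear"]
        if eye.y != -1 and ear.y != -1 and eye.y > ear.y + scale * 15:
            return False
    return True

level = {'Left Eye': KeyPoint(500, 210), 'Left Ear': KeyPoint(520, 215),
         'Right Eye': KeyPoint(560, 210), 'Right Ear': KeyPoint(540, 215)}
dropped = {'Left Eye': KeyPoint(500, 260), 'Left Ear': KeyPoint(520, 215),
           'Right Eye': KeyPoint(560, 260), 'Right Ear': KeyPoint(540, 215)}
print(check_head_drop(level), check_head_drop(dropped))  # -> True False
```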

Now, we’ll just make a method that runs all the posture checks. It uses Python’s built-in all function, which returns True only if every element of the iterable it is given is truthy. Since each posture method we defined returns False when poor posture is detected, the method we define now returns False if any one of those methods returns False.

def correct_posture(self):
    return all([self.check_slump(), self.check_head_drop(), self.check_lean_forward()])
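
In miniature, the pattern behaves like a logical AND over the list of checks:

```python
# all() is True only when every element is truthy, so a single
# failed posture check is enough to flag poor posture.
checks = [True, True, False]    # e.g. the lean-forward check failed
print(all(checks))              # -> False
print(all([True, True, True]))  # -> True
```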

And finally, we’ll build one method that returns a customized string that tells the user how they can modify their posture. This method is called in app.py and the result is displayed on the streamer’s text.

def build_message(self):
    current_message = ""
    if not self.check_head_drop():
        current_message += "Lift up your head!\n"
    if not self.check_lean_forward():
        current_message += "Lean back!\n"
    if not self.check_slump():
        current_message += "Sit up in your chair, you're slumping!\n"
    self.message = current_message
    return current_message

That’s it! Now you have a working posture correcting app. You can customize this app by creating your own posture detection methods, using different keypoint coordinates, making the build_message return different helpful hints, and creating your own custom audio file to use instead of the 'print("\a")'. You could even alter the app to send you a text message instead of printing to the console!

If you want to run this app on a Jetson Nano, update your Dockerfile and the accelerator and engine arguments in app.py as described in this article.

Now, just start your app (visit this page if you need a refresher on how to do this for your current set up), and open your web browser to ‘localhost:5000’ to see the posture corrector in action!

For more pose estimation tutorials, including a YMCA example app, visit our blog page!
