
How to Build a License Plate Tracker with alwaysAI

By Lila Mullany Aug 12, 2020

In this tutorial, we’ll cover how to create your own license plate tracker using the new license plate detection model, which was created using alwaysAI’s model training tool.

If you want to read more about how the license plate detection model was built, read this blog! To read more about model training in general, you can visit our model training overview article.

You can find a subset of the dataset used in the creation of the ‘alwaysai/vehicle_license_mobilenet_ssd’ model, as well as step-by-step instructions on how to get started with model training, here.

To complete the tutorial, sign up for a free alwaysAI account and follow the instructions to set up your machine.

Visit the alwaysAI blog for more background on computer vision, developing models, how to change models, and more. The finished code from this tutorial is available on GitHub.

Let’s get started!

We'll start by getting the project set up on your local machine, and then we'll walk through the code in more detail.

We’ll do this by creating a fresh project from the GitHub URL. After you've created your account and set up your development machine, open your terminal and navigate to a directory where you would like your project to live (you can create a new folder using your file navigator GUI, or via your terminal).

Make sure you're logged into the alwaysAI CLI by typing 

aai user login

and following the prompts to log in if you are not already. Once you're logged in, you can create your project in your desired directory by typing

aai app configure

Select 'Create a new project', and then select 'From a git repo'. Then, enter the URL for the GitHub repo associated with this project: 

https://github.com/alwaysai/license-plate-detector

Once the clone is complete, you can view all of the project files locally. To finish the setup process, type

aai app install

into the command line, and for now you can select 'Your local computer' for the destination. If you would like to set up an edge device to deploy the application to, you can read more about that here.

Finally, you can run your application with 

aai app start

And you can view your application running at 'localhost:5000'!

Now that you've seen the app in action, let's walk through it. 

At the top, we import the necessary libraries. Note that we import edgeiq, which is the alwaysAI Python API.

import time
import edgeiq

The beginning of the application is very similar to most of the other applications on the alwaysAI GitHub: we first load a core computer vision service instance, in this case ObjectDetection, selecting the desired model and engine.

def main():
    obj_detect = edgeiq.ObjectDetection("alwaysai/vehicle_license_mobilenet_ssd")
    obj_detect.load(engine=edgeiq.Engine.DNN)
    print("Loaded model:\n{}\n".format(obj_detect.model_id))
    print("Engine: {}".format(obj_detect.engine))
    print("Accelerator: {}\n".format(obj_detect.accelerator))
    print("Labels:\n{}\n".format(obj_detect.labels))

    ...
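
To get a feel for what the detector returns before we add tracking, here's a minimal, hypothetical sketch (the image file and the use of OpenCV to load it are assumptions for illustration, not part of the app) that runs the model on a single image and prints each ObjectDetectionPrediction:

import cv2
import edgeiq

obj_detect = edgeiq.ObjectDetection("alwaysai/vehicle_license_mobilenet_ssd")
obj_detect.load(engine=edgeiq.Engine.DNN)

# 'test_car.jpg' is a placeholder test image for this sketch
frame = cv2.imread("test_car.jpg")
results = obj_detect.detect_objects(frame, confidence_level=.5)

for prediction in results.predictions:
    # Each prediction carries a label, a confidence score,
    # and a bounding box
    print(prediction.label, prediction.confidence, prediction.box)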

Next, we define the variable tracker, a centroid tracker instance. This will associate each ObjectDetectionPrediction's box with an object ID, based on the distance between the new box and the previous prediction. We also create an object to track the frames per second (FPS).

    tracker = edgeiq.CentroidTracker(deregister_frames=5)
    fps = edgeiq.FPS()
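
To make the tracker's behavior concrete, here is a short, hypothetical sketch (not part of the app) of what update() returns, assuming 'results' holds detections from obj_detect as in the loop we'll build below:

# update() takes the latest predictions and returns a dictionary mapping
# a persistent object ID to that object's most recent
# ObjectDetectionPrediction; an ID is dropped once its object has been
# missing for deregister_frames (here, 5) consecutive frames
objects = tracker.update(results.predictions)

for object_id, prediction in objects.items():
    print(object_id, prediction.label, prediction.confidence)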

The next block of code sets up a 'try' block, along with a ‘finally’ counterpart, which will always be executed regardless of how the ‘try’ block exits. Note that the contents of the 'try' block are just a placeholder 'pass' for now, in order to show the skeleton of the application; we'll cover the content that goes in that 'try' block next!

    try:
        # blank for now, we'll fill this in in the next section!
        pass

    finally:
        fps.stop()
        streamer.close()
        print("elapsed time: {:.2f}".format(fps.get_elapsed_seconds()))
        print("approx. FPS: {:.2f}".format(fps.compute_fps()))

        print("Program Ending")

Now that the configuration is done and we have a skeleton of an app to work with, we’ll fill in the object tracking and file import portions. All of the rest of the code will go into the ‘try’ block we created in the last step.

Inside the ‘try’ block, we have the following code:

        video_paths = edgeiq.list_files(base_path="./video/", valid_exts=".mp4")
        streamer = edgeiq.Streamer().setup()

        for video_path in video_paths:
            with edgeiq.FileVideoStream(video_path) as video_stream:
                try:
                    # Allow the video stream to warm up
                    time.sleep(2.0)
                    fps.start()

                    # loop detection
                    while video_stream.more():
                        frame = video_stream.read()
                        ...

This code uses the edgeiq method list_files() to get a list of all the files stored in the ‘video’ folder that have the ‘.mp4’ file extension. Then it iterates over each of the file paths in that returned list and runs the nested code on each, which we’ll cover in the following section. You can see that we also instantiate a Streamer instance, which can be used to inspect the results of your code in the browser: frames and text can be sent to the Streamer instance for viewing. The variable called ‘predictions’ that is defined at the beginning of the ‘while’ loop in the next code block is a list used to store the predictions sent to the Streamer instance.
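
For illustration, list_files() simply returns the matching file paths, so with two clips in the ‘video’ folder (the file names below are hypothetical) you would see something like:

video_paths = edgeiq.list_files(base_path="./video/", valid_exts=".mp4")
print(video_paths)
# e.g. ['./video/traffic_1.mp4', './video/traffic_2.mp4']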

Next, we’ll add in the tracking logic. For every file in the ‘video’ folder, we’ll use that video as an input stream and detect and track license plates and vehicles. 

                        ...
                        predictions = []

                        results = obj_detect.detect_objects(
                            frame,
                            confidence_level=.5
                        )

                        # Generate text to display on streamer
                        text = ["Model: {}".format(obj_detect.model_id)]
                        text.append(
                            "Inference time: {:1.3f} s".format(
                                results.duration
                            )
                        )
                        text.append("Objects:")

                        # Update tracker results with the new predictions
                        objects = tracker.update(results.predictions)

                        if len(objects) == 0:
                            text.append("no predictions")
                        else:
                            # Create a new prediction list
                            for (object_id, prediction) in objects.items():
                                text.append("{}_{}: {:2.2f}%".format(
                                    prediction.label,
                                    object_id,
                                    prediction.confidence * 100
                                ))

                                # Keep just the base label (an ID may have
                                # been appended on a previous frame) and
                                # append this object's ID so each tracked
                                # object is displayed with its own ID
                                prediction.label = '{} {}'.format(
                                    prediction.label.split(" ")[0],
                                    object_id
                                )

                                predictions.append(prediction)

                        # Mark up the image and update text
                        frame = edgeiq.markup_image(
                            frame, predictions,
                            show_labels=True,
                            show_confidences=False,
                            colors=obj_detect.colors,
                            line_thickness=4,
                            font_size=1,
                            font_thickness=4
                        )

                        # Send the marked-up frame and the display text
                        # to the Streamer
                        streamer.send_data(frame, text)

                        fps.update()

                        if streamer.check_exit():
                            break

                except edgeiq.NoMoreFrames:
                    continue

In this code, we process each frame: we run object detection on the input image, generate text to display on the Streamer instance, and then pass the results to the tracker to generate object IDs. If there are tracking results, we generate new labels and append each result to the predictions list. Each of these predictions is then used to mark up the image, which is also sent to the Streamer instance along with the display text.

That’s it! 


Check out the quickstart guide and our documentation on running training if you want to get started with model training. You can use either our freely available, ready-made datasets, found here, or a larger version here, to build your own model that you can test out using the example app you've just built!

Contributions to the article made by Todd Gleed and Jason Koo
