How to Get Started with the NVIDIA Jetson TX2 on alwaysAI

By Taiga Ishida • May 06, 2020

The Jetson TX2 is part of NVIDIA's line of embedded AI modules, enabling fast computation at the edge. The TX2 is a step up from the Nano and will give you faster inference times in your AI applications. NVIDIA bills the Jetson TX2 as its fastest, most power-efficient embedded AI computing device: a 7.5-watt supercomputer on a module that brings true AI computing to the edge.

Please note: this setup guide requires a Linux computer. VM support is unverified.

What you will need:

  • NVIDIA Jetson TX2 Developer Kit (including the USB Micro-B to USB A cable and AC adapter)

  • A computer running Linux

  • A monitor, keyboard, and mouse

  • NVIDIA SDK Manager installed on the Linux computer (requires a free NVIDIA Developer account)

  • An alwaysAI account and the alwaysAI CLI

How To Flash The Device

  1. Configure the Jetson TX2.

  2. Use NVIDIA SDK Manager to Flash the Device.

Configuring the Jetson TX2

  • Take the USB Micro-B to USB A cable included in the developer kit and connect your Jetson TX2 to the Linux computer.

  • Connect a monitor, keyboard, and mouse.

  • Take the AC adapter included in the developer kit and connect your TX2 to an outlet.

  • Put the TX2 into Force Recovery Mode (steps listed below).

TX2 Force Recovery Mode

Starting with the device powered off:

  1. Press and hold down the Force Recovery button.

  2. Press and hold down the Power button.

  3. Release the Power button, then release the Force Recovery button.
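
After following the steps above, you can verify from the Linux computer that the TX2 is actually in Force Recovery Mode. The sketch below assumes the USB Micro-B cable is already connected; a Jetson in recovery mode enumerates as an NVIDIA APX USB device:

```shell
# List USB devices and look for the Jetson in recovery mode.
# A line mentioning an NVIDIA "APX" device indicates the TX2
# entered Force Recovery Mode successfully.
lsusb | grep -i nvidia
```

If nothing shows up, repeat the button sequence with the device powered off and check the cable connection.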

Flashing the Device

  1. Open NVIDIA SDK Manager.

  2. Log in with your NVIDIA Developer account credentials.

  3. Step 01. Configure your settings to match the picture below.

Screenshot of the device flashing JetsonTX2

  4. Step 02. Accept the license and continue to Step 03.

Screenshot of the device flashing. accept license of the Jetson TX2

  5. Step 03. Enter your password and wait for the components to finish downloading.

Screenshot of the device flashing. Password of the Jetson TX2

Screenshot of the device flashing. target component of the Jetson TX2

  6. When a pop-up opens, choose manual setup and press Flash.

Screenshot of the Jetson TX2 device flashing manual

  7. Once the flashing is complete, keep an eye on the monitor connected to the TX2. A prompt will open for the initial setup.

  8. After you are done with the initial setup on the TX2, come back to SDK Manager and fill in the credentials to install the SDK components.

Screenshot of the device flashing manager and credentials of the Jetson TX2

  9. After the process is complete, the TX2 is set up to run alwaysAI applications.

Running alwaysAI Applications on TX2


Screenshot of the alwaysAI applications of the Jetson TX2

Using the alwaysAI CLI, we can download the starter apps to get an app running quickly on the TX2.

Screenshot of the alwaysAI get starter applications of the Jetson TX2

We offer two starter apps specifically for NVIDIA devices. For this guide we will use nvidia_autonomous_vehicle_semantic_segmentation.

Screenshot of two starter apps specifically for Nvidia and Jetson TX2
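
The download step above boils down to two commands. As a sketch, assuming the CLI places the apps in an `alwaysai-starter-apps` directory (check the command's output if your version names it differently):

```shell
# Fetch the alwaysAI starter apps into the current directory
aai get-starter-apps

# Move into the NVIDIA semantic segmentation starter app
cd alwaysai-starter-apps/nvidia_autonomous_vehicle_semantic_segmentation
```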

We need to change the Dockerfile to reference the runtime container for the TX2.

Screenshot of the Dockerfile of the Jetson TX2
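
The change amounts to pointing the base image at the Jetson build of the alwaysAI runtime. A minimal sketch, with an illustrative tag only; match the tag to the edgeIQ version your project actually uses:

```dockerfile
# Use the Jetson variant of the alwaysAI edgeIQ runtime image
# (tag shown is illustrative; pick the tag matching your edgeIQ version)
FROM alwaysai/edgeiq:jetson-latest
```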

Now we can run the application.
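
Running the app goes through the usual alwaysAI CLI flow. Command names below reflect the CLI at the time of writing; consult `aai --help` if they have changed:

```shell
# Point the project at the TX2 as the target device
aai app configure

# Build the app and install it on the TX2
aai app deploy

# Start the application
aai app start
```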


Click the link, or enter http://localhost:5000 in a web browser, to view the application.

Screenshot of the alwaysAI output with the Jetson TX2

Now you are set up for fast inference at the edge with alwaysAI and the NVIDIA Jetson TX2!




