Configuration & Packaging

alwaysAI makes it easy to customize your application. You can change the engine and accelerator, swap the models your app uses, and add external Python packages and other system dependencies. You can also package your application as a container, which can be run directly with Docker.

Changing the Engine and Accelerator

An Engine and Accelerator must be specified for the core CV services. The engine is the software backend running the model, and the accelerator is the target hardware the engine runs on. Specify both in the load() function; if only the engine is provided, the default accelerator for that engine will be used. Refer to the model catalog and the supported devices page to see whether your model or device is supported by a given engine and accelerator.

OpenCV’s DNN Engine

OpenCV’s DNN engine (DNN) will run on all supported devices and supports most models, so it’s a great starting point. The default accelerator is GPU, which attempts to run the model on the GPU and falls back to the CPU if a GPU is not available. If desired, you can explicitly select the CPU accelerator to run the model on the CPU.

Set the engine parameter of your core CV service as follows to use OpenCV’s DNN engine:

cv_service.load(engine=edgeiq.Engine.DNN)

or:

cv_service.load(
    engine=edgeiq.Engine.DNN, accelerator=edgeiq.Accelerator.CPU)

OpenCV’s CUDA Engine

OpenCV’s CUDA engine (DNN_CUDA) will perform inferencing on a CUDA-supported GPU. The default accelerator is NVIDIA, and for some models an additional performance boost comes with using the NVIDIA_FP16 accelerator.

Set the engine parameter of your core CV service as follows to use OpenCV’s CUDA engine:

cv_service.load(engine=edgeiq.Engine.DNN_CUDA)
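
or, to try the half-precision NVIDIA_FP16 accelerator mentioned above on supported models:

cv_service.load(
    engine=edgeiq.Engine.DNN_CUDA, accelerator=edgeiq.Accelerator.NVIDIA_FP16)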

NVIDIA’s TensorRT Engine

NVIDIA’s TensorRT engine (TENSOR_RT) is a high-performance model optimizer and inference engine for CUDA-supported GPUs.

Set the engine parameter of your core CV service as follows to use NVIDIA’s TensorRT engine:

cv_service.load(engine=edgeiq.Engine.TENSOR_RT)

OpenCV’s OpenVINO Engine for NCS2 and Myriad Processors

OpenCV’s OpenVINO engine (DNN_OPENVINO) enables inferencing on the Intel Neural Compute Stick 2 as well as other Myriad-based devices. It supports models converted with the OpenVINO model optimizer, as well as models from some other frameworks.

Set the engine parameter of your core CV service as follows to use OpenCV’s OpenVINO engine:

cv_service.load(engine=edgeiq.Engine.DNN_OPENVINO)
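
or, to select the Myriad accelerator explicitly (assuming the MYRIAD value is available in your edgeIQ version):

cv_service.load(
    engine=edgeiq.Engine.DNN_OPENVINO, accelerator=edgeiq.Accelerator.MYRIAD)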

Changing the Computer Vision Model

The alwaysAI model catalog provides pre-trained machine learning models that enable developers to quickly prototype a computer vision application without the need to create a custom model first. If you can’t find a model that suits your needs, you can upload a pre-trained model, or train a new model. To change the model in your application:

Update the app model dependency

Either navigate to the model catalog and find the model you want to add, or select from your personal models, and click on the model name to see details. Copy the model ID and pass it to the aai app models add command in your terminal. For example, to use the MobileNet SSD model, the full command would be:

$ aai app models add alwaysai/ssd_mobilenet_v1_coco_2018_01_28

To remove your old model, run the aai app models remove command:

$ aai app models remove alwaysai/mobilenet_ssd

To see the models that your app depends on, run the aai app show command:

$ aai app show
Models:
  alwaysai/ssd_mobilenet_v1_coco_2018_01_28@2

Scripts:
  start => "python app.py"

For more details on adding models to your application, including using unpublished (locally trained) models, you can visit this page.

Use the model in your application

Next, pass the model ID to the constructor of the object that will use it. Classification, ObjectDetection, and PoseEstimation all take a model ID as input. Paste the model ID into your app.py file as the input parameter:

obj_detect = edgeiq.ObjectDetection("alwaysai/ssd_mobilenet_v1_coco_2018_01_28")
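
Putting this together, a minimal detection pass might look like the following sketch (the camera index and the 0.5 confidence threshold are illustrative assumptions):

import edgeiq

def main():
    # Instantiate the detector with the model ID added above
    obj_detect = edgeiq.ObjectDetection(
        "alwaysai/ssd_mobilenet_v1_coco_2018_01_28")
    # Load onto an engine/accelerator as described in the sections above
    obj_detect.load(engine=edgeiq.Engine.DNN)

    # Read a frame from the first attached camera and print detections
    with edgeiq.WebcamVideoStream(cam=0) as video_stream:
        frame = video_stream.read()
        results = obj_detect.detect(frame, confidence_level=0.5)
        for prediction in results.predictions:
            print(prediction.label, prediction.confidence)

if __name__ == "__main__":
    main()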

Run the aai app install command to make sure your models are installed and the latest application code is available on your device:
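
$ aai app install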

If the model is not installed or is not supported by the object you are using, you will get an error telling you so.

Updating Application edgeIQ Version

The edgeIQ API is frequently updated with new features, bug fixes, and enhancements. When a new version of edgeIQ is released, it is recommended that you update your applications to the latest version. The easiest way to update your edgeIQ version is to delete the Dockerfile in the app directory and run the aai app configure command to generate a new Dockerfile using the latest release.

$ rm Dockerfile
$ aai app configure

If you’ve customized your Dockerfile with additional commands, the best option is to edit the FROM line in the Dockerfile directly.

For example, change:

FROM alwaysai/edgeiq:${ALWAYSAI_HW}-0.15.1

to:

FROM alwaysai/edgeiq:${ALWAYSAI_HW}-0.17.1

Read the edgeIQ Release Notes to learn about the latest releases.

Handling Application Dependencies

Once you start building more complex alwaysAI applications, you’ll likely use dependencies that are not included in the edgeIQ Docker image. There are two types of dependencies that are supported:

  1. Python Dependencies: These are packages that can be installed using pip.

  2. System Dependencies: These are packages that can be installed using apt-get.

Python Dependencies

To add a Python dependency to your app, add a requirements.txt file to your app directory and list the requirement, along with the version if necessary. For example, if your app requires the Requests Python module, your requirements.txt would look like this:

Requests==2.22.0

When you run aai app install, the dependencies are installed into a Python virtual environment for your app.
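
Once installed, the package can be imported in app.py like any other module. A minimal sketch, assuming a hypothetical endpoint URL and payload:

import requests

# POST a detection summary to a hypothetical endpoint;
# the URL and payload shape are illustrative assumptions
response = requests.post(
    "http://example.com/api/detections",
    json={"label": "person", "confidence": 0.91},
    timeout=5,
)
response.raise_for_status()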

System Dependencies

To add a system dependency to your app, add additional commands to your Dockerfile. For example, if your app depends on the VLC package, your Dockerfile would look like this:

ARG ALWAYSAI_HW="default"
FROM alwaysai/edgeiq:${ALWAYSAI_HW}-<version>
RUN apt-get update && apt-get install -y vlc

When you run aai app install, the Docker image is rebuilt using the updated Dockerfile. Your app will then run in a container based on the new image.

edgeIQ Runtime Environment Base Images

The edgeIQ Runtime Environment is available as Docker images from alwaysAI Docker Hub. The format of the images is as follows:

alwaysai/edgeiq:<architecture/device>-<version>

The image is built based on Debian Buster for the following architectures:

  • armv7hf

  • aarch64

  • amd64

Additionally, an image is built for NVIDIA Jetson devices with the prefix “jetson”, e.g.:

alwaysai/edgeiq:jetson-0.17.1

The latest release will be tagged with “latest” as the version. However, it is recommended to use a specific version tag and upgrade when new versions come out, since the APIs are constantly being updated and improved. Using the “latest” tag may lead to surprises when a new version is pulled down unexpectedly.

Selecting the Architecture or Device

The alwaysAI CLI takes advantage of Docker build arguments to automatically pick the right architecture or device. This is done by setting an argument before the FROM line of the Dockerfile, which the CLI can override:

ARG ALWAYSAI_HW="default"
FROM alwaysai/edgeiq:${ALWAYSAI_HW}-0.17.1

If you’d like to build the Dockerfile without using the CLI, just change ALWAYSAI_HW to match the architecture or name of your target device.
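
For example, to build an image for an NVIDIA Jetson device directly with Docker (the my-app tag is illustrative):

$ docker build --build-arg ALWAYSAI_HW=jetson -t my-app .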

Packaging an App as a Docker Image

Package Your App

To build the image, you first need to install the app on the device you’d like to run it on. Run the aai app configure CLI command to select the target device:

$ aai app configure

Next, run the aai app install CLI command to build the runtime environment and install the Python dependencies and models:

$ aai app install

Build the Docker image on the target device:

$ aai app package --tag <image_name>

Run Your App

If you packaged your app on a remote device, run the following command to work directly on the target device:

$ aai app shell --no-container

You should be able to see your image using the docker images command. Your output might look like this:

$ docker images
REPOSITORY       TAG      IMAGE ID      CREATED       SIZE
<image_name>     latest   e45e70a16ca0  1 minute ago  1.33GB

To run the app in a Docker container, use the following command on the target device:

$ docker run --network=host --privileged -d -v /dev:/dev <image_name>

  • The --network=host flag tells Docker to map the device’s network interfaces into the container. This enables access to the internet and the Streamer from outside the container.

  • The --privileged flag is needed when working with USB devices.

  • The -d flag runs the container detached from the CLI.

  • The -v /dev:/dev flag mounts the devices directory into the container so that cameras and USB devices can be accessed.

To learn more about these options, visit the Docker Run reference page.

Once your app is up and running, you can manage it with docker container commands. A couple of useful commands are:

  • docker container ls will give a list of all running containers.

  • docker container kill <id> will stop the container with the given ID.

Learn more about these commands at the Docker container reference page.

To access the Streamer, use the device’s hostname or IP address in your development machine’s browser. For example:

http://raspberrypi:5000
