Configuration & Packaging¶
alwaysAI makes it easy to customize your application. You can change the engine and accelerator, swap the models your app uses, and add external Python packages and other dependencies. You can also package your application as a container, which can be run directly with Docker.
Change the Engine and Accelerator¶
An Engine and Accelerator must be specified for the core CV services. The engine is the software backend running the model, and the accelerator is the target hardware the engine is running on. Specify the engine and accelerator in the load() function. If only the engine is provided, the accelerator listed as default for that engine will be used. You can refer to the alwaysAI Model Catalog and the Supported Devices page to see if your model or device is supported by an engine and accelerator.
OpenCV’s DNN Engine¶
OpenCV’s DNN engine (DNN) will run on all supported devices and supports most models, so it’s a great starting point. The default accelerator is GPU, which attempts to run the model on the GPU and falls back to the CPU if a GPU is not available. If desired, you can manually select the CPU accelerator, which always runs the model on the CPU.
Set the engine parameter of your core CV service as follows to use OpenCV’s DNN engine:
cv_service.load(engine=edgeiq.Engine.DNN)
or:
cv_service.load(
    engine=edgeiq.Engine.DNN, accelerator=edgeiq.Accelerator.CPU)
OpenCV’s CUDA Engine¶
OpenCV’s CUDA engine (DNN_CUDA) will perform inferencing on a CUDA-supported GPU. The default accelerator is NVIDIA, and for some models an additional performance boost comes with using the NVIDIA_FP16 accelerator.
Set the engine parameter of your core CV service as follows to use OpenCV’s CUDA engine:
cv_service.load(engine=edgeiq.Engine.DNN_CUDA)
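To try the half-precision boost mentioned above, pass the accelerator explicitly. A minimal sketch (whether NVIDIA_FP16 helps depends on your model and GPU):
cv_service.load(
    engine=edgeiq.Engine.DNN_CUDA, accelerator=edgeiq.Accelerator.NVIDIA_FP16)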
NVIDIA’s TensorRT Engine¶
NVIDIA’s TensorRT engine (TENSOR_RT) is a high-performance model optimizer and inference engine for CUDA-supported GPUs.
Set the engine parameter of your core CV service as follows to use NVIDIA’s TensorRT engine:
cv_service.load(engine=edgeiq.Engine.TENSOR_RT)
Hailo’s HailoRT Engine¶
Hailo’s HailoRT engine (HAILO_RT) is a high-performance model optimizer and inference engine for Hailo’s accelerators.
Set the engine parameter of your core CV service as follows to use Hailo’s HailoRT engine:
cv_service.load(engine=edgeiq.Engine.HAILO_RT)
The HailoRT libraries must be installed on the host machine to run an application on a Hailo accelerator, which requires an additional installation step. Running the command below on a Linux (amd64) host machine installs the HailoRT firmware and Hailo PCIe drivers.
aai hailo install-pcie-driver
To uninstall the HailoRT firmware and Hailo PCIe drivers, run the command below.
aai hailo uninstall-pcie-driver
OpenCV’s OpenVINO Engine for NCS2 and Myriad Processors¶
OpenCV’s OpenVINO engine (DNN_OPENVINO) enables inferencing on the Intel Neural Compute Stick 2 as well as other Myriad-based devices. It supports models built with the OpenVINO model optimizer as well as some other frameworks.
Set the engine parameter of your core CV service as follows to use OpenCV’s OpenVINO engine:
cv_service.load(engine=edgeiq.Engine.DNN_OPENVINO)
Change the Computer Vision Model¶
The alwaysAI model catalog provides pre-trained machine learning models that enable developers to quickly prototype a computer vision application without the need to create a custom model first. If you can’t find a model that suits your needs, you can upload a pre-trained model, or train a new model. To change the model in your application:
Update the app model dependency¶
Either navigate to the alwaysAI Model Catalog and find the model you want to add, or select from your personal models, and click on the model name to see its details. Copy the model ID and use it with the aai app models add command in your terminal. For example, to use the MobileNet SSD model, the full command would be:
$ aai app models add alwaysai/ssd_mobilenet_v1_coco_2018_01_28
To remove your old model, run the aai app models remove command:
$ aai app models remove alwaysai/mobilenet_ssd
To see the models that your app depends on, run the aai app models show command:
$ aai app models show
Models:
alwaysai/ssd_mobilenet_v1_coco_2018_01_28@2
For more details on adding models to your application, including using unpublished (locally trained) models, you can visit this page.
Use the model in your application¶
The next step to using the new model in your application is simply to pass the model ID to the constructor of the object that will be using it. Classification, ObjectDetection, and PoseEstimation all take a model ID as input. Paste the model ID into your app.py file as an input parameter:
obj_detect = edgeiq.ObjectDetection("alwaysai/ssd_mobilenet_v1_coco_2018_01_28")
Run the aai app install command to make sure your models are installed and the latest application code is available on your device.
If the model does not get installed or is not supported by the object you are using, you will get an error back telling you so.
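Putting these steps together, a minimal sketch of using the new model in app.py might look like this (the engine choice is just one of the options from the sections above):
obj_detect = edgeiq.ObjectDetection("alwaysai/ssd_mobilenet_v1_coco_2018_01_28")
obj_detect.load(engine=edgeiq.Engine.DNN)
<get image>
results = obj_detect.detect_objects(image, confidence_level=.5)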
Update Application edgeIQ Version¶
The edgeIQ API is frequently updated with new features, bug fixes, and enhancements. When a new version of edgeIQ is released, it is recommended that you update your applications to the latest version. The easiest way to update your edgeIQ version is to delete the Dockerfile in the app directory and run the aai app configure command to generate a new Dockerfile using the latest release.
$ rm Dockerfile
$ aai app configure
If you’ve customized your Dockerfile with additional commands, then the best option is to just edit the FROM line in the Dockerfile.
For example, change:
FROM alwaysai/edgeiq:${ALWAYSAI_HW}-0.15.1
to:
FROM alwaysai/edgeiq:${ALWAYSAI_HW}-0.17.1
Read the edgeIQ Release Notes to learn about the latest releases.
Once you’ve changed your Dockerfile, you must run aai app install for the changes to take effect.
Handle Application Dependencies¶
Once you start building more complex alwaysAI applications, you’ll likely use dependencies that are not included in the edgeIQ Docker image. There are two types of dependencies that are supported:
Python Dependencies: These are packages that can be installed using pip.
System Dependencies: These are packages that can be installed using apt-get.
Python Dependencies¶
To add a Python dependency to your app, add a requirements.txt file to your app directory and add the requirement, along with the version if necessary. For example, if your app requires the Requests Python module your requirements.txt would look like this:
Requests==2.22.0
During the app install command, the dependencies are installed in a Python virtual environment for your app.
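Once installed, the dependency can be imported in your app like any other package. A minimal sketch using Requests (the URL is just a placeholder):
import requests

response = requests.get("https://example.com/status")
print(response.status_code)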
System Dependencies¶
To add a system dependency to your app, add additional commands to your Dockerfile. For example, if your app depends on the VLC package, your Dockerfile would look like this:
ARG ALWAYSAI_HW="default"
FROM alwaysai/edgeiq:${ALWAYSAI_HW}-<version>
RUN apt-get update && apt-get install -y vlc
During the app install command the Docker image is rebuilt using the updated Dockerfile. Your app will run in a container based on the new Docker image.
edgeIQ Runtime Environment Base Images¶
The edgeIQ Runtime Environment is available as Docker images from alwaysAI Docker Hub. The format of the images is as follows:
alwaysai/edgeiq:<architecture/device>-<version>
The image is built based on Debian Buster for the following architectures:
armv7hf
aarch64
amd64
Additionally, an image is built for NVIDIA Jetson devices with the prefix “jetson”, e.g.:
alwaysai/edgeiq:jetson-0.17.1
The latest release will be tagged with “latest” as the version. However, it is recommended to use a specific version tag and upgrade when new versions come out, since the APIs are constantly being updated and improved. Using the “latest” tag may lead to surprises when a new version is pulled down unexpectedly.
Select the Architecture or Device¶
The alwaysAI CLI takes advantage of Docker build arguments to automatically pick the right architecture or device. This is done by setting an argument before the FROM line of the Dockerfile which the CLI can overwrite:
ARG ALWAYSAI_HW="default"
FROM alwaysai/edgeiq:${ALWAYSAI_HW}-0.17.1
If you’d like to build the Dockerfile without using the CLI, just change ALWAYSAI_HW to match the architecture or name of your target device.
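For example, a manual build targeting a Jetson device might look like this (the image tag my-app is only an illustration; --build-arg is a standard Docker option):
$ docker build --build-arg ALWAYSAI_HW=jetson -t my-app .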
Production Mode¶
Production Mode supports stand-alone, field-deployed applications via long-lived authentication tokens. Devices in Production Mode do not need to be refreshed on a regular basis and can auto-start with the support of the aai app deploy CLI command.
Note
This feature is only available for Basic or Premium users. Upgrade your account on the Dashboard.
Put a device in Production Mode¶
To put an already-configured device in Production Mode for a specific project, you must re-configure it as a new device. To do this, first delete the device from the project via your web account. You can then run:
aai app configure
from within the project’s directory on your host computer and choose “production” as an option when prompted.
For devices that have not already been configured, simply configure as you would normally, choosing the “production” option when prompted.
Take a device out of Production Mode¶
To take a device out of Production Mode, you must delete it and then re-configure it as a new development device. To do this, follow the same procedure as putting a device into Production Mode, except when prompted during the configuration step choose “development”.
Deploy to production¶
Stand-alone production containers are specific to a device class. The alwaysAI CLI provides a straightforward way to build containers for the specific device or devices you are using, and then to deploy the stand-alone container with your application to the device.
To deploy a stand-alone production application to a specific device, start by running:
aai app configure
in your project directory, and choose the device on which you want to deploy. This can be a new device or one you have already configured for Production Mode (see above for instructions on how to do this).
Production Mode takes advantage of Docker Compose, so you will need to install it on your edge device before going any further. To do this, SSH into the device and run:
pip3 install docker-compose
To ensure Docker Compose is in your PATH, run the following while still SSH’d into the device:
sudo ln -s /home/${USER}/.local/bin/docker-compose /usr/local/bin/docker-compose
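You can confirm Docker Compose is installed and on your PATH by running:
$ docker-compose --version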
Next, exit the SSH session and run:
aai app install
Note that this may take some time depending on your device.
Finally, run:
aai app deploy
Your application is now ready for field deployment. You can open the alwaysAI Streamer at http://<hostname or IP address>:5000, and your application will auto-start if you power cycle the device.
Since the app is running as a detached container, the logs won’t automatically show up in your terminal. To view the app’s logs, run:
aai app deploy --logs
To stop your app, run:
aai app deploy --stop
The aai app deploy command will create a docker-compose.yaml file if one doesn’t exist, with parameters specific to your target device. You can edit this file to change the configuration of your app, and running aai app install after your edits will install the new configurations on the device. Learn more on the Docker Compose reference page.
Publish analytics to the cloud¶
alwaysAI enables you to send analytics to the cloud when running on Production Mode devices. To use analytics, you must create a new production device, as described above, using aai app configure. Use this device in any projects using analytics.
To enable analytics, add the following section to your alwaysai.app.json file:
"analytics": {
"enable_cloud_publish": true
}
In your app.py file, make the appropriate call to publish_analytics() for the type of results you want to publish to the cloud (see the following sections for more details).
The following sections describe how to publish results from your application. Once analytics are streaming to the cloud, you can read the data stream by accessing our secure web API. The analytics endpoint is a standard WebSocket endpoint located at wss://analytics.alwaysai.co?projectId=[project-id], which you can connect to using common web libraries. To connect you will need an apiKey, which you can get from your alwaysAI administrator.
There are two different ways to set the apiKey:
In the query parameters:
wscat -c "wss://analytics.alwaysai.co?projectId=[project-id]&apiKey=[api-key]"
In the header:
wscat -H "x-api-key: [api-key]" -c "wss://analytics.alwaysai.co?projectId=[project-id]"
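If you prefer to consume the stream from Python, here is a minimal sketch assuming the third-party websockets package (pip install websockets); the projectId and apiKey values are placeholders you must supply:
import asyncio

import websockets


async def read_analytics(project_id, api_key):
    # Pass the apiKey as a query parameter, mirroring the wscat examples above
    uri = ("wss://analytics.alwaysai.co"
           f"?projectId={project_id}&apiKey={api_key}")
    async with websockets.connect(uri) as ws:
        async for message in ws:
            print(message)  # each message is a stringified JSON result


asyncio.run(read_analytics("[project-id]", "[api-key]"))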
Object Detection
Analytics are published using the publish_analytics() function:
obj_detect = edgeiq.ObjectDetection(
    'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
obj_detect.load(engine=edgeiq.Engine.DNN)
<get image>
results = obj_detect.detect_objects(image, confidence_level=.5)
obj_detect.publish_analytics(results, tag='my_tag')
The data is streamed as stringified JSON objects with the following format:
{
    "timestamp": "<timestamp>",
    "device_id": "<device_id>",
    "type": "ObjectDetectionResults",
    "base_service": "ObjectDetection",
    "tag": "<tag>",
    "results": {
        "predictions": [
            {
                "box": {
                    "start_x": 120, "start_y": 349, "end_x": 454, "end_y": 599, "center_x": 287.0, "center_y": 474.0
                },
                "confidence": 0.9988538026809692,
                "label": "tvmonitor",
                "index": 20
            },
            ...
        ],
        "duration": 0.016431093215942383,
        "model_id": "alwaysai/mobilenet_ssd"
    }
}
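Each message arrives as stringified JSON, so it can be decoded with the standard library. A minimal sketch, assuming message holds one Object Detection result read from the stream above:
import json

data = json.loads(message)  # one message from the analytics stream
for prediction in data["results"]["predictions"]:
    print(prediction["label"], prediction["confidence"])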
Object Tracking
For object tracking, results are published using the original core CV service used to generate the results. For example, if Object Detection was the input to the tracker, use publish_analytics() from Object Detection to publish the results:
obj_detect = edgeiq.ObjectDetection(
    'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
obj_detect.load(engine=edgeiq.Engine.DNN)
tracker = edgeiq.KalmanTracker()
<get image>
results = obj_detect.detect_objects(image, confidence_level=.5)
tracked_objects = tracker.update(results.predictions)
obj_detect.publish_analytics(tracked_objects, tag='my_tag')
The data is streamed as stringified JSON objects with the following format:
{
    "timestamp": "<timestamp>",
    "device_id": "<device_id>",
    "type": "TrackingResults",
    "base_service": "ObjectDetection",
    "tag": "<tag>",
    "results": {
        "objects": [
            {
                "id": 0,
                "ObjectDetectionPrediction": {
                    "box": {
                        "start_x": 110, "start_y": 408, "end_x": 458, "end_y": 667, "center_x": 284.0, "center_y": 537.5
                    },
                    "confidence": 0.9970658421516418,
                    "label": "tvmonitor",
                    "index": 20,
                    "estimate": [284.0, 537.5]
                }
            },
            ...
        ],
        "tracking_algorithm": "KalmanTracker",
        "model_id": "alwaysai/mobilenet_ssd"
    }
}
Custom Analytics
The publish_analytics() function enables publishing custom data to the analytics data stream. The input is any JSON-serializable object. For example:
custom_analytics = {
    "cpu_temp": cpu_temp,
    "fan_speed": fan_speed
}
edgeiq.publish_analytics(custom_analytics)
The above analytics would be streamed with the following format:
{
    "timestamp": "<timestamp>",
    "device_id": "<device_id>",
    "type": "CustomEvent",
    "base_service": null,
    "tag": "<tag>",
    "results": {
        "cpu_temp": <cpu_temp>,
        "fan_speed": <fan_speed>
    }
}
Build a stand-alone Docker image suitable for Docker Hub¶
If you would like to build a production image for a device (or device class) but deploy via Docker Hub or a similar repository, you can use the aai app package command. The process is similar to using aai app deploy. Start by running:
aai app configure
in the project you wish to package, then run:
aai app install
Finally, run:
aai app package
Your stand-alone production Docker image is ready but is not yet running on your device. Run the app using Docker commands, and push it to Docker Hub to easily pull and run it on other devices! Note that the image will only work on devices of the same architecture, and aai app configure must be run for each device you’d like to run the app on.
Run your stand-alone Docker image¶
If you packaged your app on a remote device, run the following command to work directly on the target device:
$ aai app shell --no-container
You should be able to see your image using the docker images command. Your output might look like this:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<image_name> latest e45e70a16ca0 1 minute ago 1.33GB
To run the app in a docker container, use the following command on the target device:
$ docker run --network=host --privileged -d -v /dev:/dev -v ~/.config/alwaysai:/root/.config/alwaysai <image_name>
The --network=host flag tells Docker to map the device’s network interfaces into the container. This enables access to the internet and the Streamer from outside the container.
The --privileged flag is needed when working with USB devices.
The -d flag runs the container detached from the CLI.
The -v /dev:/dev flag mounts the devices directory into the container so that cameras and USB devices can be accessed.
The -v ~/.config/alwaysai:/root/.config/alwaysai flag mounts the credentials directory into the container so edgeIQ can authenticate.
For NVIDIA Jetson devices, you’ll also need the following options (a combined command is shown after the list):
--runtime=nvidia ensures the NVIDIA drivers are loaded.
--ipc=host is required when using JetsonVideoStream.
--volume /tmp/argus_socket:/tmp/argus_socket is required when using JetsonVideoStream.
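Putting the options together for a Jetson device, the full command would look like this:
$ docker run --network=host --privileged -d --runtime=nvidia --ipc=host --volume /tmp/argus_socket:/tmp/argus_socket -v /dev:/dev -v ~/.config/alwaysai:/root/.config/alwaysai <image_name>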
To learn more about these options, visit the Docker Run reference page.
Once your app is up and running you can manage it with docker container commands. A couple of useful commands are:
docker container ls will give a list of all running containers.
docker container kill <id> will stop the container with the given ID.
Learn more about these commands at the Docker container reference page.
You can open the alwaysAI Streamer at http://<hostname or IP address>:5000.