How can I get help or ask questions?

You are welcome to join our Discord server, check out our support page, explore the tutorials on our blog, or reach out to us at contact@alwaysai.co.

I’m confused: with alwaysAI, do I code on my PC or Mac, on my edge device, or online?

The alwaysAI CLI helps you write your code anywhere, and execute it on your local machine or edge device. Most of our developers write code on their laptops and then execute the applications on edge devices. But you can also write the code on the edge device itself, or run the whole thing on your Mac, Windows, or Linux laptop.

What is an edge device?

When we refer to an edge device, we’re thinking of a single board computer (SBC) such as a Raspberry Pi or Jetson Nano. This type of system is usually deployed for embedded use cases where it runs a specific application in a remote or standalone situation, and is often low power with limited resources compared to a more general purpose computer.

What edge devices can I use?

Your edge device needs to be compatible with Docker, so ARM32, ARM64 and x86 devices running Linux (including the Raspberry Pi 4, NVIDIA Jetson, ASUS Tinker Board, and systems based on Qualcomm’s Snapdragon processors) can all be used with alwaysAI. If you want to run an application that uses a real-time video feed, your device will also need to be camera-enabled.

Do I need an edge device to use the alwaysAI platform?

Not at all — you can execute your application on your laptop running Mac, Windows, or Linux.

I’m struggling to get my Raspberry Pi configured. Can you help?

Yes! Simply download our modified Raspberry Pi OS image for the Raspberry Pi 3B+ (and later), which includes everything you need to start using alwaysAI.

What is a model?

Deep learning models, sometimes called “networks,” are the heart of a modern computer vision system. They take in image/video information, perform an inference and then output results. Each alwaysAI application uses a specific model to return information about the contents of a video source.

How do I change the computer vision models in my application?

Just follow these instructions to change the model(s) used in your application.

How do I use my own custom model?

You can use a custom model by uploading it on alwaysAI, where it will be stored as a private model, meaning that it will only be available to your account. Once a model has been successfully added, it is immediately available for use just like any other model in the alwaysAI catalog.

You can upload your custom model by following these instructions.

How do I choose a model for my project?

Every model in the alwaysAI model catalog is categorized according to its purpose: image classification, object detection, pose estimation, etc. Once you’ve chosen a specific category, you can read the list of labels associated with each model to see what a certain model has been trained to recognize: e.g., dogs, potted plants, bikes, etc. These can be general (e.g., “dog”) or specific (e.g., “golden retriever”) depending on how the model was trained.

Our model catalog provides additional information about each model, including performance and model size. Performance is a relative measure of how quickly a model returns predictions. It is hardware-specific, so use it as a guide rather than an absolute. Model size is the amount of memory in MB that a model requires, which can be useful for edge environments where storage is limited. Changing models in an application is quick and easy, so after narrowing down a few models that look good for your specific application, you can experiment and find the best choice.
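As an illustration of this selection process (the catalog entries below are made up for the example, not real alwaysAI catalog data), you might narrow down candidates by category, label, and size:

```python
# Hypothetical catalog entries; the real alwaysAI catalog is browsed on the web.
catalog = [
    {"id": "alice/detector_a", "purpose": "ObjectDetection",
     "labels": ["person", "dog", "bicycle"], "size_mb": 23},
    {"id": "bob/classifier_b", "purpose": "Classification",
     "labels": ["golden retriever", "tabby cat"], "size_mb": 90},
    {"id": "carol/detector_c", "purpose": "ObjectDetection",
     "labels": ["car", "truck"], "size_mb": 12},
]

# Keep object detectors that recognize dogs and are small enough for the device.
candidates = [m for m in catalog
              if m["purpose"] == "ObjectDetection"
              and "dog" in m["labels"]
              and m["size_mb"] <= 50]

print([m["id"] for m in candidates])  # ['alice/detector_a']
```

From a shortlist like this, you would then try each candidate in your application and compare results.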

I’m not seeing a model that will work for my application. Can you help?

Yes — you can train your own object detection model! Check out our model training documentation to get started.

What types of models can we train with the alwaysAI CLI?

You can train object detection models. Three model architectures are supported: mobilenet_ssd, yolov3, and Faster R-CNN. Watch our model training hacky hour and the discussion on model types for more information.

What is the difference between image classification and object detection?

Image classification returns the dominant object or objects in an image (it can tell you whether a specific thing is present), whereas object detection identifies things in an image or video stream and also locates them in the frame.
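The difference shows up in the shape of the results each model type returns. As an illustration (these are hypothetical result structures for the example, not the exact edgeIQ API), a classifier answers "what is in this image?" while a detector also answers "where is it?":

```python
from dataclasses import dataclass

@dataclass
class ClassificationResult:
    label: str
    confidence: float  # "what is in this image?"

@dataclass
class DetectionResult:
    label: str
    confidence: float
    box: tuple  # (x_min, y_min, x_max, y_max): "and where in the frame is it?"

# Classification: the image as a whole is judged to contain a dog.
cls = ClassificationResult(label="dog", confidence=0.92)

# Detection: a dog occupies a specific region of the frame.
det = DetectionResult(label="dog", confidence=0.88, box=(40, 60, 220, 310))

print(cls.label, det.box)
```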

Do you have a facial detection model?

Yes, we do. However, this differs from facial recognition. Facial detection can tell you whether a human face is detected in a video or image, while facial recognition can identify a particular person.

Can I use multiple models in a single application?

Yes, you can create multiple model instances in one application. Here is a blog that uses two object detection models, and this blog outlines using a classification model alongside an object detection model.
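The detection-plus-classification pattern can be sketched as a two-stage pipeline. In the sketch below, the two stub functions stand in for real model calls; they are placeholders, not edgeIQ API methods:

```python
def detect_objects(frame):
    # Stub for an object detection model call: returns a coarse label
    # and a bounding box for each object found in the frame.
    return [{"label": "animal", "box": (40, 60, 220, 310)}]

def classify_region(frame, box):
    # Stub for a classification model call: refines the cropped region
    # of the frame into a more specific label.
    return "golden retriever"

def run_pipeline(frame):
    # Stage 1 finds regions of interest; stage 2 classifies each region.
    results = []
    for detection in detect_objects(frame):
        fine_label = classify_region(frame, detection["box"])
        results.append({**detection, "fine_label": fine_label})
    return results

print(run_pipeline(frame=None))
```

The same structure applies to two detectors: run both on the frame and merge their result lists before drawing or streaming.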

Can I use multiple camera input streams in a single application?

Yes, although most edge devices can only handle a single camera without performance suffering. If you want to read from multiple video streams or cameras, simply create another VideoStream object and read in a frame from each. You can send both streams as output to the Streamer by concatenating the two frames together and passing the result to the call to send_data.

Note: You'll have to import numpy to concatenate the frames!
    import numpy as np
    frame = np.concatenate((frame1, frame2), axis=1)
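As a self-contained sketch (with dummy arrays standing in for real camera frames), note that side-by-side concatenation requires both frames to share the same height:

```python
import numpy as np

# Dummy 480x640 3-channel frames standing in for two camera reads.
frame1 = np.zeros((480, 640, 3), dtype=np.uint8)
frame2 = np.ones((480, 640, 3), dtype=np.uint8)

# axis=1 places the frames side by side; axis=0 would stack them vertically.
combined = np.concatenate((frame1, frame2), axis=1)
print(combined.shape)  # (480, 1280, 3)
```

The combined array can then be passed to send_data like any single frame.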

Can I customize the streamer?

The Streamer is not customizable at the moment. However, you can build your own and customize it as you’d like. Find example code on GitHub; both the Image Capture Dashboard and the Video Streamer are examples of customized streamers.

Why might someone want to convert a model into the TensorRT format from mobilenet_ssd?

If you are planning on running your model on an NVIDIA device, you can convert the model to a TensorRT model to optimize the performance. You can contact us to convert your model to TensorRT format.

Does alwaysAI support the Google Coral Accelerator?

Not at this time. We support Myriad devices and NVIDIA Jetson devices for inferencing.

What is the recommendation to improve accuracy for a CV model?

The first step to improving a model's accuracy is improving the dataset. The best way to do this is to evaluate your production environment and iterate on the dataset through the model training process until the model fits that environment well.

Does alwaysAI support Keras?

Currently we do not support Keras models. Contact us on Discord if you have a Keras model you would like to use with alwaysAI.

What devices support TensorRT?

You can use TensorRT models on NVIDIA Jetson devices.

Is it possible to train a Yolo model?

Yes, as of March 2021, it is possible to train a Yolo model with the Model Training Toolkit.

Can you use the CLI directly on the Jetson Nano?

The CLI works cross-platform, so you can install it on the Nano (or other compatible edge device) using the Linux setup guide. You can also deploy to your Nano, meaning you can do your work on a development machine (Windows, Mac or Linux laptop), then run the application on the Nano.

Note: While you can run the CLI directly on an edge device, we do not advise attempting model retraining or dataset annotation on non-development machines.

How can I interact with my computer vision application via a custom GUI web interface?

We have a number of applications on our GitHub that you can use to make your own custom streamer. You can also watch our hacky hour on How to Build an Interactive Web Application with alwaysAI and our following office hour.

Do I need to install Docker to use alwaysAI?

Docker is needed for applications running on an edge device. It comes preinstalled in the images we provide for the Raspberry Pi and in JetPack, so you should not need to install it separately if you are using those images. If you are using an edge device in production mode, i.e. running aai app deploy, you will need to install Docker Compose on your edge device. If you would like to use the model training toolkit to train a custom model, you will need to install either Docker Desktop (if you are on Windows or Mac) or Docker Compose (if you are on Linux, or prefer it to Docker Desktop).

Can a single edge device such as a Jetson Nano or Raspberry Pi support multiple cameras?

The recommendation is to use one camera per edge device. However, we have seen up to two cameras supported by a single edge device. Contact us on our Discord server to learn more.

What edge device do you recommend alwaysAI users start with?

The Raspberry Pi is an inexpensive way to get started; however, the NVIDIA Jetson Nano 2GB is only slightly more expensive and will give you access to a GPU. The 4GB NVIDIA Jetson Nano will offer you a bit more memory. For deployments that need more processing power, you may need a device like the NVIDIA Jetson Xavier NX.