FAQ

How do I change my application model?

To change the model an application uses, you must refer to the new model in the application code and add it to the application's configuration file (alwaysai.app.json).

alwaysAI's middleware, EdgeIQ, takes a Model ID as an argument to a Category class. In the alwaysAI realtime object detection sample app, this looks like:

obj_detect = edgeiq.ObjectDetection("alwaysai/MobileNetSSD")

Here, alwaysai/MobileNetSSD is the Model ID and ObjectDetection is the Category.

To change the model in this app to another model of the same category, simply replace the Model ID.

To add the new model to alwaysai.app.json, at the command line run:

aai app models add <Model ID>

The CLI will confirm that the model has been added to the config file. When the app is deployed via aai app deploy, the new model is installed automatically.
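As a sketch of what this step produces, the snippet below parses a hypothetical alwaysai.app.json and checks that the model entry is present. The exact field names and layout of the config file are assumptions for illustration, not the documented schema.

```python
import json

# Hypothetical contents of alwaysai.app.json after running
# `aai app models add alwaysai/MobileNetSSD`; the "name" and "models"
# fields shown here are assumptions for illustration only.
config_text = """
{
    "name": "realtime_object_detector",
    "models": {
        "alwaysai/MobileNetSSD": 1
    }
}
"""

config = json.loads(config_text)

# Confirm the model was recorded in the config file
print("alwaysai/MobileNetSSD" in config.get("models", {}))  # True
```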

How do I use my own custom model?

You can upload a custom model to your account via the alwaysAI model catalog on the web or via the command line. Both methods are interactive and provide built-in help.

Models can be public or private. Public models are available to the broader alwaysAI community and require additional information as they are listed in our model catalog. Private models are only available to your account.

Once a model has been successfully added, it is immediately available for use just like any other model in the alwaysAI catalog.

How do I customize my application?

If you know Python you can build or customize an application on the alwaysAI platform. While the most common customizations are changing models and video sources, you are really only limited by your imagination.

EdgeIQ services are well-documented and simplify many common (and complex) computer vision activities including basic manipulation of images and video. You can combine EdgeIQ services with your own Python code, and of course we are adding new services all the time.
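As one example of mixing framework output with your own logic, the sketch below filters detection results by confidence. The dictionaries stand in for model output; they are illustrative and do not reflect the exact EdgeIQ result objects, so treat the shapes here as assumptions.

```python
# Framework-agnostic sketch: post-process detection results with your own
# Python code. The dicts below are stand-ins for model output, not the
# actual EdgeIQ result objects.
detections = [
    {"label": "person", "confidence": 0.91},
    {"label": "chair", "confidence": 0.42},
    {"label": "dog", "confidence": 0.77},
]

def keep_confident(results, threshold=0.5):
    """Keep only detections at or above the confidence threshold."""
    return [r for r in results if r["confidence"] >= threshold]

confident = keep_confident(detections)
print([d["label"] for d in confident])  # ['person', 'dog']
```

The same pattern applies whether the results come from EdgeIQ or any other inference library: treat the model output as data and layer your application logic on top.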

The alwaysAI command line interface (CLI) is designed to support host-based development. With the CLI you specify your app's key dependencies and the target, such as an edge device, on which you want the app to run. Then a simple aai app start command executes the application.
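Putting the CLI commands mentioned in this FAQ together, a typical development loop might look like the following. This is a sketch using only the commands named above; flags and output are not shown because they may vary.

```shell
# Add a model to the app's configuration (alwaysai.app.json)
aai app models add alwaysai/MobileNetSSD

# Deploy the app and its models to the configured target
aai app deploy

# Run the application on the target
aai app start
```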

What devices can I use?

alwaysAI supports Docker images on ARM32, ARM64, and x86 architectures. From a practical perspective this means if you run Linux on your Snapdragon, Jetson, Raspberry Pi, or other embedded computer, you can use alwaysAI.

The alwaysAI stack is just under 1GB and most models in our catalog are under 100MB, with many specifically optimized for resource-constrained environments. Our model catalog provides model size and performance against a common benchmark platform for each model so that you can choose models that work for your situation.

Finally, we take advantage of hardware accelerators wherever possible. Our website includes a list of specific accelerators that we support; if you have a platform we should be supporting please let us know!

How can I get help or ask questions?

All members of the beta program are invited to join the alwaysAI Community on Slack. The alwaysAI team will respond during business hours, and we hope that all of you will participate in the conversations about features, issues, and what you create with the software.

Check your email for an invite from the alwaysAI Community on Slack. If you can’t find your invitation, contact Nichole.

Can I invite friends or colleagues to join the beta?

Absolutely. You’re welcome to share a link to the beta application page: learn.alwaysai.co/beta