alwaysAI Now Supports the Popular OAK Cameras
alwaysAI now offers full support for OAK cameras, including on-device inferencing, spatial AI, and custom model creation. Developers can train a computer vision model with the Model Training Toolkit, build custom applications around it, and deploy them on OAK hardware.
Created by Luxonis, OAK hardware enables developers to capture images and perform machine learning inference (classification, object detection, and pose estimation) directly on the camera device.
Spatial AI and OAK-D
Spatial artificial intelligence (AI) is the ability of an AI system to reason not only about what an object is, but also about how far objects are from the camera and from each other. The OAK-D camera uses its 12 MP RGB camera for deep neural inference and a stereo camera pair for depth estimation, and models run directly on the camera without the need for an external processor.
What Is a Stereo Camera?
Much like human sight, which uses a pair of eyes to perceive depth, a stereo camera pairs two identical cameras separated by a fixed distance (the baseline) and combines the left and right images to produce a 3D map of the scene. Combined with computer vision, stereo cameras can measure the depth of an environment, that is, how far objects are from the camera and from each other.
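As a concrete illustration, stereo depth comes from triangulation: a point's depth Z equals f·B/d, where f is the focal length in pixels, B is the baseline, and d is the disparity (the pixel offset of the point between the left and right images). The sketch below uses made-up camera parameters, not OAK-D calibration values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth (meters) from stereo disparity (pixels).

    Z = f * B / d: the larger the disparity, the closer the point.
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point is effectively at infinity
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only (not real OAK-D calibration):
focal_px = 800.0     # focal length expressed in pixels
baseline_m = 0.075   # 7.5 cm between the left and right cameras

near = depth_from_disparity(60.0, focal_px, baseline_m)  # large disparity
far = depth_from_disparity(6.0, focal_px, baseline_m)    # small disparity
print(near, far)  # 1.0 10.0
```

Note how a tenfold drop in disparity puts the point ten times farther away, which is also why depth precision degrades with distance.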
Common use cases for depth cameras include:
- Navigation and Mapping - Where is my robot, and how do I move it around?
- Collision Avoidance - Avoid hitting objects with my robot.
- Scene Understanding - Where are the objects, and how are they moving relative to my robot and each other?
- Object Manipulation - Have the robot perform a task such as grasping an object.
How to Use alwaysAI with the OAK Hardware
- Convert the model
When training a custom model using the alwaysAI Model Training Toolkit, you’ll need to convert the model to ensure it is compatible with the OAK hardware. To convert your model for the OAK cameras, use the following command.
$ aai model convert <username/modelname> --format oak --output_id <new_modelname>
- Run the application using the OAK context manager together with the edgeIQ Streamer, as in the following code:
with edgeiq.Oak('<Enter Model Name Here>',
                video_mode=edgeiq.VideoMode.preview) as oak_camera, \
        edgeiq.Streamer() as streamer:
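The with statement above opens the camera and the streamer as two context managers, which guarantees both are released even if the app exits with an error. The pattern itself is plain Python and can be illustrated without OAK hardware; the Resource class below is a stand-in for a device, not part of edgeIQ:

```python
class Resource:
    """Stand-in for a device such as a camera or streamer (not an edgeIQ class)."""

    def __init__(self, name, log):
        self.name = name
        self.log = log

    def __enter__(self):
        self.log.append(f"open {self.name}")
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs even if the body raises, so the device is always released.
        self.log.append(f"close {self.name}")
        return False  # do not suppress exceptions

log = []
with Resource("oak_camera", log) as cam, Resource("streamer", log) as streamer:
    log.append("run app loop")

print(log)
# ['open oak_camera', 'open streamer', 'run app loop', 'close streamer', 'close oak_camera']
```

Note that the resources close in reverse order of opening, so the streamer shuts down before the camera it depends on.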
Note: OAK currently supports the following sensor resolutions:
- res_1080 = SensorResolution.THE_1080_P
- res_4k = SensorResolution.THE_4_K
- res_12_mp = SensorResolution.THE_12_MP
- Retrieve frames and results with the following calls:
frame = oak_camera.get_frame()
results = oak_camera.get_model_result(confidence_level=.6)
It is also important to note that at this time, the only model-data call available for OAK hardware, “get_model_data()”, is non-blocking: it returns None when no new model data is available. Because typical models run much slower than the OAK camera, it is normal for frames to come back faster than model results. A blocking variant, which would wait until new model data is available before returning results, is planned for a future release of edgeIQ.
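Because the call is non-blocking, an app loop should handle the None case explicitly, typically by reusing the most recent results until new ones arrive. Here is a hardware-free sketch of that polling pattern; SlowModel is a stand-in that only produces a result every few frames, not an edgeIQ class:

```python
class SlowModel:
    """Stand-in for a model that runs slower than the camera (not edgeIQ)."""

    def __init__(self, frames_per_result=3):
        self.frames_per_result = frames_per_result
        self.calls = 0

    def get_model_data(self):
        # Non-blocking: returns None unless a new result is ready.
        self.calls += 1
        if self.calls % self.frames_per_result == 0:
            return f"result@frame{self.calls}"
        return None

model = SlowModel()
last_results = None
overlaid = []
for frame in range(1, 7):          # camera frames arrive every iteration
    new = model.get_model_data()
    if new is not None:            # a new result is only ready occasionally
        last_results = new
    overlaid.append((frame, last_results))  # reuse the latest results

print(overlaid)
# [(1, None), (2, None), (3, 'result@frame3'), (4, 'result@frame3'),
#  (5, 'result@frame3'), (6, 'result@frame6')]
```

The key point is the `if new is not None` check: the display loop keeps running at camera speed while detections update at model speed.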
Running Models on OAK Cameras with alwaysAI
The OAK camera can be connected to either a USB 3 or a USB 2 hub. If you are using USB 2, make sure you adjust the API to indicate this so your applications work correctly (the API defaults to USB 3).
Use the alwaysAI CLI to build and start these applications:
- Configure: aai app configure
- Build: aai app install
- Run: aai app start