InferenceServer

class HttpInferenceServer(host, port, purpose)

Start an HTTP inference server.

Typical usage:

server = HttpInferenceServer('0.0.0.0', 5002,
                             'ObjectDetection')
try:
    server.start()
finally:
    server.close()
Parameters
  • host (str) – The IP address or hostname of the server.

  • port (int) – The port of the server.

  • purpose (str) – Purpose of the inference server. Supported values: ['ObjectDetection']

start()

Start the inference server.

close()

Close the inference server.
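
The full server lifecycle might look like the sketch below; it assumes start() blocks while serving requests and that the class is importable from the edgeiq.inference_server module referenced later on this page.

from edgeiq.inference_server import HttpInferenceServer


def main():
    # Serve object-detection inferences on all interfaces, port 5002.
    server = HttpInferenceServer('0.0.0.0', 5002, 'ObjectDetection')
    try:
        # Assumption: start() blocks while handling client requests.
        server.start()
    except KeyboardInterrupt:
        pass
    finally:
        # Always release server resources on shutdown.
        server.close()


if __name__ == '__main__':
    main()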

class ObjectDetectionHttpClient(model_id, server_url)

Analyze and discover objects within an image by processing it on a hosted edgeiq.inference_server.HttpInferenceServer.

Typical usage:

obj_detect = edgeiq.ObjectDetectionHttpClient(
        'alwaysai/ssd_mobilenet_v1_coco_2018_01_28',
        'http://localhost:5002')
obj_detect.load(engine=edgeiq.Engine.DNN)

# <get image>
results = obj_detect.detect_objects(image, confidence_level=0.5)
image = edgeiq.markup_image(
        image, results.predictions, colors=obj_detect.colors)

text = []
for prediction in results.predictions:
    text.append("{}: {:2.2f}%".format(
        prediction.label, prediction.confidence * 100))
Parameters
  • model_id (str) – The ID of the model you want to use for object detection.

  • server_url (str) – The URL for connecting to the hosted HttpInferenceServer where the inferences are performed.

load(engine=Engine.DNN, accelerator=Accelerator.DEFAULT)

Load the model to an engine and accelerator.

Parameters
  • engine (Engine) – The engine to load the model to.

  • accelerator (Accelerator) – The accelerator to load the model to.
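
A hedged sketch of calling load() with the defaults shown in the signature; the model ID and server URL mirror the usage example above.

import edgeiq

obj_detect = edgeiq.ObjectDetectionHttpClient(
        'alwaysai/ssd_mobilenet_v1_coco_2018_01_28',
        'http://localhost:5002')

# Explicitly passing the defaults from the load() signature.
obj_detect.load(
        engine=edgeiq.Engine.DNN,
        accelerator=edgeiq.Accelerator.DEFAULT)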

detect_objects(image, confidence_level=0.3, overlap_threshold=0.3)

Perform Object Detection on an image by sending it to the HttpInferenceServer.

Parameters
  • image (ndarray) – The image to analyze.

  • confidence_level (float) – The minimum confidence level required to successfully accept a detection.

  • overlap_threshold (float) – The IoU threshold above which non-maximum suppression rejects overlapping detections when running YOLO models. A higher value results in more overlapping bounding boxes being returned.

Return type

ObjectDetectionResults
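
A hedged tuning sketch: raising confidence_level and lowering overlap_threshold should reduce duplicate detections of the same object. The label and confidence attributes come from the usage example above; the box attribute is an assumption about ObjectDetectionPrediction.

# image is an ndarray obtained elsewhere; obj_detect is a loaded client.
results = obj_detect.detect_objects(
        image, confidence_level=0.6, overlap_threshold=0.2)

for prediction in results.predictions:
    # box is assumed to be the prediction's bounding box attribute.
    print(prediction.label, prediction.confidence, prediction.box)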

detect_objects_batch(images, confidence_level=0.3, overlap_threshold=0.3)

Perform Object Detection on a list of images by sending them to the HttpInferenceServer.

Parameters
  • images (List[ndarray]) – The list of images to analyze.

  • confidence_level (float) – The minimum confidence level required to successfully accept a detection.

  • overlap_threshold (float) – The IoU threshold above which non-maximum suppression rejects overlapping detections when running YOLO models. A higher value results in more overlapping bounding boxes being returned.

Return type

List[ObjectDetectionResults]
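
Sending several frames in one request can amortize HTTP round-trip overhead compared to per-image calls. A minimal sketch, assuming frames is a list of ndarray images collected from a capture source:

# frames: a hypothetical list of ndarray images collected elsewhere.
batch_results = obj_detect.detect_objects_batch(
        frames, confidence_level=0.5)

# Each entry in batch_results corresponds to the frame at the same index.
for frame, results in zip(frames, batch_results):
    frame = edgeiq.markup_image(
            frame, results.predictions, colors=obj_detect.colors)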

publish_analytics(results, tag=None)

Publish Object Detection results to the alwaysAI Analytics Service.

Parameters
  • results (ObjectDetectionResults) – The results to publish.

  • tag (Optional[Any]) – Additional information to assist in querying and visualizations.

Raises
  • ConnectionBlockedError – when using a connection to the alwaysAI Device Agent and resources are at capacity.

  • PacketRateError – when the publish rate exceeds the current limit.

  • PacketSizeError – when the packet size exceeds the current limit. Packet publish size and rate limits will be provided in the error message.
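
A hedged sketch of publishing results while respecting the limits named above; the exception import path is an assumption, since the errors are referenced here only by name.

try:
    # tag is optional metadata to help query and visualize the results.
    obj_detect.publish_analytics(results, tag='entrance-camera')
except edgeiq.PacketRateError:
    # Assumed import path; back off and retry later when the publish
    # rate limit is exceeded.
    pass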