Setting up Local Model API

December 30, 2021

Quickstart

git clone https://github.com/zhiva-ai/Lung-Segmentation-API.git

docker-compose up

Requirements

Before we start, please make sure your server has access to:

- Git (used to clone the repository)
- Docker
- Docker Compose

Get the server code

You can either clone this repo

git clone https://github.com/zhiva-ai/Lung-Segmentation-API.git

or download it directly from the ZhivaAI Local Model API.

Build the server

docker-compose up

Your model API should be available at:

localhost:8011
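To confirm the container is listening, send any request to that address; even an error status proves the port mapping works:

curl -i http://localhost:8011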

If you want to run the server on a different port, modify the ports mapping inside docker-compose.yaml.
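For example, to expose the API on host port 9000, the mapping could look like the sketch below. The service name is illustrative and the container-side port is an assumption; keep whatever the repo's file already uses:

services:
  model-api:          # illustrative name; match the service in the repo's file
    ports:
      - "9000:8011"   # host:container -- change only the left-hand side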

Use my own model

The Model API receives an array of instances. Each instance is encoded as an array of bytes and parsed into a DICOM Instance with the pydicom library.

This happens at L31 of the endpoint definition. You don't have to worry about handling DICOM data; it is covered by the model-proxy, and your API receives the prepared data directly.
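For reference, that parsing step boils down to something like this minimal sketch (the function name is illustrative, not the repo's, and it assumes pydicom 2.x):

from io import BytesIO

from pydicom import dcmread
from pydicom.dataset import Dataset

def parse_instances(raw_instances: list[bytes]) -> list[Dataset]:
    # Each instance arrives as raw DICOM bytes; dcmread accepts any
    # file-like object, so the bytes are wrapped in BytesIO before parsing.
    return [dcmread(BytesIO(raw)) for raw in raw_instances]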

Inference point

Your model should receive a list of DICOM SOP Instances; an example implementation is available here: https://github.com/zhiva-ai/Lung-Segmentation-API/blob/5a3863bc587f956cf6920e8a466b3bbc16983c2d/app/segmentation/lungs_segmentation_inference.py#L39.

If you have your own model, replace the invocation at L42 of the endpoint definition with it.
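As a rough sketch, a drop-in replacement could have the shape below. The function name, the stacking of slices into a volume, and the call signature of model are all assumptions; mirror the original implementation linked above so the model-proxy receives the masks it expects.

import numpy as np
from pydicom.dataset import Dataset

def run_inference(instances: list[Dataset], model) -> np.ndarray:
    # Stack per-slice pixel data into a 3D volume of shape (slices, H, W).
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in instances])
    # `model` stands in for your own network: any callable mapping the
    # volume to per-voxel segmentation masks works here.
    return model(volume)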