mdai package

Submodules

mdai.client module

class mdai.client.AnnotationsImportManager(annotations=None, project_id=None, dataset_id=None, model_id=None, session=None, domain=None, headers=None)[source]

Bases: object

Manager for importing annotations.

create_job()[source]

Create an annotations import job through the MD.ai API. This is an async operation; a status code of 202 indicates successful creation of the job.

wait_until_ready()[source]
class mdai.client.Client(domain='public.md.ai', access_token=None)[source]

Bases: object

Client for communicating with the MD.ai backend API. Authentication is via user access tokens (created in MD.ai Hub under Settings -> User Access Tokens).
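For example, a minimal sketch of instantiating the client; the access token below is a placeholder:

    import mdai

    # Token created in MD.ai Hub under Settings -> User Access Tokens.
    mdai_client = mdai.client.Client(
        domain='public.md.ai', access_token='YOUR_ACCESS_TOKEN'
    )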

import_annotations(annotations, project_id, dataset_id, model_id=None, chunk_size=100000)[source]

Import annotations into a project. For example, this method can be used to load machine learning model results into a project as annotations, or to quickly populate metadata labels.

Arguments:

project_id: hash ID of project.
dataset_id: hash ID of dataset.
model_id: hash ID of machine learning model (optional).
annotations: list of annotations to load.
chunk_size: number of annotations to load per chunk.
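A minimal sketch of an import call. The hash IDs are placeholders, and the annotation dict shape (a labelId plus a DICOM UID) is an assumption based on this method's purpose, not a confirmed schema:

    # Hypothetical IDs; the labelId/SOPInstanceUID keys are assumed.
    annotations = [
        {
            'labelId': 'L_abc123',
            'SOPInstanceUID': '1.2.826.0.1.3680043.8.498.1',
        },
    ]
    mdai_client.import_annotations(
        annotations,
        project_id='aBcDeF',
        dataset_id='D_12345',
        model_id='M_67890',  # optional: attribute the annotations to a model
    )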

load_model_annotations()[source]

Deprecated method: use import_annotations instead.

project(project_id, path='.', force_download=False, annotations_only=False)[source]

Initializes a Project instance for the given project id.

Arguments:

project_id: hash ID of project.
path: directory used for data.
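For example, assuming mdai_client is a Client instance and the project id is a placeholder:

    # Downloads the annotation export and DICOM images into ./data,
    # and returns an initialized Project.
    p = mdai_client.project('aBcDeF', path='./data')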

class mdai.client.ProjectDataManager(data_type, domain=None, project_id=None, path='.', session=None, headers=None, force_download=False)[source]

Bases: object

Manager for project data exports and downloads.

create_data_export_job()[source]

Create a data export job through the MD.ai API. This is an async operation; a status code of 202 indicates successful creation of the job.

wait_until_ready()[source]
mdai.client.retry_on_http_error(exception)[source]

mdai.preprocess module

class mdai.preprocess.Dataset(dataset_data, images_dir)[source]

Bases: object

A dataset consists of DICOM images and annotations.

Args:
dataset_data:

Dataset JSON data.

images_dir:

DICOM images directory.

class_id_to_class_text(class_id)[source]
class_text_to_class_id(class_text)[source]
get_annotations(label_ids=None, verbose=False)[source]

Returns annotations, filtered by label ids.

Args:
label_ids (optional):

Filter returned annotations by matching label ids.

verbose (optional):

Print debug messages.
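A sketch of filtering annotations, assuming dataset is a Dataset instance (e.g., from Project.get_dataset_by_id) and the label ids are placeholders:

    anns = dataset.get_annotations(label_ids=['L_abc123', 'L_def456'], verbose=True)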

get_annotations_by_image_id(image_id)[source]
get_image_ids(verbose=False)[source]

Returns image ids. The prepare() method must be called first in order to generate image ids.

Args:
verbose (optional):

Print debug message.
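For example, assuming dataset is a Dataset instance:

    dataset.prepare()                     # builds the image id index
    image_ids = dataset.get_image_ids()
    anns = dataset.get_annotations_by_image_id(image_ids[0])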

label_id_to_class_annotation_mode(label_id)[source]
label_id_to_class_id(label_id)[source]
prepare()[source]
show_classes()[source]
class mdai.preprocess.LabelGroup(label_group_data)[source]

Bases: object

A label group contains multiple labels. Each label has properties such as id, name, color, type, scope, annotation mode, and RadLex tag ids.

Label type:

Global-typed annotations apply to the whole instance (e.g., a CT image), while local-typed annotations apply to a part of the image (e.g., an ROI bounding box).

Label scope:

Scope can be study, series, or instance.

Label annotation mode:

Annotation mode can be bounding box, freeform, polygon, etc.

get_data()[source]
get_labels()[source]

Get label ids and names.

show_labels(print_offset='')[source]

Show label info.
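A sketch of inspecting a label group, assuming p is a Project instance and that get_label_groups() returns LabelGroup instances:

    label_group = p.get_label_groups()[0]
    label_group.show_labels(print_offset='  ')  # print label ids and names
    labels = label_group.get_labels()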

class mdai.preprocess.Project(annotations_fp=None, images_dir=None)[source]

Bases: object

A project consists of label groups and datasets.

Args:
annotations_fp (str):

File path to the exported JSON annotation file.

images_dir (str):

File path to the DICOM images directory.
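For example, a minimal sketch for a previously downloaded export; the paths are placeholders:

    from mdai.preprocess import Project

    p = Project(
        annotations_fp='./data/mdai_annotations.json',  # hypothetical export path
        images_dir='./data/images',
    )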

get_dataset_by_id(dataset_id)[source]
get_dataset_by_name(dataset_name)[source]
get_datasets()[source]

Get the JSON representation of datasets.

get_label_group_by_id(label_group_id)[source]
get_label_group_by_name(label_group_name)[source]
get_label_groups()[source]
get_label_id_annotation_mode(label_id)[source]

Return label id’s annotation mode.

get_label_id_scope(label_id)[source]

Return label id’s scope.

get_label_id_type(label_id)[source]

Return label id’s type.

set_labels_dict(labels_dict)[source]
show_datasets()[source]
show_label_groups()[source]
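A sketch tying the Project label helpers together; the label ids, the class id mapping, and the values suggested in the comments are assumptions:

    # Map label hash ids to integer class ids (mapping is hypothetical).
    p.set_labels_dict({'L_abc123': 1, 'L_def456': 2})

    p.show_label_groups()
    print(p.get_label_id_type('L_abc123'))             # global vs. local
    print(p.get_label_id_scope('L_abc123'))            # study, series, or instance
    print(p.get_label_id_annotation_mode('L_abc123'))  # e.g., bounding box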

mdai.visualize module

mdai.visualize.apply_mask(image, mask, color, alpha=0.3)[source]

Apply the given mask to the image.

Args:

image: [height, width, channel].

Returns:

Image with the color mask applied.
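A self-contained sketch using synthetic data; the color is assumed to be an RGB tuple of floats:

    import numpy as np
    from mdai.visualize import apply_mask

    image = np.zeros((64, 64, 3), dtype=np.uint8)  # height, width, channel
    mask = np.zeros((64, 64), dtype=np.uint8)
    mask[16:48, 16:48] = 1                         # region to tint
    masked = apply_mask(image, mask, color=(1.0, 0.0, 0.0), alpha=0.3)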

mdai.visualize.display_annotations(image, boxes, masks, class_ids, scores=None, title='', figsize=(16, 16), ax=None, show_mask=True, show_bbox=True, colors=None, captions=None)[source]

Display annotations for image.

Args:
boxes:

[num_instances, (y1, x1, y2, x2, class_id)] in image coordinates.

masks:

[height, width, num_instances]

class_ids:

[num_instances]

scores:

(optional) Confidence scores for each box.

title:

(optional) Figure title

show_mask, show_bbox:

Whether to show masks and bounding boxes.

figsize:

(optional) The size of the figure.

colors:

(optional) A list of colors to use for each object.

captions:

(optional) A list of strings to use as captions for each object
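A sketch with one synthetic instance; here the box carries only the four coordinates, with the class id passed separately via class_ids:

    import numpy as np
    from mdai.visualize import display_annotations

    image = np.zeros((64, 64, 3), dtype=np.uint8)
    boxes = np.array([[16, 16, 48, 48]])           # (y1, x1, y2, x2)
    masks = np.zeros((64, 64, 1), dtype=bool)
    masks[16:48, 16:48, 0] = True
    class_ids = np.array([1])
    display_annotations(image, boxes, masks, class_ids,
                        scores=np.array([0.95]), title='example')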

mdai.visualize.display_images(image_ids, titles=None, cols=3, cmap='gray', norm=None, interpolation=None)[source]

Display images given image ids.

Args:
image_ids (list):

List of image ids.

TODO: figsize should not be hardcoded
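For example, assuming image ids are DICOM filepaths (as in load_dicom_image below); the paths are placeholders:

    from mdai.visualize import display_images

    display_images(['./data/images/a.dcm', './data/images/b.dcm'],
                   titles=['image A', 'image B'], cols=2)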

mdai.visualize.draw_box_on_image(image, boxes, h, w)[source]

Draw box on an image.

Args:
image:

three-channel (e.g., RGB) image.

boxes:

normalized box coordinates (between 0.0 and 1.0).

h:

image height

w:

image width
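A sketch with synthetic data; the ordering of the normalized coordinates is assumed to be (y1, x1, y2, x2):

    import numpy as np
    from mdai.visualize import draw_box_on_image

    h, w = 64, 64
    image = np.zeros((h, w, 3), dtype=np.uint8)   # three-channel image
    boxes = np.array([[0.25, 0.25, 0.75, 0.75]])  # normalized to [0.0, 1.0]
    draw_box_on_image(image, boxes, h, w)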

mdai.visualize.extract_bboxes(mask)[source]

Compute bounding boxes from masks.

Args:
mask [height, width, num_instances]:

Mask pixels are either 1 or 0.

Returns:

bounding box array [num_instances, (y1, x1, y2, x2)].
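A self-contained sketch with two synthetic instance masks:

    import numpy as np
    from mdai.visualize import extract_bboxes

    mask = np.zeros((64, 64, 2), dtype=np.uint8)  # two instances
    mask[10:20, 10:30, 0] = 1
    mask[40:60, 5:15, 1] = 1
    bboxes = extract_bboxes(mask)                 # [2, (y1, x1, y2, x2)]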

mdai.visualize.get_image_ground_truth(image_id, dataset)[source]

Load and return ground truth data for an image (image, mask, bounding boxes).

Args:
image_id:

Image id.

Returns:
image:

[height, width, 3]

class_ids:

[instance_count] Integer class IDs

bbox:

[instance_count, (y1, x1, y2, x2)]

mask:

[height, width, instance_count]. The height and width are those of the image unless use_mini_mask is True, in which case they are defined in MINI_MASK_SHAPE.
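For example, assuming image_id comes from dataset.get_image_ids() and that the return order matches the docstring above:

    from mdai.visualize import display_annotations, get_image_ground_truth

    image, class_ids, bboxes, masks = get_image_ground_truth(image_id, dataset)
    display_annotations(image, bboxes, masks, class_ids)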

mdai.visualize.load_dicom_image(image_id, to_RGB=False, rescale=False)[source]

Load a DICOM image.

Args:
image_id (str):

image id (filepath).

to_RGB (bool, optional):

Convert grayscale image to RGB.

Returns:

Image array.
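For example, with a placeholder filepath:

    from mdai.visualize import load_dicom_image

    image = load_dicom_image('./data/images/a.dcm', to_RGB=True)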

mdai.visualize.load_mask(image_id, dataset)[source]

Load instance masks for the given image. Masks can be of different types; each mask is a binary true/false map of the same size as the image.
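A sketch, assuming image_id and dataset as above:

    from mdai.visualize import load_mask

    masks = load_mask(image_id, dataset)  # binary map(s) sized like the image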

mdai.visualize.random_colors(N, bright=True)[source]

Generate random colors. To get visually distinct colors, generate them in HSV space then convert to RGB.

Args:
N (int):

Number of colors.
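For example, generating colors to pass to display_annotations via its colors argument:

    from mdai.visualize import random_colors

    colors = random_colors(5)  # five visually distinct RGB colors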

Module contents

MD.ai Python client library.