
Model configuration

Model's static method create_model() has two overloads. One constructs the model from a string (a path or a model name); the other takes an already constructed InferenceAdapter. The first overload configures the created model with values taken from the configuration dict argument and from the model's intermediate representation (IR), stored in the model_info section of rt_info in the .xml file. Values provided in configuration take priority over values in the IR rt_info. If a value is specified neither in configuration nor in rt_info, the model wrapper's default value is used. In Python, configuration values are accessible as model wrapper member fields.
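The resolution order described above (explicit configuration over IR rt_info over wrapper defaults) can be sketched in plain Python. The helper below is illustrative only and is not part of the API:

```python
# Illustrative sketch of the value-resolution order used by create_model():
# explicit configuration dict > IR rt_info (model_info section) > wrapper default.
def resolve_value(name, configuration, rt_info, default):
    if name in configuration:
        return configuration[name]
    if name in rt_info:
        return rt_info[name]
    return default

rt_info = {"confidence_threshold": 0.5}        # as if read from model_info in the IR .xml
configuration = {"confidence_threshold": 0.8}  # passed by the user

# The user-supplied value wins over rt_info:
resolve_value("confidence_threshold", configuration, rt_info, 0.3)  # -> 0.8
# With no user value, rt_info wins over the wrapper default:
resolve_value("confidence_threshold", {}, rt_info, 0.3)             # -> 0.5
# With neither, the wrapper default is used:
resolve_value("confidence_threshold", {}, {}, 0.3)                  # -> 0.3
```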

List of values

The list features only model wrappers that introduce new configuration values in their hierarchy.

  1. model_type: str - name of a model wrapper to be created
  2. layout: str - layout of input data in the format: "input0:NCHW,input1:NC"

ImageModel and its subclasses

  1. mean_values: List - normalization values to be subtracted from the image channels of the image-input layer during preprocessing
  2. scale_values: List - normalization values by which the image channels of the image-input layer are divided
  3. reverse_input_channels: bool - reverse the input channel order
  4. resize_type: str - crop, standard, fit_to_window or fit_to_window_letterbox
  5. embedded_processing: bool - flag indicating whether pre/postprocessing is embedded in the model
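How mean_values, scale_values and reverse_input_channels could act on a single pixel during preprocessing can be sketched as follows (hypothetical helper in pure Python for clarity; the real wrappers operate on whole arrays):

```python
def preprocess_pixel(pixel, mean_values, scale_values, reverse_input_channels):
    # pixel is a tuple of channel values, e.g. (B, G, R).
    if reverse_input_channels:
        pixel = pixel[::-1]
    # Subtract the per-channel mean, then divide by the per-channel scale.
    return [(c - m) / s for c, m, s in zip(pixel, mean_values, scale_values)]

# Example numbers only, not values mandated by any particular model:
preprocess_pixel((104.0, 117.0, 124.0),
                 mean_values=[104.0, 117.0, 124.0],
                 scale_values=[1.0, 1.0, 1.0],
                 reverse_input_channels=False)  # -> [0.0, 0.0, 0.0]
```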

ClassificationModel

  1. topk: int - number of most likely labels
  2. labels: List - list of class labels
  3. path_to_labels: str - path to a file with labels. Overrides the labels set via the labels parameter
  4. multilabel: bool - predict a set of labels per image
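The interaction of topk and multilabel in postprocessing can be sketched as below. This is an illustrative helper, not the wrapper's actual code, and the threshold parameter is an assumption for the multilabel case:

```python
def postprocess_scores(scores, labels, topk=1, multilabel=False, threshold=0.5):
    # multilabel: return every label whose score exceeds the threshold.
    if multilabel:
        return [(labels[i], s) for i, s in enumerate(scores) if s > threshold]
    # single-label: return the topk most likely labels, highest score first.
    ranked = sorted(enumerate(scores), key=lambda p: p[1], reverse=True)
    return [(labels[i], s) for i, s in ranked[:topk]]

labels = ["cat", "dog", "bird"]
postprocess_scores([0.1, 0.7, 0.2], labels, topk=2)
# -> [("dog", 0.7), ("bird", 0.2)]
postprocess_scores([0.6, 0.7, 0.2], labels, multilabel=True)
# -> [("cat", 0.6), ("dog", 0.7)]
```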

DetectionModel and its subclasses

  1. confidence_threshold: float - probability threshold value for bounding box filtering
  2. labels: List - List of class labels
  3. path_to_labels: str - path to a file with labels. Overrides the labels set via the labels parameter

CTPN

  1. iou_threshold: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering
  2. input_size: List - image resolution which is going to be processed. Reshapes the network to match the given size

FaceBoxes

  1. iou_threshold: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering

NanoDet

  1. iou_threshold: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering
  2. num_classes: int - number of classes

UltraLightweightFaceDetection

  1. iou_threshold: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering

YOLO and its subclasses

  1. iou_threshold: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering

YoloV4

  1. anchors: List - list of custom anchor values
  2. masks: List - list of masks applied to anchors for each output layer

YOLOX

  1. iou_threshold: float - threshold for non-maximum suppression (NMS) intersection over union (IOU) filtering
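The confidence_threshold and iou_threshold values above drive detection filtering and non-maximum suppression. A simplified greedy-NMS sketch (illustrative only, not the wrappers' actual implementation):

```python
def iou(a, b):
    # Boxes are (x_min, y_min, x_max, y_max).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(detections, confidence_threshold, iou_threshold):
    # detections: list of (box, score). Drop low-confidence boxes,
    # then greedily keep boxes that do not overlap kept ones too much.
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if score < confidence_threshold:
            continue
        if all(iou(box, k) <= iou_threshold for k, _ in kept):
            kept.append((box, score))
    return kept

dets = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.3)]
filter_detections(dets, confidence_threshold=0.5, iou_threshold=0.5)
# -> [((0, 0, 10, 10), 0.9)]  (NMS suppresses the 0.8 box, threshold drops the 0.3 box)
```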

HpeAssociativeEmbedding

  1. target_size: int - image resolution which is going to be processed. Reshapes network to match a given size
  2. aspect_ratio: float - image aspect ratio which is going to be processed. Reshapes network to match a given size
  3. confidence_threshold: float - pose confidence threshold
  4. delta: float
  5. size_divisor: int - width and height of the reshaped model will be a multiple of this value
  6. padding_mode: str - center or right_bottom
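The relationship between target_size, aspect_ratio and size_divisor can be sketched as a small helper that computes a reshaped input resolution. This is a hypothetical illustration; the wrapper's exact rounding rules may differ:

```python
def reshaped_resolution(target_size, aspect_ratio, size_divisor):
    # Take target_size as the height, derive the width from aspect_ratio,
    # and round both up to the nearest multiple of size_divisor.
    def round_up(v):
        return ((v + size_divisor - 1) // size_divisor) * size_divisor
    height = round_up(target_size)
    width = round_up(int(target_size * aspect_ratio))
    return height, width

reshaped_resolution(target_size=256, aspect_ratio=1.5, size_divisor=32)
# -> (256, 384)
```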

OpenPose

  1. target_size: int - image resolution which is going to be processed. Reshapes network to match a given size
  2. aspect_ratio: float - image aspect ratio which is going to be processed. Reshapes network to match a given size
  3. confidence_threshold: float - pose confidence threshold
  4. upsample_ratio: int - upsample ratio of a model backbone
  5. size_divisor: int - width and height of the reshaped model will be a multiple of this value

MaskRCNNModel

  1. confidence_threshold: float - probability threshold value for bounding box filtering
  2. labels: List - list of class labels
  3. path_to_labels: str - path to a file with labels. Overrides the labels set via the labels parameter
  4. postprocess_semantic_masks: bool - resize and apply 0.5 threshold to instance segmentation masks

SegmentationModel and its subclasses

  1. labels: List - list of class labels
  2. path_to_labels: str - path to a file with labels. Overrides the labels set via the labels parameter
  3. blur_strength: int - blurring kernel size. A value of -1 means no blurring and no soft_threshold
  4. soft_threshold: float - probability threshold value for prediction filtering. An inf value means no blurring and no soft_threshold
  5. return_soft_prediction: bool - return raw resized model prediction in addition to processed one
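How soft_threshold and return_soft_prediction could interact in postprocessing can be sketched as below (illustrative only; the real wrapper also handles blurring and multi-class maps):

```python
def postprocess_segmentation(soft_prediction, soft_threshold, return_soft_prediction):
    # soft_prediction: per-pixel foreground probabilities (a 2D list).
    # Pixels above soft_threshold become class 1, the rest background (0).
    hard = [[1 if p > soft_threshold else 0 for p in row] for row in soft_prediction]
    if return_soft_prediction:
        # Return the raw prediction alongside the processed one.
        return hard, soft_prediction
    return hard

soft = [[0.2, 0.8], [0.6, 0.4]]
postprocess_segmentation(soft, soft_threshold=0.5, return_soft_prediction=False)
# -> [[0, 1], [1, 0]]
```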

Bert and its subclasses

  1. vocab: Dict - mapping from string token to int
  2. input_names: str - comma-separated names of input layers
  3. enable_padding: bool - whether the input sequence should be padded to the maximum sequence length
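The effect of enable_padding can be sketched as padding token ids to a fixed maximum length. This is a hypothetical helper; the pad token id of 0 is an assumption, not something the wrapper documents here:

```python
def prepare_tokens(token_ids, max_seq_len, enable_padding, pad_id=0):
    # Truncate to max_seq_len, then optionally right-pad with pad_id.
    token_ids = token_ids[:max_seq_len]
    if enable_padding:
        token_ids = token_ids + [pad_id] * (max_seq_len - len(token_ids))
    return token_ids

prepare_tokens([101, 2054, 102], max_seq_len=5, enable_padding=True)
# -> [101, 2054, 102, 0, 0]
prepare_tokens([101, 2054, 102], max_seq_len=5, enable_padding=False)
# -> [101, 2054, 102]
```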

BertQuestionAnswering

  1. output_names: str - comma-separated names of output layers
  2. max_answer_token_num: int
  3. squad_ver: str - SQuAD dataset version used for training. Affects postprocessing

NOTE: The OTX AnomalyBase model wrapper adds image_threshold, pixel_threshold, min, max and threshold.