.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "user-guide/vision/auto_tutorials/plot_detection_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here ` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_user-guide_vision_auto_tutorials_plot_detection_tutorial.py:

==========================
Object Detection Tutorial
==========================

In this tutorial, you will learn how to validate your **object detection model** using deepchecks test suites.
You can read more about the different checks and suites for computer vision use cases at the
:doc:`examples section `.

If you just want to see the output of this tutorial, jump to the :ref:`observing_the_result` section.

An object detection task usually consists of two parts:

- Object Localization, where the model predicts the location of an object in the image,
- Object Classification, where the model predicts the class of the detected object.

The common output of an object detection model is a list of bounding boxes around the objects, and their classes.

.. GENERATED FROM PYTHON SOURCE LINES 22-24

Defining the data and model
===========================

.. GENERATED FROM PYTHON SOURCE LINES 24-50

.. code-block:: default

    import math

    # Importing the required packages
    import os
    import time
    import urllib.request
    import xml.etree.ElementTree as ET
    import zipfile
    from functools import partial

    import albumentations as A
    import matplotlib.pyplot as plt
    import numpy as np
    import torch
    import torchvision
    import torchvision.transforms as T
    from albumentations.pytorch import ToTensorV2
    from PIL import Image
    from torch import nn
    from torch.utils.data import DataLoader, Dataset
    from torchvision.models.detection import _utils as det_utils
    from torchvision.models.detection.ssdlite import SSDLiteClassificationHead

    import deepchecks
    from deepchecks.vision.detection_data import DetectionData

.. GENERATED FROM PYTHON SOURCE LINES 51-62

Load Data
~~~~~~~~~

The model in this tutorial is used to detect tomatoes in images. The model is trained on a dataset consisting of
895 images of tomatoes, with bounding box annotations provided in PASCAL VOC format.
All annotations belong to a single class: tomato.

.. note::
    The dataset is available at the following link:
    https://www.kaggle.com/andrewmvd/tomato-detection

    We thank the authors of the dataset for providing the dataset.
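To make the annotation format concrete, here is a minimal, hand-written annotation in the same PASCAL VOC structure
that the dataset class below relies on (the values are illustrative and not taken from the actual dataset): each file
records the image size and one ``object`` entry per tomato, with the box given as absolute pixel corners
(xmin, ymin, xmax, ymax).

.. code-block:: python

    import xml.etree.ElementTree as ET

    # Illustrative annotation only -- the real files in the dataset follow the same structure.
    example_annotation = """
    <annotation>
        <size><width>500</width><height>375</height></size>
        <object>
            <name>tomato</name>
            <difficult>0</difficult>
            <bndbox><xmin>54</xmin><ymin>60</ymin><xmax>198</xmax><ymax>208</ymax></bndbox>
        </object>
    </annotation>
    """

    root = ET.fromstring(example_annotation)
    for obj in root.iter('object'):
        box = obj.find('bndbox')
        xyxy = [float(box.find(tag).text) for tag in ('xmin', 'ymin', 'xmax', 'ymax')]
        print(obj.find('name').text, xyxy)  # tomato [54.0, 60.0, 198.0, 208.0]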
.. GENERATED FROM PYTHON SOURCE LINES 62-135

.. code-block:: default

    url = 'https://figshare.com/ndownloader/files/34488599'
    urllib.request.urlretrieve(url, 'tomato-detection.zip')

    with zipfile.ZipFile('tomato-detection.zip', 'r') as zip_ref:
        zip_ref.extractall('.')

    class TomatoDataset(Dataset):
        def __init__(self, root, transforms):
            self.root = root
            self.transforms = transforms

            self.images = list(sorted(os.listdir(os.path.join(root, 'images'))))
            self.annotations = list(sorted(os.listdir(os.path.join(root, 'annotations'))))

        def __getitem__(self, idx):
            img_path = os.path.join(self.root, "images", self.images[idx])
            ann_path = os.path.join(self.root, "annotations", self.annotations[idx])
            img = Image.open(img_path).convert("RGB")
            bboxes = []
            labels = []
            with open(ann_path, 'r') as f:
                tree = ET.parse(f)
                root = tree.getroot()
                size = root.find('size')
                w = int(size.find('width').text)
                h = int(size.find('height').text)

                for obj in root.iter('object'):
                    difficult = obj.find('difficult').text
                    if int(difficult) == 1:
                        continue
                    cls_id = 1
                    xmlbox = obj.find('bndbox')
                    b = [float(xmlbox.find('xmin').text), float(xmlbox.find('ymin').text),
                         float(xmlbox.find('xmax').text), float(xmlbox.find('ymax').text)]
                    bboxes.append(b)
                    labels.append(cls_id)

            bboxes = torch.as_tensor(np.array(bboxes), dtype=torch.float32)
            labels = torch.as_tensor(np.array(labels), dtype=torch.int64)

            if self.transforms is not None:
                res = self.transforms(image=np.array(img), bboxes=bboxes, class_labels=labels)

            target = {
                'boxes': [torch.Tensor(x) for x in res['bboxes']],
                'labels': res['class_labels']
            }

            img = res['image']

            return img, target

        def __len__(self):
            return len(self.images)

    data_transforms = A.Compose([
        A.Resize(height=256, width=256),
        A.CenterCrop(height=224, width=224),
        A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
        ToTensorV2(),
    ], bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels']))

    dataset = TomatoDataset(root=os.path.join(os.path.curdir, 'tomato-detection/data'),
                            transforms=data_transforms)
    train_set, val_set = torch.utils.data.random_split(dataset,
                                                       [int(len(dataset) * 0.9), len(dataset) - int(len(dataset) * 0.9)],
                                                       generator=torch.Generator().manual_seed(42))
    val_set.transforms = A.Compose([ToTensorV2()])
    train_loader = DataLoader(train_set, batch_size=64, collate_fn=(lambda batch: tuple(zip(*batch))))
    val_loader = DataLoader(val_set, batch_size=64, collate_fn=(lambda batch: tuple(zip(*batch))))
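As a quick optional sanity check (using the ``train_loader`` defined above), we can look at what a single batch
contains: because of the ``collate_fn`` above, each batch is a tuple of images and a tuple of targets.

.. code-block:: python

    # Peek at the first batch produced by the loader (optional sanity check).
    images, targets = next(iter(train_loader))
    print(len(images))               # number of images in the batch (up to 64)
    print(images[0].shape)           # a transformed image tensor, e.g. torch.Size([3, 224, 224])
    print(len(targets[0]['boxes']))  # number of bounding boxes in the first image
    print(targets[0]['boxes'][:1])   # boxes are (xmin, ymin, xmax, ymax) in pixels after the transform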
.. GENERATED FROM PYTHON SOURCE LINES 136-139

Visualize a Few Images
~~~~~~~~~~~~~~~~~~~~~~

Let's visualize a few training images to understand the effect of the data augmentation.

.. GENERATED FROM PYTHON SOURCE LINES 139-174

.. code-block:: default

    def prepare(inp):
        """Imshow for Tensor."""
        inp = inp.numpy().transpose((1, 2, 0))
        mean = np.array([0.485, 0.456, 0.406])
        std = np.array([0.229, 0.224, 0.225])
        inp = std * inp + mean
        inp = np.clip(inp, 0, 1) * 255
        inp = inp.transpose((2, 0, 1))
        return torch.tensor(inp, dtype=torch.uint8)

    import torchvision.transforms.functional as F

    def show(imgs):
        if not isinstance(imgs, list):
            imgs = [imgs]
        fig, axs = plt.subplots(ncols=len(imgs), squeeze=False, figsize=(20, 20))
        for i, img in enumerate(imgs):
            img = img.detach()
            img = F.to_pil_image(img)
            axs[0, i].imshow(np.asarray(img))
            axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])

    from torchvision.utils import draw_bounding_boxes

    data = next(iter(train_loader))
    inp, targets = data[0][:4], data[1][:4]

    result = [draw_bounding_boxes(prepare(inp[i]), torch.stack(targets[i]['boxes']),
                                  colors=['yellow'] * torch.stack(targets[i]['boxes']).shape[0], width=5)
              for i in range(len(targets))]
    show(result)

.. image-sg:: /user-guide/vision/auto_tutorials/images/sphx_glr_plot_detection_tutorial_001.png
   :alt: plot detection tutorial
   :srcset: /user-guide/vision/auto_tutorials/images/sphx_glr_plot_detection_tutorial_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 175-186

.. image:: /_static/tomatoes.png
   :alt: Tomatoes with bbox

Downloading a Pre-trained Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this tutorial, we will download a pre-trained SSDlite model with a MobileNetV3 Large backbone
from the official PyTorch repository. For more details, please refer to the
`official documentation `_.

After downloading the model, we will fine-tune it for our particular classes.
We will do it by replacing the pre-trained head with a new one that matches our needs.

.. GENERATED FROM PYTHON SOURCE LINES 186-198

.. code-block:: default

    device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

    model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)
    in_channels = det_utils.retrieve_out_channels(model.backbone, (320, 320))
    num_anchors = model.anchor_generator.num_anchors_per_location()
    norm_layer = partial(nn.BatchNorm2d, eps=0.001, momentum=0.03)
    model.head.classification_head = SSDLiteClassificationHead(in_channels, num_anchors, 2, norm_layer)
    model.to(device)

.. rst-class:: sphx-glr-script-out

 Out:
 .. code-block:: none

    Downloading: "https://download.pytorch.org/models/ssdlite320_mobilenet_v3_large_coco-a79551df.pth" to /home/runner/.cache/torch/hub/checkpoints/ssdlite320_mobilenet_v3_large_coco-a79551df.pth
      0%|          | 0.00/13.4M [00:00
    First element is:  with len of 64
    Example output of an image shape from the dataloader torch.Size([3, 224, 224])
    Image values tensor([[[-1.79253, -1.82678, -1.82678,  ...,  1.39267,  1.34130,  1.32417],
             [-1.72403, -1.79253, -1.80966,  ...,  1.35842,  1.32417,  1.34130],
             [-1.75828, -1.74116, -1.70691,  ...,  1.32417,  1.34130,  1.35842],
             ...,
             [-1.84391, -1.82678, -1.75828,  ...,  0.62206,  0.19394, -0.35405],
             [-1.80966, -1.79253, -1.72403,  ...,  0.81043,  0.72481,  0.34806],
             [-1.79253, -1.84391, -1.75828,  ...,  0.81043,  0.82755,  0.69056]],

            [[-1.38796, -1.45798, -1.45798,  ...,  1.51821,  1.46569,  1.46569],
             [-1.38796, -1.47549, -1.52801,  ...,  1.50070,  1.46569,  1.48319],
             [-1.42297, -1.47549, -1.49300,  ...,  1.46569,  1.50070,  1.50070],
             ...,
             [-1.70308, -1.68557, -1.61555,  ...,  0.67787,  0.22269, -0.33753],
             [-1.68557, -1.66807, -1.58053,  ...,  0.87045,  0.74790,  0.38025],
             [-1.68557, -1.70308, -1.61555,  ...,  0.87045,  0.85294,  0.71289]],

            [[-1.57786, -1.61272, -1.61272,  ...,  1.66397,  1.61168,  1.59425],
             [-1.54301, -1.59529, -1.64758,  ...,  1.62911,  1.59425,  1.59425],
             [-1.59529, -1.59529, -1.61272,  ...,  1.59425,  1.61168,  1.62911],
             ...,
             [-1.63015, -1.59529, -1.52558,  ...,  0.46135,  0.02562, -0.53211],
             [-1.59529, -1.54301, -1.47329,  ...,  0.72279,  0.61821,  0.21734],
             [-1.59529, -1.59529, -1.50815,  ...,  0.72279,  0.72279,  0.56593]]])
    --------------------------------------------------------------------------------
    Second element is:  with len of 64
    Example output of a label shape from the dataloader {'boxes': [tensor([  0.00000,  75.13600,  39.68000, 165.75999]), tensor([ 0.00000,  0.00000, 94.08000, 93.56800])], 'labels': [tensor(1), tensor(1)]}

.. GENERATED FROM PYTHON SOURCE LINES 228-241

Implementing the DetectionData class
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The checks in the package validate the model & data by calculating various quantities over the data, labels and
predictions. In order to do that, those must be in a pre-defined format, according to the task type.
The first step is to implement a class that enables deepchecks to interact with your model and data and transform
them to this pre-defined format, which is set for each task type.
In this tutorial, we will implement the object detection task type with a class that inherits from the
:class:`deepchecks.vision.detection_data.DetectionData` class.
The DetectionData class contains additional data and general methods intended for easy access to the metadata
relevant for validating object detection models.
To learn more about the expected formats, please visit the API reference for the
:class:`deepchecks.vision.detection_data.DetectionData` class.
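Concretely, as the method docstrings below spell out, deepchecks expects detection labels as one tensor per image
with rows of ``[class_id, x, y, w, h]``, and predictions as one tensor per image with rows of
``[x, y, w, h, score, class_id]``. Here is a tiny worked example of converting a single PASCAL VOC box (the score
of 0.9 is made up for illustration):

.. code-block:: python

    import torch

    # One PASCAL VOC box (xmin, ymin, xmax, ymax) for a tomato (class id 1).
    xyxy = torch.tensor([54.0, 60.0, 198.0, 208.0])
    xywh = torch.cat([xyxy[:2], xyxy[2:] - xyxy[:2]])        # (x, y, w, h) = (54, 60, 144, 148)

    label_row = torch.cat([torch.tensor([1.0]), xywh])       # [class_id, x, y, w, h]
    pred_row = torch.cat([xywh, torch.tensor([0.9, 1.0])])   # [x, y, w, h, score, class_id]
    print(label_row)
    print(pred_row)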
""" inp = torch.stack(list(batch[0])).numpy().transpose((0, 2, 3, 1)) mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] # Un-normalize the images inp = std * inp + mean inp = np.clip(inp, 0, 1) return inp * 255 def batch_to_labels(self, batch): """ Convert a batch of data to labels in the expected format. The expected format is a list of tensors of length N, where N is the number of samples. Each tensor element is in a shape of [B, 5], where B is the number of bboxes in the image, and each bounding box is in the structure of [class_id, x, y, w, h]. """ tensor_annotations = batch[1] label = [] for annotation in tensor_annotations: if len(annotation["boxes"]): bbox = torch.stack(annotation["boxes"]) # Convert the Pascal VOC xyxy format to xywh format bbox[:, 2:] = bbox[:, 2:] - bbox[:, :2] # The label shape is [class_id, x, y, w, h] label.append( torch.concat([torch.stack(annotation["labels"]).reshape((-1, 1)), bbox], dim=1) ) else: # If it's an empty image, we need to add an empty label label.append(torch.tensor([])) return label def infer_on_batch(self, batch, model, device): """ Returns the predictions for a batch of data. The expected format is a list of tensors of shape length N, where N is the number of samples. Each tensor element is in a shape of [B, 6], where B is the number of bboxes in the predictions, and each bounding box is in the structure of [x, y, w, h, score, class_id]. """ nm_thrs = 0.2 score_thrs = 0.7 imgs = list(img.to(device) for img in batch[0]) # Getting the predictions of the model on the batch with torch.no_grad(): preds = model(imgs) processed_pred = [] for pred in preds: # Performoing non-maximum suppression on the detections keep_boxes = torchvision.ops.nms(pred['boxes'], pred['scores'], nm_thrs) score_filter = pred['scores'][keep_boxes] > score_thrs # get the filtered result test_boxes = pred['boxes'][keep_boxes][score_filter].reshape((-1, 4)) test_boxes[:, 2:] = test_boxes[:, 2:] - test_boxes[:, :2] # xyxy to xywh test_labels = pred['labels'][keep_boxes][score_filter] test_scores = pred['scores'][keep_boxes][score_filter] processed_pred.append( torch.concat([test_boxes, test_scores.reshape((-1, 1)), test_labels.reshape((-1, 1))], dim=1)) return processed_pred .. GENERATED FROM PYTHON SOURCE LINES 315-316 After defining the task class, we can validate it by running the following code: .. GENERATED FROM PYTHON SOURCE LINES 316-330 .. code-block:: default # We have a single label here, which is the tomato class # The label_map is a dictionary that maps the class id to the class name, for display purposes. LABEL_MAP = { 1: 'Tomato' } training_data = TomatoData(data_loader=train_loader, label_map=LABEL_MAP) val_data = TomatoData(data_loader=val_loader, label_map=LABEL_MAP) training_data.validate_format(model, device=device) val_data.validate_format(model, device=device) # And observe the output: .. rst-class:: sphx-glr-script-out Out: .. code-block:: none Deepchecks will try to validate the extractors given... torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.) Structure validation -------------------- Label formatter: Pass! Prediction formatter: Pass! Image formatter: Pass! Content validation ------------------ For validating the content within the structure you have to manually observe the classes, image, label and prediction. 
.. GENERATED FROM PYTHON SOURCE LINES 315-316

After defining the task class, we can validate it by running the following code:

.. GENERATED FROM PYTHON SOURCE LINES 316-330

.. code-block:: default

    # We have a single label here, which is the tomato class
    # The label_map is a dictionary that maps the class id to the class name, for display purposes.
    LABEL_MAP = {
        1: 'Tomato'
    }
    training_data = TomatoData(data_loader=train_loader, label_map=LABEL_MAP)
    val_data = TomatoData(data_loader=val_loader, label_map=LABEL_MAP)
    training_data.validate_format(model, device=device)
    val_data.validate_format(model, device=device)

    # And observe the output:

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Deepchecks will try to validate the extractors given...
    torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
    Structure validation
    --------------------
    Label formatter: Pass!
    Prediction formatter: Pass!
    Image formatter: Pass!

    Content validation
    ------------------
    For validating the content within the structure you have to manually observe the classes, image, label and prediction.
    Examples of classes observed in the batch's labels: [[1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1], [1]]
    Visual images & label & prediction: should open in a new window
    *******************************************************************************
    This machine does not support GUI
    The formatted image was saved in:
    /home/runner/work/deepchecks/deepchecks/docs/source/user-guide/vision/tutorials/deepchecks_formatted_image (4).jpg
    Visual examples of an image with prediction and label data. Label is red, prediction is blue, and deepchecks loves you.
    validate_extractors can be set to skip the image saving or change the save path
    *******************************************************************************
    Deepchecks will try to validate the extractors given...
    Structure validation
    --------------------
    Label formatter: Pass!
    Prediction formatter: Pass!
    Image formatter: Pass!

    Content validation
    ------------------
    For validating the content within the structure you have to manually observe the classes, image, label and prediction.
    Examples of classes observed in the batch's labels: [[1, 1, 1, 1], [1], [1, 1, 1, 1, 1, 1, 1], [1], [1]]
    Visual images & label & prediction: should open in a new window
    *******************************************************************************
    This machine does not support GUI
    The formatted image was saved in:
    /home/runner/work/deepchecks/deepchecks/docs/source/user-guide/vision/tutorials/deepchecks_formatted_image (5).jpg
    Visual examples of an image with prediction and label data. Label is red, prediction is blue, and deepchecks loves you.
    validate_extractors can be set to skip the image saving or change the save path
    *******************************************************************************

.. GENERATED FROM PYTHON SOURCE LINES 331-335

Running Deepchecks' full suite on our data and model!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have defined the task class, we can validate the model with the full suite of deepchecks.
This can be done with just a few lines of code:

.. GENERATED FROM PYTHON SOURCE LINES 335-341

.. code-block:: default

    from deepchecks.vision.suites import full_suite

    suite = full_suite()
    result = suite.run(training_data, val_data, model, device=device)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Validating Input:   0%|          | 0/1 [00:00
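The ``result`` object renders the full report inline in a notebook. When running as a script, it can typically be
exported instead; ``save_as_html`` is the deepchecks helper for this, and the file name below is arbitrary (a
minimal sketch, assuming the deepchecks version in use provides it):

.. code-block:: python

    # Optional: save the interactive report to a standalone HTML file and open it in a browser.
    result.save_as_html('full_suite_result.html')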

Full Suite

The suite is composed of various checks such as: Image Dataset Drift, Similar Image Leakage, Image Property Drift, etc...
Each check may contain conditions (which will result in pass / fail / warning / error) as well as other outputs such as plots or tables.
Suites, checks and conditions can all be modified. Read more about custom suites.


Conditions Summary

Check | Condition | More Info
Class Performance | Train-Test scores relative degradation is not greater than 0.1 | Average Recall for class Tomato (train=0.06 test=0.04), Average Precision for class Tomato (train=0.03 test=0.01)
Mean Average Precision Report - Test Dataset | mAP score is not less than 0.3 | mAP score is: 0.01
Mean Average Precision Report - Train Dataset | mAP score is not less than 0.3 | mAP score is: 0.03
Image Segment Performance - Test Dataset | No segment with ratio between score to mean less than 80% | Properties with failed segments: Brightness: {'Range': '(-inf, 0.33)', 'Metric': 'Average Precision', 'Ratio': 0.11}, Mean Blue Relative Intensity: {'Range': '[0.2, 0.23)', 'Metric': 'Average Precision', 'Ratio': 0.29}, Mean Green Relative Intensity: {'Range': '[0.41, 0.45)', 'Metric': 'Average Reca...
Image Segment Performance - Train Dataset | No segment with ratio between score to mean less than 80% | Properties with failed segments: Brightness: {'Range': '[0.32, 0.38)', 'Metric': 'Average Recall', 'Ratio': 0.53}, Mean Blue Relative Intensity: {'Range': '(-inf, 0.2)', 'Metric': 'Average Recall', 'Ratio': 0.68}, Mean Green Relative Intensity: {'Range': '[0.45, 0.48)', 'Metric': 'Average Precision'...
Train Test Prediction Drift | PSI <= 0.15 and Earth Mover's Distance <= 0.075 for prediction drift |
Image Property Drift | Earth Mover's Distance <= 0.1 for image properties drift |
Simple Feature Contribution | Train-Test properties' Predictive Power Score difference is not greater than 0.2 |
New Labels | Percentage of new labels in the test set not above 0.5%. |
Similar Image Leakage | Number of similar images between train and test is not greater than 0 |
Train Test Label Drift | PSI <= 0.15 and Earth Mover's Distance <= 0.075 for label drift |

Check With Conditions Output

Class Performance

Summarize given metrics on a dataset and model.

Conditions Summary
Condition: Train-Test scores relative degradation is not greater than 0.1
More Info: Average Recall for class Tomato (train=0.06 test=0.04), Average Precision for class Tomato (train=0.03 test=0.01)
Additional Outputs


Mean Average Precision Report - Test Dataset

Summarize mean average precision metrics on a dataset and model per IoU and bounding box area.

Conditions Summary
Condition: mAP score is not less than 0.3
More Info: mAP score is: 0.01
Additional Outputs

Area size                     mAP@[.50::.95] (avg.%)   mAP@.50 (%)   mAP@.75 (%)
All                           0.01                     0.05          0.01
Small (area < 32^2)           0.00                     0.01          0.00
Medium (32^2 < area < 96^2)   0.06                     0.20          0.02
Large (area > 96^2)           0.00                     0.00          0.00


Mean Average Precision Report - Train Dataset

Summarize mean average precision metrics on a dataset and model per IoU and bounding box area.

Conditions Summary
Condition: mAP score is not less than 0.3
More Info: mAP score is: 0.03
Additional Outputs

Area size                     mAP@[.50::.95] (avg.%)   mAP@.50 (%)   mAP@.75 (%)
All                           0.03                     0.07          0.02
Small (area < 32^2)           0.00                     0.00          0.00
Medium (32^2 < area < 96^2)   0.10                     0.27          0.06
Large (area > 96^2)           0.02                     0.04          0.02


Train Test Prediction Drift

Calculate prediction drift between train dataset and test dataset, using statistical measures.

Conditions Summary
Condition: PSI <= 0.15 and Earth Mover's Distance <= 0.075 for prediction drift
Additional Outputs
The Drift score is a measure for the difference between two distributions. In this check, drift is measured for the distribution of the following prediction properties: ['Samples Per Class', 'Bounding Box Area (in pixels)', 'Number of Bounding Boxes Per Image'].


Image Property Drift

Calculate drift between train dataset and test dataset per image property, using statistical measures.

Conditions Summary
Condition: Earth Mover's Distance <= 0.1 for image properties drift
Additional Outputs
The Drift score is a measure for the difference between two distributions. In this check, drift is measured for the distribution of the following image properties: ['Area', 'Aspect Ratio', 'Brightness', 'Mean Blue Relative Intensity', 'Mean Green Relative Intensity', 'Mean Red Relative Intensity', 'RMS Contrast'].


Image Segment Performance - Test Dataset

Segment the data by various properties of the image, and compare the performance of the segments.

Conditions Summary
Condition: No segment with ratio between score to mean less than 80%
More Info: Properties with failed segments: Brightness: {'Range': '(-inf, 0.33)', 'Metric': 'Average Precision', 'Ratio': 0.11}, Mean Blue Relative Intensity: {'Range': '[0.2, 0.23)', 'Metric': 'Average Precision', 'Ratio': 0.29}, Mean Green Relative Intensity: {'Range': '[0.41, 0.45)', 'Metric': 'Average Recall', 'Ratio': 0.28}, Mean Red Relative Intensity: {'Range': '(-inf, 0.31)', 'Metric': 'Average Precision', 'Ratio': 0.03}, RMS Contrast: {'Range': '[0.16, 0.19)', 'Metric': 'Average Precision', 'Ratio': 0.45}
Additional Outputs


Image Segment Performance - Train Dataset

Segment the data by various properties of the image, and compare the performance of the segments.

Conditions Summary
Condition: No segment with ratio between score to mean less than 80%
More Info: Properties with failed segments: Brightness: {'Range': '[0.32, 0.38)', 'Metric': 'Average Recall', 'Ratio': 0.53}, Mean Blue Relative Intensity: {'Range': '(-inf, 0.2)', 'Metric': 'Average Recall', 'Ratio': 0.68}, Mean Green Relative Intensity: {'Range': '[0.45, 0.48)', 'Metric': 'Average Precision', 'Ratio': 0.54}, Mean Red Relative Intensity: {'Range': '(-inf, 0.31)', 'Metric': 'Average Recall', 'Ratio': 0.71}, RMS Contrast: {'Range': '[0.17, 0.2)', 'Metric': 'Average Precision', 'Ratio': 0.64}
Additional Outputs


Train Test Label Drift

Calculate label drift between train dataset and test dataset, using statistical measures.

Conditions Summary
Condition: PSI <= 0.15 and Earth Mover's Distance <= 0.075 for label drift
Additional Outputs
The Drift score is a measure for the difference between two distributions. In this check, drift is measured for the distribution of the following label properties: ['Samples Per Class', 'Bounding Box Area (in pixels)', 'Number of Bounding Boxes Per Image'].


Check Without Conditions Output

Image Dataset Drift

Calculate drift between the entire train and test datasets (based on image properties) using a trained model.

Additional Outputs
The shown features are the image properties (brightness, aspect ratio, etc.) that are most important for the domain classifier - the domain_classifier trained to distinguish between the train and test datasets.
The percents of explained dataset difference are the importance values for the feature calculated using `permutation_importance`.

Main features contributing to drift

* showing only the top 3 columns, you can change it using n_top_columns param


Image Property Outliers - Test Dataset

Find outlier images with respect to the given properties.

Additional Outputs

Property "Aspect Ratio": no outliers found.
Property "Area": no outliers found.
Property "Brightness": no outliers found.
Property "RMS Contrast": 1 outlier (non-outlier range: 0.06 to 0.34); outlier value: 0.34 (outlier thumbnails not shown).
Property "Mean Red Relative Intensity": 4 outliers (non-outlier range: 0.28 to 0.4); outlier values: 0.41, 0.41, 0.42, 0.44.
Property "Mean Green Relative Intensity": no outliers found.
Property "Mean Blue Relative Intensity": no outliers found.

Image Property Outliers - Train Dataset

Find outlier images with respect to the given properties.

Additional Outputs

Property "Aspect Ratio": no outliers found.
Property "Area": no outliers found.
Property "Brightness": 12 outliers (non-outlier range: 0.18 to 0.73); sample outlier values: 0.76, 0.76, 0.78, 0.84, 0.86.
Property "RMS Contrast": no outliers found.
Property "Mean Red Relative Intensity": 42 outliers (non-outlier range: 0.27 to 0.4); sample outlier values: 0.55, 0.56, 0.58, 0.6, 0.65.
Property "Mean Green Relative Intensity": no outliers found.
Property "Mean Blue Relative Intensity": no outliers found.

Label Property Outliers - Test Dataset

Find outlier labels with respect to the given properties.

Additional Outputs

Property "Bounding Box Area (in pixels)": 64 outliers (non-outlier range: 4.92 to 2,214.75); sample outlier values: 14,698.42, 19,073.43, 20,311.24, 23,410.77, 23,756.15.
Property "Number of Bounding Boxes Per Image": 5 outliers (non-outlier range: 0 to 13.5); sample outlier values: 17, 18, 20, 23, 28.

Label Property Outliers - Train Dataset

Find outlier labels with respect to the given properties.

Additional Outputs

Property "Bounding Box Area (in pixels)": 486 outliers (non-outlier range: 1.8 to 2,680.75); sample outlier values: 35,735.47, 37,933.05, 41,130.89, 43,709.89, 49,459.2.
Property "Number of Bounding Boxes Per Image": 54 outliers (non-outlier range: 0 to 12); sample outlier values: 42, 48, 50, 51, 72.

Mean Average Recall Report - Test Dataset

Summarize mean average recall metrics on a dataset and model per detections and area range.

Additional Outputs

Area size                     AR@1 (%)   AR@10 (%)   AR@100 (%)
All                           0.02       0.04        0.04
Small (area < 32^2)           0.00       0.01        0.01
Medium (32^2 < area < 96^2)   0.07       0.17        0.17
Large (area > 96^2)           0.00       0.00        0.00


Mean Average Recall Report - Train Dataset

Summarize mean average recall metrics on a dataset and model per detections and area range.

Additional Outputs

Area size                     AR@1 (%)   AR@10 (%)   AR@100 (%)
All                           0.02       0.06        0.06
Small (area < 32^2)           0.00       0.01        0.01
Medium (32^2 < area < 96^2)   0.09       0.22        0.22
Large (area > 96^2)           0.00       0.02        0.02


Confusion Matrix - Test Dataset

Calculate the confusion matrix of the model on the given dataset.

Additional Outputs
Showing 10 of 1 classes:
"No overlapping" categories are labels and prediction which did not have a matching label/prediction.
For example a predictions that did not have a sufficiently overlapping label bounding box will appear under "No overlapping" category in the True Value axis (y-axis).


Confusion Matrix - Train Dataset

Calculate the confusion matrix of the model on the given dataset.

Additional Outputs
Showing 10 of 1 classes:
"No overlapping" categories are labels and prediction which did not have a matching label/prediction.
For example a predictions that did not have a sufficiently overlapping label bounding box will appear under "No overlapping" category in the True Value axis (y-axis).


Heatmap Comparison

Check if the average image brightness (or bbox location if applicable) is similar between train and test set.

Additional Outputs


Other Checks That Weren't Displayed

Check | Reason
Simple Model Comparison | Check is irrelevant for task of type TaskType.OBJECT_DETECTION
Model Error Analysis | Unable to train meaningful error model (r^2 score: 0.2)
Simple Feature Contribution | Nothing found
New Labels | Nothing found
Similar Image Leakage | Nothing found


.. rst-class:: sphx-glr-timing **Total running time of the script:** ( 1 minutes 55.265 seconds) .. _sphx_glr_download_user-guide_vision_auto_tutorials_plot_detection_tutorial.py: .. only :: html .. container:: sphx-glr-footer :class: sphx-glr-footer-example .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_detection_tutorial.py ` .. container:: sphx-glr-download sphx-glr-download-jupyter :download:`Download Jupyter notebook: plot_detection_tutorial.ipynb ` .. only:: html .. rst-class:: sphx-glr-signature `Gallery generated by Sphinx-Gallery `_