.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "vision/auto_tutorials/quickstarts/plot_segmentation_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_vision_auto_tutorials_quickstarts_plot_segmentation_tutorial.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_vision_auto_tutorials_quickstarts_plot_segmentation_tutorial.py:

.. _vision__segmentation_tutorial:

===============================
Semantic Segmentation Tutorial
===============================

In this tutorial, you will learn how to validate your **semantic segmentation model** using deepchecks test suites.
You can read more about the different checks and suites for computer vision use cases at the
:ref:`examples section <vision__checks_gallery>`.

If you just want to see the output of this tutorial, jump to the
:ref:`observing the results <vision_segmentation_tutorial__observing_the_result>` section.

A semantic segmentation task is one in which every pixel of the image is labeled with a single class.
Therefore, a common output of such tasks is an image of identical size to the input, holding for each pixel a vector
of per-class probabilities.

.. code-block:: default

    # Before we start, if you don't have deepchecks vision package installed yet, run:
    import sys
    !{sys.executable} -m pip install "deepchecks[vision]" --quiet --upgrade # --user

    # or install using pip from your python environment

.. GENERATED FROM PYTHON SOURCE LINES 28-33

Defining the data and model
===========================

.. note::
    In this tutorial, we use PyTorch to create the dataset and model. To see how this can be done using TensorFlow
    or other frameworks, please visit the :ref:`vision__vision_data_class` guide.

.. GENERATED FROM PYTHON SOURCE LINES 35-43

Load Data
~~~~~~~~~

The model in this tutorial is used to detect different object segments in images (labels based on the Pascal VOC
dataset). The model is trained to identify 20 different objects (person, bicycle, etc.) and background.
The dataset itself is the COCO128 dataset with semantic segmentation labels, mapped to the Pascal VOC labels
(originally, the COCO dataset includes more labels, but those have been filtered out).

The dataset can be loaded as a PyTorch Dataset object from ``deepchecks.vision.datasets.segmentation``, as is done in
this tutorial. It can also be loaded directly as a VisionData object, using the ``load_dataset`` function from that
module.

.. GENERATED FROM PYTHON SOURCE LINES 43-52

.. code-block:: default

    # The full pascal VOC data and information can be found here: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/
    # And the COCO128 dataset can be found here: https://www.kaggle.com/datasets/ultralytics/coco128
    from deepchecks.vision.datasets.segmentation.segmentation_coco import CocoSegmentationDataset, load_model

    train_dataset = CocoSegmentationDataset.load_or_download(train=True)
    test_dataset = CocoSegmentationDataset.load_or_download(train=False)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    0%|          | 0/7119623 [00:00<?, ?B/s]

.. GENERATED FROM PYTHON SOURCE LINES 53-67

Load Model
~~~~~~~~~~

The model used in this tutorial is torchvision's
`lraspp_mobilenet_v3_large <https://pytorch.org/vision/stable/models/generated/torchvision.models.segmentation.lraspp_mobilenet_v3_large.html>`__,
an LRASPP model with a MobileNetV3-Large backbone.

.. GENERATED FROM PYTHON SOURCE LINES 68-71

.. code-block:: default

    model = load_model(pretrained=True)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Downloading: "https://download.pytorch.org/models/lraspp_mobilenet_v3_large-d234d4ea.pth" to /home/runner/.cache/torch/hub/checkpoints/lraspp_mobilenet_v3_large-d234d4ea.pth
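Before implementing the collate function in the next section, it can help to peek at one raw sample. The following is
a minimal sketch, assuming (as the collate function below does) that each dataset item is an ``(image, label)`` pair
of tensors:

.. code-block:: default

    # Inspect one raw sample: a CxHxW image tensor and an HxW mask of class ids.
    # (A sketch; the exact shapes depend on the dataset's transforms.)
    sample_image, sample_label = train_dataset[0]
    print(sample_image.shape)     # e.g. torch.Size([3, H, W])
    print(sample_label.shape)     # e.g. torch.Size([H, W])
    print(sample_label.unique())  # class ids present in this image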
.. GENERATED FROM PYTHON SOURCE LINES 72-87

Implementing the VisionData class
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The checks in the package validate the model & data by calculating various quantities over the data, labels and
predictions. In order to do that, those must be in a pre-defined format, according to the task type.
In the following example we're using PyTorch. To see an implementation of this in TensorFlow, please refer to the
:ref:`vision__vision_data_class` guide.

For PyTorch, we will use our DataLoader, but we'll create a new collate function for it, that transforms the batch to
the correct format. Then, we'll create a :class:`deepchecks.vision.vision_data.vision_data.VisionData` object, that
will hold the data loader.

To learn more about the expected formats, please visit :ref:`vision__supported_tasks`.

First, we'll create the collate function that will be used by the DataLoader.
In PyTorch, the collate function is used to transform the output batch to any custom format, and we'll use that in
order to transform the batch to the correct format for the checks.

.. GENERATED FROM PYTHON SOURCE LINES 87-114

.. code-block:: default

    import torch
    import torchvision.transforms.functional as F
    from deepchecks.vision.vision_data import BatchOutputFormat

    def deepchecks_collate_fn(batch) -> BatchOutputFormat:
        """Return a batch of images, labels and predictions in the deepchecks format.

        The expected format is a dictionary with the following keys: 'images', 'labels' and 'predictions',
        where each value is in the deepchecks format for the task. You can also use the BatchOutputFormat class
        to create the output.
        """
        # batch is received as an iterable of (image, label) tuples and transformed to a tuple of iterables:
        batch = tuple(zip(*batch))

        # images: convert each CxHxW tensor to an HxWxC numpy array:
        images = [tensor.numpy().transpose((1, 2, 0)) for tensor in batch[0]]

        # labels: already HxW tensors of per-pixel class ids:
        labels = batch[1]

        # predictions: normalize the images, run the model, and softmax the logits over the class
        # dimension to get per-pixel class probabilities (CxHxW):
        normalized_batch = [F.normalize(img.unsqueeze(0).float() / 255,
                                        mean=[0.485, 0.456, 0.406],
                                        std=[0.229, 0.224, 0.225]) for img in batch[0]]
        predictions = [model(img)["out"].squeeze(0).detach() for img in normalized_batch]
        predictions = [torch.nn.functional.softmax(pred, dim=0) for pred in predictions]

        return BatchOutputFormat(images=images, labels=labels, predictions=predictions)

.. GENERATED FROM PYTHON SOURCE LINES 115-116

The ``label_map`` is a dictionary that maps the class id to the class name, for display purposes.

.. GENERATED FROM PYTHON SOURCE LINES 116-120

.. code-block:: default

    LABEL_MAP = {0: 'background', 1: 'airplane', 2: 'bicycle', 3: 'bird', 4: 'boat', 5: 'bottle', 6: 'bus',
                 7: 'car', 8: 'cat', 9: 'chair', 10: 'cow', 11: 'dining table', 12: 'dog', 13: 'horse',
                 14: 'motorcycle', 15: 'person', 16: 'potted plant', 17: 'sheep', 18: 'couch', 19: 'train', 20: 'tv'}

.. GENERATED FROM PYTHON SOURCE LINES 121-123

Now that we have our updated collate function, we can create the dataloader in the deepchecks format, and use it to
create a VisionData object:

.. GENERATED FROM PYTHON SOURCE LINES 123-133

.. code-block:: default

    from torch.utils.data import DataLoader

    from deepchecks.vision import VisionData

    train_loader = DataLoader(dataset=train_dataset, shuffle=True, collate_fn=deepchecks_collate_fn)
    test_loader = DataLoader(dataset=test_dataset, shuffle=True, collate_fn=deepchecks_collate_fn)

    training_data = VisionData(batch_loader=train_loader, task_type='semantic_segmentation', label_map=LABEL_MAP)
    test_data = VisionData(batch_loader=test_loader, task_type='semantic_segmentation', label_map=LABEL_MAP)
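To make the expected formats concrete, we can pull one collated batch and print its shapes. This is a minimal sketch,
relying on ``BatchOutputFormat`` being dictionary-like, as its construction above suggests:

.. code-block:: default

    # Pull one collated batch and check it matches the deepchecks semantic
    # segmentation format (a sketch; VisionData also validates this on its own).
    batch = next(iter(train_loader))
    img, lbl, pred = batch['images'][0], batch['labels'][0], batch['predictions'][0]
    print(img.shape)   # (H, W, 3) numpy array
    print(lbl.shape)   # (H, W) per-pixel class ids
    print(pred.shape)  # (C, H, W) per-pixel class probabilities
    print(float(pred.sum(dim=0).mean()))  # ~1.0, since softmax is over the class dim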
.. GENERATED FROM PYTHON SOURCE LINES 134-140

Making sure our data is in the correct format:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The VisionData object automatically validates your data format and will alert you if there is a problem.
However, you can also manually view your images and labels to make sure they are in the correct format, using the
``head`` function to conveniently visualize your data:

.. GENERATED FROM PYTHON SOURCE LINES 140-143

.. code-block:: default

    training_data.head()

.. GENERATED FROM PYTHON SOURCE LINES 144-148

Running Deepchecks' model evaluation suite on our data and model!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have defined the task class, we can validate the model with the model evaluation suite of deepchecks.
This can be done in just a few lines of code:

.. GENERATED FROM PYTHON SOURCE LINES 148-154

.. code-block:: default

    from deepchecks.vision.suites import model_evaluation

    suite = model_evaluation()
    result = suite.run(training_data, test_data)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Processing Batches:Train: |     | 0/1 [Time: 00:00]
    Processing Batches:Train: |█████| 1/1 [Time: 00:20]
    Computing Single Dataset Checks Train: |     | 0/1 [Time: 00:00]
    Computing Single Dataset Checks Train: |█████| 1/1 [Time: 00:00, Check=Weak Segments Performance]
    Processing Batches:Test: |     | 0/1 [Time: 00:00]
    Processing Batches:Test: |█████| 1/1 [Time: 00:19]
    Computing Single Dataset Checks Test: |     | 0/1 [Time: 00:00]
    Computing Single Dataset Checks Test: |█████| 1/1 [Time: 00:00, Check=Weak Segments Performance]
    Computing Train Test Checks: |     | 0/2 [Time: 00:00]
    Computing Train Test Checks: |     | 0/2 [Time: 00:00, Check=Class Performance]
    Computing Train Test Checks: |     | 0/2 [Time: 00:00, Check=Prediction Drift]
    Computing Train Test Checks: |█████| 2/2 [Time: 00:00, Check=Prediction Drift]

.. GENERATED FROM PYTHON SOURCE LINES 155-160

.. _vision_segmentation_tutorial__observing_the_result:

Observing the results:
~~~~~~~~~~~~~~~~~~~~~~

The results can be saved as an HTML file with the following code:

.. GENERATED FROM PYTHON SOURCE LINES 160-163

.. code-block:: default

    result.save_as_html('output.html')

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    'output (1).html'

.. GENERATED FROM PYTHON SOURCE LINES 164-165

Or, if working inside a notebook, the output can be displayed directly by simply printing the result object:

.. GENERATED FROM PYTHON SOURCE LINES 165-168

.. code-block:: default

    result.show()
.. raw:: html

    <!-- The interactive "Model Evaluation Suite" report is rendered here in the built documentation. -->
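Beyond the rendered report, the returned suite result can also be inspected programmatically. A minimal sketch,
assuming the standard deepchecks ``SuiteResult`` API (``get_not_passed_checks`` and ``get_header``):

.. code-block:: default

    # List the checks whose conditions didn't pass, without rendering the report.
    # (A sketch; method names follow the deepchecks SuiteResult API as we understand it.)
    for check_result in result.get_not_passed_checks():
        print(check_result.get_header())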
.. GENERATED FROM PYTHON SOURCE LINES 169-180

From these results, we can see that our model mostly performs well. However, the model had an issue with identifying
a specific class ("bicycle") in the test set, which caused a major degradation in the
`Dice metric <https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient>`_ for that class, as can be seen
in the "Class Performance" check under the "Didn't Pass" section. However, as this dataset has very few samples, this
would require further investigation.

We can also see that there are significant changes between the train and test sets with regard to the model's
predictions on them. In the "Prediction Drift" check, which checks for drift in 3 properties of the predictions, we
can see that there is a change in the distribution of the predicted classes. This tells us that the train set does
not represent the test set well, even without knowing the actual test set labels.

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 45.832 seconds)

.. _sphx_glr_download_vision_auto_tutorials_quickstarts_plot_segmentation_tutorial.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_segmentation_tutorial.py <plot_segmentation_tutorial.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_segmentation_tutorial.ipynb <plot_segmentation_tutorial.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_