.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "checks_gallery/vision/distribution/plot_heatmap_comparison.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_checks_gallery_vision_distribution_plot_heatmap_comparison.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_checks_gallery_vision_distribution_plot_heatmap_comparison.py:

Heatmap Comparison
******************

This notebook provides an overview for using and understanding the Heatmap comparison check.

**Structure:**

* `What Is a Heatmap Comparison? <#what-is-a-heatmap-comparison>`__
* `Run the Check on a Classification Task <#run-the-check-on-a-classification-task-mnist>`__
* `Run the Check on an Object Detection Task <#run-the-check-on-an-object-detection-task-coco>`__
* `Limit to Specific Classes <#limit-to-specific-classes>`__

What Is a Heatmap Comparison?
=============================

Heatmap comparison is a method of detecting data drift in image data. Data drift is
simply a change in the distribution of data over time or between several distinct cases.
It is also one of the top reasons that a machine learning model's performance degrades
over time, or when the model is applied to new scenarios.

The **Heatmap comparison** check computes an average image over all images in each
dataset, train and test, and visualizes both average images side by side. That way, we
can visually compare the difference between the datasets' brightness distributions. For
example, if the training data contains significantly more images with sky, we will see
that the average train image is brighter in the upper half of the heatmap.

Comparing Labels for Object Detection
-------------------------------------

For object detection tasks, it is also possible to visualize label drift, by displaying
the average of bounding box label coverage.
This is done by producing label maps per image, in which each pixel inside a bounding
box is white and the rest are black. Then, the average of all these label maps is
displayed. In our previous example, the drift caused by more images with sky in the
training data would also be visible as a lack of labels in the upper half of the
average label map of the training data, due to the lack of labels in the sky.

Other Methods of Drift Detection
--------------------------------

Another, more traditional way to detect such drift is to use statistical methods. Such
an approach is covered by several built-in checks in the deepchecks.vision package,
such as the :doc:`Label Drift Check ` or the :doc:`Image Dataset Drift Check `.

Run the Check on a Classification Task (MNIST)
==============================================

.. GENERATED FROM PYTHON SOURCE LINES 55-57

Imports
-------

.. GENERATED FROM PYTHON SOURCE LINES 57-61

.. code-block:: default

    from deepchecks.vision.checks import HeatmapComparison
    from deepchecks.vision.datasets.classification.mnist import load_dataset

.. GENERATED FROM PYTHON SOURCE LINES 62-64

Loading Data
------------

.. GENERATED FROM PYTHON SOURCE LINES 64-69

.. code-block:: default

    mnist_data_train = load_dataset(train=True, batch_size=64, object_type='VisionData')
    mnist_data_test = load_dataset(train=False, batch_size=64, object_type='VisionData')

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
    Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/classification/MNIST/raw/train-images-idx3-ubyte.gz

.. code-block:: default

    check = HeatmapComparison()
    check.run(mnist_data_train, mnist_data_test)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Heatmap Comparison

    Check if the average image brightness (or bbox location if applicable) is similar between train and test set.

    Additional Outputs

    Note - data sampling: Running on 10000 train data samples out of 60000. Sample size can be controlled with the "n_samples" parameter.



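Under the hood, the brightness heatmap boils down to a pixel-wise mean over the
dataset's images. The sketch below illustrates that idea with NumPy only; the toy
images, their size, and the ``average_image`` helper are illustrative assumptions,
not the check's actual implementation:

```python
import numpy as np

def average_image(images):
    """Pixel-wise mean of equally-sized grayscale images, scaled back to 0-255."""
    stacked = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    return stacked.mean(axis=0).astype(np.uint8)

# Toy dataset: images whose upper half ("sky") is systematically brighter.
rng = np.random.default_rng(0)
train_imgs = [np.vstack([rng.integers(150, 256, (14, 28)),   # bright upper half
                         rng.integers(0, 100, (14, 28))])    # dark lower half
              for _ in range(100)]

heatmap = average_image(train_imgs)
# The average image exposes the systematic brightness difference:
print(heatmap[:14].mean() > heatmap[14:].mean())  # upper half brighter -> True
```

Comparing such an average image for train against the one for test is exactly the
visual diff the check renders.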
.. GENERATED FROM PYTHON SOURCE LINES 77-79

Run the Check on an Object Detection Task (COCO)
================================================

.. GENERATED FROM PYTHON SOURCE LINES 79-85

.. code-block:: default

    from deepchecks.vision.datasets.detection.coco import load_dataset

    train_ds = load_dataset(train=True, object_type='VisionData')
    test_ds = load_dataset(train=False, object_type='VisionData')

.. GENERATED FROM PYTHON SOURCE LINES 86-90

.. code-block:: default

    check = HeatmapComparison()
    check.run(train_ds, test_ds)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Heatmap Comparison

    Check if the average image brightness (or bbox location if applicable) is similar between train and test set.

    Additional Outputs


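The label coverage averaging described earlier can be sketched in the same spirit:
build a binary map per image in which pixels inside bounding boxes are white (1) and
the rest are black (0), then average the maps across the dataset. The
``label_coverage_map`` helper and the ``(x, y, w, h)`` pixel-coordinate box format
below are illustrative assumptions, not deepchecks' internal representation:

```python
import numpy as np

def label_coverage_map(bboxes, height, width):
    """Binary map: pixels inside any bounding box are 1 (white), the rest 0 (black)."""
    mask = np.zeros((height, width), dtype=np.float64)
    for x, y, w, h in bboxes:
        mask[y:y + h, x:x + w] = 1.0
    return mask

# Two toy images with their bounding boxes, then the dataset-level average.
images_bboxes = [
    [(2, 10, 5, 5)],                # one object in the lower-left area
    [(10, 12, 6, 4), (0, 0, 3, 3)]  # two objects, one near the top
]
maps = [label_coverage_map(b, height=20, width=20) for b in images_bboxes]
coverage = np.mean(maps, axis=0)
print(coverage.max())  # -> 0.5, since no pixel is covered in both images
```

A region that is bright in the train coverage map but dark in the test one indicates
label drift in where objects tend to appear.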
.. GENERATED FROM PYTHON SOURCE LINES 91-95

Limit to Specific Classes
=========================

The check can be limited to comparing the bounding box coverage for a specific set of
classes. We'll use that to inspect only objects labeled as person (class_id 0).

.. GENERATED FROM PYTHON SOURCE LINES 95-99

.. code-block:: default

    check = HeatmapComparison(classes_to_display=['person'])
    check.run(train_ds, test_ds)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Heatmap Comparison

    Check if the average image brightness (or bbox location if applicable) is similar between train and test set.

    Additional Outputs


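Conceptually, limiting the comparison to specific classes just means counting only
boxes of those classes when building the per-image label maps. A small sketch under
that assumption; the annotation format and the ``class_coverage`` helper are
hypothetical illustrations, not deepchecks' API:

```python
import numpy as np

# Hypothetical per-image annotations: (class_name, x, y, w, h) in pixels.
annotations = [
    [('person', 1, 1, 4, 4), ('car', 8, 8, 5, 5)],
    [('person', 6, 2, 3, 6)],
]

def class_coverage(annotations, classes, height, width):
    """Average label coverage, counting only boxes of the requested classes."""
    maps = []
    for boxes in annotations:
        mask = np.zeros((height, width))
        for cls, x, y, w, h in boxes:
            if cls in classes:  # skip boxes of other classes
                mask[y:y + h, x:x + w] = 1.0
        maps.append(mask)
    return np.mean(maps, axis=0)

person_only = class_coverage(annotations, {'person'}, 16, 16)
all_classes = class_coverage(annotations, {'person', 'car'}, 16, 16)
print(person_only.sum() <= all_classes.sum())  # filtering can only shrink coverage
```

This is why the person-only heatmap highlights where people, specifically, appear in
each dataset, rather than overall object density.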
.. GENERATED FROM PYTHON SOURCE LINES 100-102

We can see a significantly increased abundance of people in the test data, located in
the lower center of the images!

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 5.370 seconds)

.. _sphx_glr_download_checks_gallery_vision_distribution_plot_heatmap_comparison.py:

.. only :: html

 .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example

  .. container:: sphx-glr-download sphx-glr-download-python

     :download:`Download Python source code: plot_heatmap_comparison.py <plot_heatmap_comparison.py>`

  .. container:: sphx-glr-download sphx-glr-download-jupyter

     :download:`Download Jupyter notebook: plot_heatmap_comparison.ipynb <plot_heatmap_comparison.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_