.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "checks_gallery/vision/distribution/plot_train_test_prediction_drift.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_checks_gallery_vision_distribution_plot_train_test_prediction_drift.py>` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_checks_gallery_vision_distribution_plot_train_test_prediction_drift.py:

Train Test Prediction Drift
***************************

This notebook provides an overview for using and understanding the vision prediction drift check.

**Structure:**

* `What is a prediction drift? <#what-is-a-prediction-drift>`__
* `Which Prediction Properties Are Used? <#which-prediction-properties-are-used>`__
* `Run check on a Classification task <#run-the-check-on-a-classification-task-mnist>`__
* `Run check on an Object Detection task <#run-the-check-on-an-object-detection-task-coco>`__

What Is a Prediction Drift?
===========================

The term drift (and all of its derivatives) is used to describe any change in the data compared
to the data the model was trained on. Prediction drift refers to the case in which a change in
the data (data/feature drift) has happened and, as a result, the distribution of the model's
predictions has changed.

Calculating prediction drift is especially useful in cases in which labels are not available for
the test dataset, so a drift in the predictions is our only indication that a change has happened
in the data that actually affects model predictions. If labels are available, it's also
recommended to run the Label Drift check.

There are two main causes for prediction drift:

* A change in the sample population. In this case, the underlying phenomenon we're trying to
  predict behaves the same, but we're not getting the same types of samples. For example,
  cronuts becoming more popular in a food classification dataset.
* Concept drift, which means that the underlying relation between the data and the label has
  changed. For example, the arctic hare changes its fur color during the winter. A model that
  was trained on summertime hares would have difficulty identifying them in winter.

It is important to note that concept drift won't necessarily result in prediction drift, unless
it affects features that are of high importance to the model.

How Does the TrainTestPredictionDrift Check Work?
=================================================

There are many methods to detect drift; they usually involve statistical methods that aim to
measure the difference between two distributions. We experimented with various approaches and
found that for detecting drift between two one-dimensional distributions, the following two
methods give the best results:

* For categorical distributions, the Population Stability Index (PSI)
* For numerical distributions, the Wasserstein Distance (Earth Mover's Distance)

However, one does not simply measure drift on predictions, as they may be complex structures.
Instead, these methods are applied to prediction properties, as described in the next section.

Different Measurements on Predictions
=====================================

In computer vision specifically, our predictions may be complex, and measuring their drift is
not a straightforward task. Therefore, we calculate drift on different properties of the
predictions, on which we can directly measure drift.
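To make the two measures concrete, here is a minimal sketch of computing PSI and Earth Mover's
Distance between two one-dimensional samples. This is not deepchecks' internal implementation;
the smoothing epsilon and the use of raw class frequencies are assumptions made for illustration:

.. code-block:: default


    import numpy as np
    from scipy.stats import wasserstein_distance


    def psi(expected, actual, eps=1e-6):
        """Population Stability Index between two categorical samples (a sketch)."""
        categories = np.union1d(expected, actual)
        # Relative frequency of each category in each sample, smoothed to avoid log(0)
        p = np.array([np.mean(expected == c) for c in categories]) + eps
        q = np.array([np.mean(actual == c) for c in categories]) + eps
        return float(np.sum((p - q) * np.log(p / q)))


    # e.g. predicted classes on train and test samples
    train_classes = np.random.randint(0, 10, size=1000)
    test_classes = np.random.randint(0, 10, size=1000)

    print(psi(train_classes, test_classes))                   # categorical drift score
    print(wasserstein_distance(train_classes, test_classes))  # numerical drift score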
Which Prediction Properties Are Used?
=====================================

================ =================================== ============================================
Task Type        Property name                       What is it
================ =================================== ============================================
Classification   Samples Per Class                   Number of images per class
Object Detection Samples Per Class                   Number of bounding boxes per class
Object Detection Bounding Box Area                   Area of bounding box (height * width)
Object Detection Number of Bounding Boxes Per Image  Number of bounding box objects in each image
================ =================================== ============================================

Run the check on a Classification task (MNIST)
==============================================

.. GENERATED FROM PYTHON SOURCE LINES 77-79

Imports
-------

.. GENERATED FROM PYTHON SOURCE LINES 79-84

.. code-block:: default


    from deepchecks.vision.checks import TrainTestPredictionDrift
    from deepchecks.vision.datasets.classification.mnist import (load_dataset,
                                                                 load_model)

.. GENERATED FROM PYTHON SOURCE LINES 85-87

Loading data and model:
-----------------------

.. GENERATED FROM PYTHON SOURCE LINES 87-92

.. code-block:: default


    train_ds = load_dataset(train=True, batch_size=64, object_type='VisionData')
    test_ds = load_dataset(train=False, batch_size=64, object_type='VisionData')

.. GENERATED FROM PYTHON SOURCE LINES 93-96

.. code-block:: default


    model = load_model()

.. GENERATED FROM PYTHON SOURCE LINES 97-99

Running TrainTestPredictionDrift on classification
--------------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 99-103

.. code-block:: default


    check = TrainTestPredictionDrift()
    check.run(train_ds, test_ds, model)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

Train Test Prediction Drift

Calculate prediction drift between train dataset and test dataset, using statistical measures.

Additional Outputs
The Drift score is a measure for the difference between two distributions. In this check, drift is measured for the distribution of the following prediction properties: ['Samples Per Class'].

Note - data sampling: Running on 10000 train data samples out of 60000. Sample size can be controlled with the "n_samples" parameter.



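The drift scores shown in the display can also be read programmatically from the returned
CheckResult. A minimal sketch; the exact layout of ``result.value`` may differ between
deepchecks versions:

.. code-block:: default


    # The run() call returns a CheckResult whose ``value`` holds the computed
    # drift scores per prediction property (exact keys are version-dependent).
    result = check.run(train_ds, test_ds, model)
    for property_name, drift_info in result.value.items():
        print(property_name, drift_info)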
.. GENERATED FROM PYTHON SOURCE LINES 104-109

Understanding the results
-------------------------

We can see there is almost no drift between the train & test predictions. This means the split
to train and test was good (as it is balanced and random). Let's check the performance of a
simple model trained on MNIST.

.. GENERATED FROM PYTHON SOURCE LINES 109-114

.. code-block:: default


    from deepchecks.vision.checks import ClassPerformance

    ClassPerformance().run(train_ds, test_ds, model)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

Class Performance

Summarize given metrics on a dataset and model.

Additional Outputs

Note - data sampling: Running on 10000 train data samples out of 60000. Sample size can be controlled with the "n_samples" parameter.



.. GENERATED FROM PYTHON SOURCE LINES 115-121

MNIST with label drift
======================

Now, let's split the MNIST dataset in a different manner that will result in a prediction
drift, and see how it affects the performance. We are going to create a custom collate_fn
for the test dataset that keeps samples of class 0 with only a 1/5 chance, relabeling the
rest of them as class 1.

.. GENERATED FROM PYTHON SOURCE LINES 121-128

.. code-block:: default


    import torch

    mnist_dataloader_train = load_dataset(train=True, batch_size=64, object_type='DataLoader')
    mnist_dataloader_test = load_dataset(train=False, batch_size=64, object_type='DataLoader')

    full_mnist = torch.utils.data.ConcatDataset([mnist_dataloader_train.dataset,
                                                 mnist_dataloader_test.dataset])

.. GENERATED FROM PYTHON SOURCE LINES 129-132

.. code-block:: default


    train_dataset, test_dataset = torch.utils.data.random_split(full_mnist, [60000, 10000],
                                                                generator=torch.Generator().manual_seed(42))

.. GENERATED FROM PYTHON SOURCE LINES 133-135

Inserting drift into the test set
---------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 135-159

.. code-block:: default


    import numpy as np
    from torch.utils.data._utils.collate import default_collate

    np.random.seed(42)


    def collate_test(batch):
        modified_batch = []
        for item in batch:
            image, label = item
            if label == 0:
                # Keep a class-0 sample as-is only with probability 1/5...
                if np.random.randint(5) == 0:
                    modified_batch.append(item)
                # ...and relabel it as class 1 otherwise.
                else:
                    modified_batch.append((image, 1))
            else:
                modified_batch.append(item)
        return default_collate(modified_batch)


    mod_train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64)
    mod_test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64, collate_fn=collate_test)

.. GENERATED FROM PYTHON SOURCE LINES 160-179

.. code-block:: default


    from deepchecks.vision.datasets.classification.mnist import MNISTData

    mod_train_ds = MNISTData(mod_train_loader)
    mod_test_ds = MNISTData(mod_test_loader)

    # Run the check
    # -------------

    check = TrainTestPredictionDrift()
    check.run(mod_train_ds, mod_test_ds, model)

    # Add a condition
    # ---------------
    # We could also add a condition to the check to alert us to changes in the prediction
    # distribution, such as the one that occurred here.

    check = TrainTestPredictionDrift().add_condition_drift_score_not_greater_than()
    check.run(mod_train_ds, mod_test_ds, model)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

Train Test Prediction Drift

Calculate prediction drift between train dataset and test dataset, using statistical measures.

Conditions Summary
Status Condition More Info
PSI <= 0.15 and Earth Mover's Distance <= 0.075 for prediction drift
Additional Outputs
The Drift score is a measure for the difference between two distributions. In this check, drift is measured for the distribution of the following prediction properties: ['Samples Per Class'].

Note - data sampling: Running on 10000 train data samples out of 60000. Sample size can be controlled with the "n_samples" parameter.



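The default thresholds shown in the condition summary can be adjusted. A minimal sketch; the
keyword names below are assumptions based on this version's defaults of PSI <= 0.15 and Earth
Mover's Distance <= 0.075:

.. code-block:: default


    # Hypothetical stricter thresholds; the parameter names are assumed and may
    # differ between deepchecks versions.
    check = TrainTestPredictionDrift().add_condition_drift_score_not_greater_than(
        max_allowed_psi_score=0.1, max_allowed_earth_movers_score=0.05)
    check.run(mod_train_ds, mod_test_ds, model)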
.. GENERATED FROM PYTHON SOURCE LINES 180-181

As shown in the condition summary above, the condition alerts us to the presence of drift in
the predictions.

.. GENERATED FROM PYTHON SOURCE LINES 183-190

Results
-------

We can see the check successfully detects the (expected) drift in the class 0 distribution
between the train and test sets. It means that the model correctly predicted 0 for those
samples, so we're seeing drift in the predictions as well as in the labels. Note that this
check enabled us to detect the presence of label drift (in this case) without needing actual
labels for the test data.

.. GENERATED FROM PYTHON SOURCE LINES 192-194

But how does this affect the performance of the model?
------------------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 194-197

.. code-block:: default


    ClassPerformance().run(mod_train_ds, mod_test_ds, model)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

Class Performance

Summarize given metrics on a dataset and model.

Additional Outputs

Note - data sampling: Running on 10000 train data samples out of 60000. Sample size can be controlled with the "n_samples" parameter.



.. GENERATED FROM PYTHON SOURCE LINES 198-200

Inferring the results
---------------------

.. GENERATED FROM PYTHON SOURCE LINES 200-204

.. code-block:: default


    # We can see the drop in the precision of class 0, which was caused by the class
    # imbalance indicated earlier by the prediction drift check.

.. GENERATED FROM PYTHON SOURCE LINES 205-207

Run the check on an Object Detection task (COCO)
================================================

.. GENERATED FROM PYTHON SOURCE LINES 207-214

.. code-block:: default


    from deepchecks.vision.datasets.detection.coco import load_dataset, load_model

    train_ds = load_dataset(train=True, object_type='VisionData')
    test_ds = load_dataset(train=False, object_type='VisionData')
    model = load_model(pretrained=True)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Downloading: "https://github.com/ultralytics/yolov5/archive/v6.1.zip" to /home/runner/.cache/torch/hub/v6.1.zip
    Downloading https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt to yolov5s.pt...
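The check itself is run the same way as in the classification example; the output below was
produced by a call of this form:

.. code-block:: default


    check = TrainTestPredictionDrift()
    check.run(train_ds, test_ds, model)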

Train Test Prediction Drift

Calculate prediction drift between train dataset and test dataset, using statistical measures.

Additional Outputs
The Drift score is a measure for the difference between two distributions. In this check, drift is measured for the distribution of the following prediction properties: ['Samples Per Class', 'Bounding Box Area (in pixels)', 'Number of Bounding Boxes Per Image'].


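For intuition, the object-detection properties measured by the check are simple functions of
the model's detections. A minimal sketch of the "Bounding Box Area" property, assuming
deepchecks' per-image detection format of [x, y, w, h, confidence, class] rows (an
illustration, not the library's internal code):

.. code-block:: default


    import torch

    def bounding_box_areas(detections: torch.Tensor) -> torch.Tensor:
        # Each row is assumed to be [x, y, w, h, confidence, class];
        # the area of a box is simply width * height.
        return detections[:, 2] * detections[:, 3]

    sample_detections = torch.tensor([[0., 0., 10., 20., 0.9, 1.],
                                      [5., 5., 4., 4., 0.8, 0.]])
    print(bounding_box_areas(sample_detections))  # tensor([200., 16.])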
.. GENERATED FROM PYTHON SOURCE LINES 220-226

Prediction drift is detected!
-----------------------------

We can see that the out-of-the-box COCO128 dataset contains drift. In addition to the
prediction count per class, the prediction drift check for object detection tasks includes
drift calculations on certain measurements, such as the bounding box area and the number of
bounding boxes per image.

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 46.905 seconds)

.. _sphx_glr_download_checks_gallery_vision_distribution_plot_train_test_prediction_drift.py:

.. only:: html

  .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_train_test_prediction_drift.py <plot_train_test_prediction_drift.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_train_test_prediction_drift.ipynb <plot_train_test_prediction_drift.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_