Heatmap Comparison#
This notebook provides an overview for using and understanding the Heatmap comparison check.
Structure:
What Is a Heatmap Comparison?
Run the Check on a Classification Task (MNIST)
Run the Check on an Object Detection Task (Coco)
What Is a Heatmap Comparison?#
Heatmap comparison is a method of detecting data drift in image data. Data drift is simply a change in the distribution of data over time or between several distinct cases. It is also one of the top reasons that a machine learning model's performance degrades over time or when the model is applied to new scenarios.
The Heatmap comparison check simply computes an average image over all images in each dataset, train and test, and visualizes the two average images side by side. That way, we can visually compare the difference between the datasets' brightness distributions. For example, if the training data contains significantly more images with sky, we will see that the average train image is brighter in the upper half of the heatmap.
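As a rough, illustrative sketch of this averaging idea (not the check's actual implementation), assuming images is a list of uint8 numpy arrays:
import numpy as np
from PIL import Image

def average_brightness(images, size=(100, 100)):
    # Convert every image to grayscale, resize to a common grid, and average
    acc = np.zeros(size, dtype=np.float64)
    for img in images:
        gray = np.array(Image.fromarray(img).convert('L').resize(size))
        acc += gray
    return acc / len(images)

# Comparing the train and test averages (e.g. their absolute difference)
# highlights regions whose average brightness drifts between the datasets.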
Comparing Labels for Object Detection#
For object detection tasks, it is also possible to visualize label drift by displaying the average bounding box label coverage. This is done by producing a label map per image, in which each pixel inside a bounding box is white and the rest are black. Then, the average of all these label maps is displayed.
In our previous example, the drift caused by more images with sky in the training data would also be visible as a lack of labels in the upper half of the average label map of the training data, since the sky rarely contains labeled objects.
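A minimal sketch of the label-coverage idea described above; the (x, y, w, h) pixel box format and the function name are assumptions for illustration, not the check's API:
import numpy as np

def label_coverage_map(bboxes_per_image, height, width):
    # Build a binary mask per image (1 inside any bounding box, 0 elsewhere)
    # and average the masks over the whole dataset
    acc = np.zeros((height, width), dtype=np.float64)
    for bboxes in bboxes_per_image:
        mask = np.zeros((height, width), dtype=np.float64)
        for x, y, w, h in bboxes:
            mask[int(y):int(y + h), int(x):int(x + w)] = 1.0
        acc += mask
    return acc / len(bboxes_per_image)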
Other Methods of Drift Detection#
Another, more traditional way to detect such drift is to use statistical methods. This approach is covered by several built-in checks in the deepchecks.vision package, such as the Label Drift check or the Image Dataset Drift check.
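As an illustration (not part of the original example), those checks can be run the same way as the Heatmap comparison check below; the check names follow the current deepchecks.vision API and may differ slightly between versions, and train_ds / test_ds stand for VisionData objects like the ones loaded later in this example:
from deepchecks.vision.checks import ImageDatasetDrift, LabelDrift

# Statistical drift checks, run on the same train/test VisionData objects
LabelDrift().run(train_ds, test_ds)
ImageDatasetDrift().run(train_ds, test_ds)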
Run the Check on a Classification Task (MNIST)#
Imports#
Note
In this example, we use the PyTorch version of the MNIST dataset and model. In order to run this example using TensorFlow, please change the import statement to:
from deepchecks.vision.datasets.classification.mnist_tensorflow import load_dataset
from deepchecks.vision.datasets.classification.mnist_torch import load_dataset
Loading Data#
mnist_data_train = load_dataset(train=True, batch_size=64, object_type='VisionData')
mnist_data_test = load_dataset(train=False, batch_size=64, object_type='VisionData')
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data/train-images-idx3-ubyte.gz
Extracting /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data/train-images-idx3-ubyte.gz to /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data/train-labels-idx1-ubyte.gz
Extracting /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data/train-labels-idx1-ubyte.gz to /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data/t10k-images-idx3-ubyte.gz
Extracting /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data/t10k-images-idx3-ubyte.gz to /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data/t10k-labels-idx1-ubyte.gz
Extracting /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data/t10k-labels-idx1-ubyte.gz to /home/runner/work/deepchecks/deepchecks/deepchecks/vision/datasets/assets/mnist/raw_data
from deepchecks.vision.checks import HeatmapComparison
check = HeatmapComparison()
result = check.run(mnist_data_train, mnist_data_test)
result
Processing Train Batches: |█████| 1/1 [Time: 00:01]
Processing Test Batches: |█████| 1/1 [Time: 00:04]
Computing Check: |█████| 1/1 [Time: 00:00]
To display the results in an IDE like PyCharm, you can use the following code:
# result.show_in_window()
The result will be displayed in a new window.
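The result can also be saved instead of displayed; to the best of my knowledge, CheckResult objects support exporting to a standalone HTML file (the file name below is just an example):
# result.save_as_html('heatmap_comparison.html')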
Run the Check on an Object Detection Task (Coco)#
Note
In this example, we use the PyTorch version of the COCO dataset and model. In order to run this example using TensorFlow, please change the import statement to:
from deepchecks.vision.datasets.detection.coco_tensorflow import load_dataset
from deepchecks.vision.datasets.detection.coco_torch import load_dataset
train_ds = load_dataset(train=True, object_type='VisionData')
test_ds = load_dataset(train=False, object_type='VisionData')
check = HeatmapComparison()
result = check.run(train_ds, test_ds)
result
Processing Train Batches: |█████| 1/1 [Time: 00:00]
Processing Test Batches: |█████| 1/1 [Time: 00:00]
Computing Check: |█████| 1/1 [Time: 00:00]