Image Data Validation in 5 Minutes

Deepchecks Vision is built to validate your data and model, however complex they may be. That being said, sometimes there is no need to write a full-blown ClassificationData or DetectionData. In the case of a simple classification task, there are quite a few checks that can be run by writing only a few lines of code. In this tutorial, we will show you how to run all checks that do not require a model on a simple classification task.

This is ideal, for example, when receiving a new dataset for a classification task. Running these checks on the dataset before even starting training will give you a quick idea of what the dataset looks like and what potential issues it contains.

# Before we start, if you don't have the deepchecks vision package installed yet, run:
import sys
!{sys.executable} -m pip install "deepchecks[vision]" --quiet --upgrade # --user

# or install using pip from your python environment

Downloading the Data

For this example we’ll use a small sample of the RGB EuroSAT dataset. The EuroSAT dataset is based on Sentinel-2 satellite images covering 13 spectral bands and consists of 10 classes with 27,000 labeled and geo-referenced samples.

Citations:

[1] Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. Patrick Helber, Benjamin Bischke, Andreas Dengel, Damian Borth. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019.

[2] Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification. Patrick Helber, Benjamin Bischke, Andreas Dengel. 2018 IEEE International Geoscience and Remote Sensing Symposium, 2018.

import urllib.request
import zipfile

import numpy as np

url = 'https://figshare.com/ndownloader/files/34912884'
urllib.request.urlretrieve(url, 'EuroSAT_data.zip')

with zipfile.ZipFile('EuroSAT_data.zip', 'r') as zip_ref:
    zip_ref.extractall('EuroSAT')

Loading a Simple Classification Dataset

A simple classification dataset is an image dataset structured in the following way:

  • root/
    • train/
      • class1/
        • image1.jpeg
    • test/
      • class1/
        • image1.jpeg
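To confirm the extracted folder matches this layout, here is a quick sketch (assuming the archive was extracted to ./EuroSAT/euroSAT, as in the download step above):

from pathlib import Path

# Print each split (train/test) and a few of its class folders to confirm
# the expected layout before loading the data.
root = Path('./EuroSAT/euroSAT')
for split in sorted(p for p in root.iterdir() if p.is_dir()):
    classes = sorted(c.name for c in split.iterdir() if c.is_dir())
    print(split.name, '->', classes[:5])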

from deepchecks.vision import classification_dataset_from_directory

train_ds, test_ds = classification_dataset_from_directory(
    root='./EuroSAT/euroSAT/', object_type='VisionData', image_extension='jpg')

Running Deepchecks’ train_test_validation suite

That’s it! We have just defined the classification data object and are ready to run the different deepchecks suites and checks. Here we will demonstrate how to run the train_test_validation suite:

For additional information on the different suites and checks available, see our Vision Checks gallery.
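Each check can also be run on its own. As a minimal sketch (assuming the ImagePropertyDrift check from deepchecks.vision.checks; other checks run the same way):

from deepchecks.vision.checks import ImagePropertyDrift

# Run a single check instead of a whole suite; it returns a result object
# that can be displayed or saved just like a suite result.
check_result = ImagePropertyDrift().run(train_ds, test_ds)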

from deepchecks.vision.suites import train_test_validation

suite = train_test_validation()
result = suite.run(train_ds, test_ds)
Validating Input:
|#####| 1/1 [Time: 00:00]

Ingesting Batches - Train Dataset:
|###############################| 31/31 [Time: 00:00]

Ingesting Batches - Test Dataset:
|################################| 32/32 [Time: 00:00]

Computing Checks:
|######| 6/6 [Time: 00:01, Check=Property Label Correlation Change]

Observing the Results

The results can be saved as an HTML file with the following code:

result.save_as_html('output.html')
'output.html'

Or, if working inside a notebook, the output can be displayed directly by showing the result object:

result.show()
Train Test Validation Suite


Understanding the Results

Looking at the results, we see two checks whose conditions have failed:

  1. Similar Image Leakage

  2. Property Label Correlation Change

The first has clearly failed due to the naturally occurring similarity between different ocean / lake images, and the prevailing green of some forest images. We may wish to remove some of these near-duplicate images, but for this dataset they make sense.
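If we did want to flag such near-duplicates ourselves, here is a minimal sketch using perceptual hashing (assuming the third-party imagehash package; the Hamming-distance threshold of 4 is an illustrative choice, not a tuned value):

from pathlib import Path

import imagehash
from PIL import Image

# Hash every training image; pairs whose hashes differ by only a few bits
# are likely near-duplicates worth reviewing.
hashes = {path: imagehash.average_hash(Image.open(path))
          for path in Path('./EuroSAT/euroSAT/train').rglob('*.jpg')}

paths = list(hashes)
for i, first in enumerate(paths):
    for second in paths[i + 1:]:
        if hashes[first] - hashes[second] <= 4:
            print(f'possible near-duplicates: {first} ~ {second}')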

The second failure is more interesting. The Property Label Correlation Change check computes various image properties and checks whether the image label can be inferred from the property values using a simple model (for example, a classification tree). The ability to predict the label from these properties is measured by the Predictive Power Score (PPS), and this measure is compared between the training and test datasets. In this case, the condition alerts us to the fact that the PPS for the “RMS Contrast” property was significantly higher in the training dataset than in the test dataset.
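To make the mechanism concrete, here is a minimal sketch of the idea behind the check. It is not deepchecks’ actual implementation (which follows the PPS methodology); rms_contrast and property_label_score are illustrative helpers:

import numpy as np
from sklearn.metrics import f1_score
from sklearn.tree import DecisionTreeClassifier


def rms_contrast(image: np.ndarray) -> float:
    """RMS contrast: the standard deviation of the grayscale pixel values."""
    return image.mean(axis=2).std()


def property_label_score(property_values, labels) -> float:
    """A rough PPS-like score: how well a shallow tree predicts the label
    from a single property value."""
    X = np.asarray(property_values, dtype=float).reshape(-1, 1)
    y = np.asarray(labels)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    return f1_score(y, tree.predict(X), average='macro')


# A large gap between the train and test scores of a property is exactly the
# kind of drift this check's condition flags:
# train_score = property_label_score([rms_contrast(im) for im in train_images], train_labels)
# test_score = property_label_score([rms_contrast(im) for im in test_images], test_labels)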

We’ll show the relevant plot again for ease of discussion:

# Locate the Property Label Correlation Change check within the suite results
check_idx = np.where([result.results[i].check.name() == 'Property Label Correlation Change'
                      for i in range(len(result.results))])[0][0]
result.results[check_idx]
Property Label Correlation Change


Here we can see the plot dedicated to the PPS of the RMS Contrast property, which measures the contrast in the image as the standard deviation of its grayscale pixel values. The plot shows us that specifically for the classes “Forest” and “SeaLake” (the same culprits from the Similar Image Leakage condition), contrast is a great predictor of the label, but only in the training data! This means we have a critical problem: our model may learn to classify these classes using only the contrast, without actually learning anything about the image content. We can now go on to fix this issue (perhaps by adding train augmentations, or by enriching our training set) even before we start thinking about what model to train for the task.
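For instance, one possible mitigation along those lines, sketched with torchvision transforms (the jitter strengths are assumptions, not tuned values):

import torchvision.transforms as T

# Randomizing brightness and contrast at train time makes raw contrast a much
# weaker label predictor, pushing the model to rely on actual image content.
train_transforms = T.Compose([
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.4, contrast=0.4),
    T.ToTensor(),
])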
