.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "user-guide/vision/auto_quickstarts/plot_simple_classification_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here ` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_user-guide_vision_auto_quickstarts_plot_simple_classification_tutorial.py:

.. _vision_simple_classification_tutorial:

==================================
Image Data Validation in 5 Minutes
==================================

Deepchecks Vision is built to validate your data and model, however complex they may be. That being said, sometimes there is no need to write a full-blown :doc:`ClassificationData ` or :doc:`DetectionData `. For a simple classification task, quite a few checks can be run by writing only a few lines of code.

In this tutorial, we will show you how to run all checks that do not require a model on a simple classification task. This is ideal, for example, when receiving a new dataset for a classification task. Running these checks on the dataset before even starting training will give you a quick idea of what the dataset looks like and what potential issues it contains.

.. GENERATED FROM PYTHON SOURCE LINES 21-37

Downloading the Data
====================

For this example we'll use a small sample of the RGB `EuroSAT dataset `_. The EuroSAT dataset is based on Sentinel-2 satellite images covering 13 spectral bands and consists of 10 classes with 27,000 labeled and geo-referenced samples.

Citations:

[1] Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. Patrick Helber, Benjamin Bischke, Andreas Dengel, Damian Borth. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019.
[2] Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification. Patrick Helber, Benjamin Bischke, Andreas Dengel. 2018 IEEE International Geoscience and Remote Sensing Symposium, 2018.

.. GENERATED FROM PYTHON SOURCE LINES 37-49

.. code-block:: default

    import urllib.request
    import zipfile

    import numpy as np

    url = 'https://figshare.com/ndownloader/files/34912884'
    urllib.request.urlretrieve(url, 'EuroSAT_data.zip')

    with zipfile.ZipFile('EuroSAT_data.zip', 'r') as zip_ref:
        zip_ref.extractall('EuroSAT')

.. GENERATED FROM PYTHON SOURCE LINES 50-62

Loading a Simple Classification Dataset
=======================================

A simple classification dataset is an image dataset structured in the following way:

- root/
    - train/
        - class1/
            image1.jpeg
    - test/
        - class1/
            image1.jpeg

.. GENERATED FROM PYTHON SOURCE LINES 62-68

.. code-block:: default

    from deepchecks.vision import classification_dataset_from_directory

    train_ds, test_ds = classification_dataset_from_directory(
        root='./EuroSAT/euroSAT/', object_type='VisionData', image_extension='jpg')

.. GENERATED FROM PYTHON SOURCE LINES 69-76

Running Deepchecks' train_test_validation suite
===============================================

That's it! We have just defined the classification data objects and are ready to run the different deepchecks suites and checks. Here we will demonstrate how to run the train_test_validation suite; for additional information on the different suites and checks available, see our :doc:`check gallery `.

.. GENERATED FROM PYTHON SOURCE LINES 76-82

.. code-block:: default

    from deepchecks.vision.suites import train_test_validation

    suite = train_test_validation()
    result = suite.run(train_ds, test_ds)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Validating Input: |#####| 1/1 [Time: 00:00]
    Ingesting Batches - Train Dataset: |###############################| 31/31 [Time: 00:00]
    Ingesting Batches - Test Dataset: |################################| 32/32 [Time: 00:00]
    Computing Checks: |######| 6/6 [Time: 00:01, Check=Property Label Correlation Change]

.. GENERATED FROM PYTHON SOURCE LINES 83-86

Observing the Results:
======================

The results can be saved as an HTML file with the following code:

.. GENERATED FROM PYTHON SOURCE LINES 86-89

.. code-block:: default

    result.save_as_html('output.html')

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    'output.html'

.. GENERATED FROM PYTHON SOURCE LINES 90-91

Or, if working inside a notebook, the output can be displayed directly by simply printing the result object:

.. GENERATED FROM PYTHON SOURCE LINES 91-94

.. code-block:: default

    result.show()

.. raw:: html
    Train Test Validation Suite


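For intuition about the Property Label Correlation Change check that ran above (and is discussed in the next section): if a single scalar image property separates the classes well in the training split but not in the test split, even a trivial one-threshold classifier will score very differently on the two splits. Below is a small, self-contained sketch with synthetic property values; it illustrates the idea only and is not the PPS computation that deepchecks actually performs.

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_accuracy(values, labels):
    """Best accuracy of a one-threshold classifier on a scalar property."""
    best = 0.0
    for t in np.unique(values):
        pred = values > t
        # Try both orientations of the threshold rule.
        acc = max(np.mean(pred == labels), np.mean(pred != labels))
        best = max(best, acc)
    return best

# Synthetic property values (e.g. a contrast-like measure) for two classes.
# In the train split the property separates the classes almost perfectly...
train_vals = np.concatenate([rng.normal(0.2, 0.02, 100), rng.normal(0.8, 0.02, 100)])
train_labels = np.array([0] * 100 + [1] * 100)
# ...while in the test split the two distributions overlap heavily.
test_vals = np.concatenate([rng.normal(0.50, 0.1, 100), rng.normal(0.55, 0.1, 100)])
test_labels = np.array([0] * 100 + [1] * 100)

print(threshold_accuracy(train_vals, train_labels))  # close to 1.0
print(threshold_accuracy(test_vals, test_labels))    # much lower
```

A large train/test gap like this is exactly what the check's PPS comparison flags: the property looks predictive during training but will not generalize.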
.. GENERATED FROM PYTHON SOURCE LINES 95-115

Understanding the Results:
==========================

Looking at the results, we see two checks whose conditions have failed:

1. Similar Image Leakage
2. Feature Label Correlation

The first clearly failed due to the naturally occurring similarity between different ocean / lake images, and the prevailing green of some forest images. We may wish to remove some of these near-duplicate images, but for this dataset they make sense.

The second failure is more interesting. The :doc:`Feature Label Correlation Change ` check computes various :doc:`image properties ` and checks whether the image label can be inferred using a simple model (for example, a Classification Tree) trained on the property values. The ability to predict the label using these properties is measured by the Predictive Power Score (PPS), and this measure is compared between the training and test datasets. In this case, the condition alerts us to the fact that the PPS for the "RMS Contrast" property was significantly higher in the training dataset than in the test dataset.

We'll show the relevant plot again for ease of discussion:

.. GENERATED FROM PYTHON SOURCE LINES 115-120

.. code-block:: default

    check_idx = np.where([result.results[i].check.name() == 'Property Label Correlation Change'
                          for i in range(len(result.results))])[0][0]
    result.results[check_idx]

.. raw:: html
    Feature Label Correlation Change


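As a concrete reference for the "RMS Contrast" property flagged by the check, RMS contrast is simply the standard deviation of an image's grayscale pixel values. Here is a minimal numpy sketch; the BT.601 luma weights used for the grayscale conversion are an assumption for illustration, and deepchecks' exact conversion may differ.

```python
import numpy as np

def rms_contrast(image):
    """RMS contrast: standard deviation of the grayscale pixel values.

    `image` is an H x W x 3 uint8 RGB array. The BT.601 luma weights
    below are an assumed grayscale conversion, used for illustration.
    """
    grayscale = image @ np.array([0.299, 0.587, 0.114])
    return grayscale.std()

# A perfectly flat image has zero contrast; random noise has a lot.
flat = np.full((64, 64, 3), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)

print(rms_contrast(flat))   # zero contrast
print(rms_contrast(noisy))  # much larger for random noise
```

Low-texture classes such as "SeaLake" and "Forest" produce characteristically low values of this property, which is why it can become an accidental shortcut for predicting those labels.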
.. GENERATED FROM PYTHON SOURCE LINES 121-128

Here we can see the plot dedicated to the PPS of the RMS Contrast property, which measures the contrast in the image by calculating the standard deviation of its grayscale pixel values. This plot shows us that, specifically for the classes "Forest" and "SeaLake" (the same culprits from the Similar Image Leakage condition), the contrast is a great predictor, but only in the training data! This means we have a critical problem: our model may learn to classify these classes using only the contrast, without actually learning anything about the image content. We can now go on and fix this issue (perhaps by adding train augmentations, or by enriching our training set) even before we start thinking about which model to train for the task.

.. rst-class:: sphx-glr-timing

**Total running time of the script:** ( 0 minutes 7.072 seconds)

.. _sphx_glr_download_user-guide_vision_auto_quickstarts_plot_simple_classification_tutorial.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_simple_classification_tutorial.py `

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_simple_classification_tutorial.ipynb `

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_