Image Property Outliers#

This notebook provides an overview for using and understanding the image property outliers check, which detects outliers in simple image properties in a dataset.

Structure:

Why Check for Outliers?
How Does the Check Work?
Which Image Properties Are Used?
Run the Check
Observe Graphic Result
Observe Result Value

Why Check for Outliers?#

Examining outliers may help you gain insights that you couldn't have reached by taking an aggregate look or by inspecting random samples. For example, it may help you discover that you have some corrupt samples (e.g. an image that is completely black), or samples you didn't expect to have (e.g. an extreme aspect ratio). In some cases, these outliers may help debug performance discrepancies (the model can be excused for failing on a totally dark image). In more extreme cases, the outlier samples may indicate the presence of samples that interfere with the model's training by teaching the model to fit "irrelevant" samples.

How Does the Check Work?#

Ideally we would like to find outlier images directly, but this is computationally expensive and does not produce clear, explainable results. Therefore, we use image properties (such as brightness, aspect ratio, etc.) to find outliers; these are much more efficient to compute, and each outlier is easily explained.

We use Interquartile Range to define our upper and lower limit for the properties’ values.
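For intuition, here is a minimal, self-contained sketch of IQR-based limits. This is an illustration only, not the check's exact implementation; the 25/75 percentiles and the 1.5 scale factor are common conventions and may differ from the check's actual configuration.

import numpy as np

def iqr_outlier_limits(values, scale=1.5):
    # Values outside [Q1 - scale*IQR, Q3 + scale*IQR] are treated as outliers
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - scale * iqr, q3 + scale * iqr

brightness = np.array([60, 75, 80, 95, 110, 120, 130, 240])  # toy property values
lower, upper = iqr_outlier_limits(brightness)
print(lower, upper, brightness[(brightness < lower) | (brightness > upper)])  # flags 240

Any property value below the lower limit or above the upper limit is reported as an outlier for that property.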

Which Image Properties Are Used?#

By default, the check uses the built-in image properties, and it's also possible to replace the default properties with custom ones. For the list of built-in image properties and an explanation of custom properties, refer to vision properties.
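As a hedged sketch of what a custom property could look like, assuming the name/method/output_type dictionary format described in the vision properties guide and the check's image_properties argument (verify both against the deepchecks version you are using):

import numpy as np
from deepchecks.vision.checks import ImagePropertyOutliers

def mean_intensity(images):
    # Receives a batch of images (numpy arrays) and returns one value per image
    return [float(np.mean(img)) for img in images]

custom_properties = [
    {'name': 'Mean Intensity', 'method': mean_intensity, 'output_type': 'numerical'},
]
check = ImagePropertyOutliers(image_properties=custom_properties)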

Run the Check#

For this example, we will load the COCO object detection data and run the check with the default properties.

Note

In this example, we use the PyTorch version of the COCO dataset and model. In order to run this example using TensorFlow, please change the import statement to:

from deepchecks.vision.datasets.detection.coco_tensorflow import load_dataset

from deepchecks.vision.checks import ImagePropertyOutliers
from deepchecks.vision.datasets.detection.coco_torch import load_dataset

train_data = load_dataset(train=True, object_type='VisionData')
check = ImagePropertyOutliers()
result = check.run(train_data)
result
Downloading https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt to yolov5s.pt...

Image Property Outliers


To display the results in an IDE like PyCharm, you can use the following code:

#  result.show_in_window()

The result will be displayed in a new window.

Observe Graphic Result#

The check shows a section for each property. In each section we show the number of outliers and the non-outlier property range, and also the images with the lowest and highest values for the property.

For example, in the "RMS Contrast" property we can see that only 3 outliers were found: 1 below the normal property range and 2 above. Now we can inspect these images and decide whether we wish to ignore these kinds of samples, or whether we would like the model to be able to support them, in which case we may take a closer look at the model's predictions on these samples.

Observe Result Value#

The check returns a CheckResult object with a 'value' attribute which contains the information that was calculated during the check's run.

result.value
{'Aspect Ratio': {'outliers_identifiers': array(['4', '6', '8', '10', '22', '27', '31', '0', '8', '14', '31'], dtype='<U2'), 'lower_limit': 0.340625, 'upper_limit': 1.3029296874999998},
 'Area': {'outliers_identifiers': array(['6', '11', '13', '14', '25', '26', '12', '13', '18', '26', '28', '29', '30'], dtype='<U2'), 'lower_limit': 220800.0, 'upper_limit': 359040.0},
 'Brightness': {'outliers_identifiers': array(['28', '6', '15', '22', '23', '30'], dtype='<U2'), 'lower_limit': 61.27082652875001, 'upper_limit': 173.90810731874998},
 'RMS Contrast': {'outliers_identifiers': array(['22', '24', '29'], dtype='<U2'), 'lower_limit': 24.51570321762265, 'upper_limit': 95.40458644989681},
 'Mean Red Relative Intensity': {'outliers_identifiers': array(['4', '5', '18', '23', '28', '29'], dtype='<U2'), 'lower_limit': 0.2419801130950117, 'upper_limit': 0.4767236305793747},
 'Mean Green Relative Intensity': {'outliers_identifiers': array(['3', '16', '18', '22', '28', '29', '31'], dtype='<U2'), 'lower_limit': 0.2814557851840196, 'upper_limit': 0.4017981850348855},
 'Mean Blue Relative Intensity': {'outliers_identifiers': array(['18', '23', '28'], dtype='<U2'), 'lower_limit': 0.15875656469424135, 'upper_limit': 0.41356626825903786}}
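
The dictionary maps each property name to its outlier sample identifiers and the computed limits. For example, a minimal sketch of pulling the "RMS Contrast" outliers out of the result (key names taken from the output above):

contrast_info = result.value['RMS Contrast']
print(contrast_info['outliers_identifiers'])  # identifiers of the samples flagged as outliers
print(contrast_info['lower_limit'], contrast_info['upper_limit'])  # the non-outlier value range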

Total running time of the script: (0 minutes 6.169 seconds)
