Single Dataset Performance#

This notebook provides an overview of using and understanding the single dataset performance check.

Structure:

- What Is the Purpose of the Check?
- Generate Dataset
- Run the check
- Define a Condition

What Is the Purpose of the Check?#

This check computes a dict of metrics, given in the format metric name: scorer, for the given model and dataset. Each scorer should be either an sklearn scorer or a custom metric (see the Metrics Guide for further details). Use this check to evaluate performance on a single vision dataset, such as a test set.
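As intuition for the metric name: scorer mapping, here is a small library-free sketch (this is illustrative only, not the deepchecks API or its internals): each entry maps a display name to a callable that scores labels against predictions.

```python
# Illustrative sketch of a {metric name: scorer} mapping (not deepchecks' internals).

def precision(y_true, y_pred, positive=1):
    """Fraction of predicted positives that are actually positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    pp = sum(1 for p in y_pred if p == positive)
    return tp / pp if pp else 0.0

def recall(y_true, y_pred, positive=1):
    """Fraction of actual positives that are recovered."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    ap = sum(1 for t in y_true if t == positive)
    return tp / ap if ap else 0.0

# metric name -> scorer, mirroring the dict shape the check accepts
scorers = {'precision': precision, 'recall': recall}

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]
results = {name: fn(y_true, y_pred) for name, fn in scorers.items()}
# Both precision and recall come out to 2/3 on this toy data.
```

In the real check, the values may also be scorer name strings or sklearn scorer objects rather than bare functions.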

Generate Dataset#

Note

In this example, we use the PyTorch version of the MNIST dataset and model. To run this example using TensorFlow, change the import statement to:

from deepchecks.vision.datasets.classification import mnist_tensorflow as mnist

from deepchecks.vision.checks import SingleDatasetPerformance
from deepchecks.vision.datasets.classification import mnist_torch as mnist

train_ds = mnist.load_dataset(train=True, object_type='VisionData')

Run the check#

The check will use the default classification metrics - precision and recall.

check = SingleDatasetPerformance()
result = check.run(train_ds)
result.show()


To display the results in an IDE like PyCharm, you can use the following code:

#  result.show_in_window()

The result will be displayed in a new window.

Now we will run the check with a metric different from the defaults: per-class F1. You can read more about setting metrics in the Metrics Guide.

check = SingleDatasetPerformance(scorers={'f1': 'f1_per_class'})
result = check.run(train_ds)
result
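For intuition about what 'f1_per_class' reports, here is a library-free sketch of per-class F1 (the function name and toy data are illustrative, not deepchecks code): each class gets the harmonic mean of its own precision and recall.

```python
def f1_per_class(y_true, y_pred):
    """Per-class F1: harmonic mean of precision and recall, one score per class."""
    scores = {}
    for cls in sorted(set(y_true) | set(y_pred)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        pp = sum(1 for p in y_pred if p == cls)   # samples predicted as cls
        ap = sum(1 for t in y_true if t == cls)   # samples actually cls
        prec = tp / pp if pp else 0.0
        rec = tp / ap if ap else 0.0
        scores[cls] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
scores = f1_per_class(y_true, y_pred)
# Class 0 scores 2/3, class 1 scores 0.8, class 2 scores 1.0 on this toy data.
```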


Define a Condition#

We can define a condition to validate that our model's performance score is above or below a certain threshold. The condition is defined as a function that takes the check's results as input and returns a ConditionResult object.

check = SingleDatasetPerformance()
check.add_condition_greater_than(0.5)
result = check.run(train_ds)
result.show(show_additional_outputs=False)


We can also define a condition on a specific metric (or a subset of the metrics) passed to the check, and on a specific class, instead of the default mode of testing all the metrics and all the classes.

check = SingleDatasetPerformance()
check.add_condition_greater_than(0.8, metrics=['Precision'], class_mode='3')
result = check.run(train_ds)
result.show(show_additional_outputs=False)
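The condition logic can be pictured as a threshold test over the (metric, class, value) rows of the result table. The following is a hedged, self-contained sketch of that idea, with hypothetical row data; it is not deepchecks' implementation:

```python
# Hypothetical rows mimicking the check's result table: (metric, class, value).
rows = [
    ('Precision', '3', 0.97),
    ('Precision', '5', 0.91),
    ('Recall', '3', 0.95),
]

def condition_greater_than(rows, threshold, metrics=None, class_mode='all'):
    """Pass only if every selected score exceeds the threshold.

    metrics:    restrict to these metric names (None = all metrics).
    class_mode: 'all' tests every class; a class name tests just that class.
    """
    selected = [v for m, c, v in rows
                if (metrics is None or m in metrics)
                and (class_mode == 'all' or c == class_mode)]
    return all(v > threshold for v in selected)

# Mirrors check.add_condition_greater_than(0.8, metrics=['Precision'], class_mode='3')
passed = condition_greater_than(rows, 0.8, metrics=['Precision'], class_mode='3')
# Only the Precision score for class '3' (0.97) is tested, so this passes.
```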


Total running time of the script: (0 minutes 6.813 seconds)

Gallery generated by Sphinx-Gallery