Class Performance#

This notebook provides an overview of using and understanding the class performance check.

Structure:

- What Is the Purpose of the Check?
- Imports
- Classification Performance Report
- Object Detection Class Performance
- Define a Condition

What Is the Purpose of the Check?#

The class performance check evaluates several metrics on the given model and data and returns all of the results in a single check. The check uses the following default metrics:

Task Type         Metric name
----------------  -----------------
Classification    Precision
Classification    Recall
Object Detection  Average Precision
Object Detection  Average Recall

In addition to the default metrics, the check supports custom metrics, as detailed in the Metrics Guide. These can be passed as a list using the scorers parameter of the check, which will override the default metrics.
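
For example, a minimal sketch of overriding the defaults (the scorer names below are illustrative assumptions; see the Metrics Guide for the exact names your version supports):

from deepchecks.vision.checks import ClassPerformance

# The scorer names here are assumptions based on the Metrics Guide;
# consult it for the names supported by your deepchecks version.
check = ClassPerformance(scorers=['f1_per_class', 'precision_per_class'])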

Imports#

Note

In this example, we use the PyTorch version of the MNIST dataset and model. To run this example using TensorFlow, change the import statement to:

from deepchecks.vision.datasets.classification import mnist_tensorflow as mnist

from deepchecks.vision.checks import ClassPerformance
from deepchecks.vision.datasets.classification import mnist_torch as mnist

Classification Performance Report#

Generate Dataset#

# Load the MNIST train and test splits as deepchecks VisionData objects.
train_ds = mnist.load_dataset(train=True, object_type='VisionData')
test_ds = mnist.load_dataset(train=False, object_type='VisionData')

Run the check#

check = ClassPerformance()
result = check.run(train_ds, test_ds)
result
Processing Train Batches: |#####| 1/1 [Time: 00:01]
Processing Test Batches:  |#####| 1/1 [Time: 00:04]
Computing Check:          |#####| 1/1 [Time: 00:00]
Class Performance


To display the results in an IDE like PyCharm, you can use the following code:

result.show_in_window()

The result will be displayed in a new window.
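
If you prefer a shareable artifact instead, the result can also be saved as a standalone HTML file (assuming your deepchecks version exposes save_as_html; the file name is illustrative):

# Save the interactive report to a standalone HTML file.
result.save_as_html('class_performance.html')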

Object Detection Class Performance#

For object detection tasks, the default metric is Average Precision. Its definition is identical to the one used for the COCO dataset: the mean of the average precision per class, averaged over IoU thresholds ranging from 0.5 to 0.95 in steps of 0.05.
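
To make the averaging concrete, here is a toy NumPy sketch (the AP values are invented purely for illustration):

import numpy as np

# IoU thresholds 0.5, 0.55, ..., 0.95 -- the COCO range.
iou_thresholds = np.arange(0.5, 1.0, 0.05)

# Toy per-class AP values: one row per class, one column per IoU threshold.
ap_per_class = np.array([
    [0.72, 0.70, 0.66, 0.60, 0.52, 0.45, 0.36, 0.25, 0.14, 0.05],  # class 0
    [0.80, 0.78, 0.75, 0.70, 0.63, 0.55, 0.44, 0.31, 0.18, 0.07],  # class 1
])

# Average over IoU thresholds within each class...
ap_by_class = ap_per_class.mean(axis=1)
# ...then over classes, yielding a single COCO-style AP number.
mean_ap = ap_by_class.mean()
print(ap_by_class, mean_ap)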

Note

In this example, we use the PyTorch version of the COCO dataset and model. To run this example using TensorFlow, change the import statement to:

from deepchecks.vision.datasets.detection import coco_tensorflow as coco

from deepchecks.vision.datasets.detection import coco_torch as coco

Generate Dataset#

We load a sample dataset of 128 images from the COCO dataset, together with predictions from a YOLOv5 model.

# Load the COCO train and test splits as deepchecks VisionData objects.
train_ds = coco.load_dataset(train=True, object_type='VisionData')
test_ds = coco.load_dataset(train=False, object_type='VisionData')

Run the check#

# show_only='best' displays only the best-performing classes in the result.
check = ClassPerformance(show_only='best')
result = check.run(train_ds, test_ds)
result.show()
Processing Train Batches: |#####| 1/1 [Time: 00:00]
Processing Test Batches:  |#####| 1/1 [Time: 00:00]
Computing Check:          |#####| 1/1 [Time: 00:00]
Class Performance


If you have a GPU, you can speed up this check by calling:

# check.run(train_ds, test_ds, device=<your GPU>)
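
A short sketch of picking a device dynamically with PyTorch (assuming the device parameter accepts standard PyTorch device strings):

import torch

# Use a GPU when one is available, otherwise fall back to the CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# result = check.run(train_ds, test_ds, device=device)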

To display the results in an IDE like PyCharm, you can use the following code:

# result.show_in_window()

The result will be displayed in a new window.

Define a Condition#

We can also define a condition to validate that our model's performance is above a certain threshold. A condition is a function that takes the check's result as input and returns a ConditionResult object; here we use the built-in add_condition_test_performance_greater_than condition.

# show_only='worst' displays the worst-performing classes, and the condition
# requires every class's test performance to exceed 0.2.
check = ClassPerformance(show_only='worst')
check.add_condition_test_performance_greater_than(0.2)
result = check.run(train_ds, test_ds)
result.show()
Processing Train Batches: |#####| 1/1 [Time: 00:00]
Processing Test Batches:  |#####| 1/1 [Time: 00:00]
Computing Check:          |#####| 1/1 [Time: 00:00]
Class Performance


We detected that for several classes, our model's performance is below the threshold.
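
To act on these failures programmatically rather than visually, the raw metric values are available on the result object. For this check, result.value is a pandas DataFrame; the column names below are assumptions and may differ between versions, so inspect result.value.columns to confirm:

# result.value holds the raw per-class metric values as a DataFrame.
df = result.value

# Filter for test-set rows below the threshold set above
# ('Dataset' and 'Value' are assumed column names -- verify with df.columns).
failing = df[(df['Dataset'] == 'Test') & (df['Value'] < 0.2)]
print(failing)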

Total running time of the script: (0 minutes 11.969 seconds)
