.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "checks_gallery/vision/performance/plot_class_performance.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here ` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_checks_gallery_vision_performance_plot_class_performance.py:


Class Performance
*****************

This notebook provides an overview of how to use and understand the class
performance check.

**Structure:**

* `What is the purpose of the check? <#what-is-the-purpose-of-the-check>`__
* `Classification <#classification-performance-report>`__

  - `Generate data & model <#generate-data-and-model>`__
  - `Run the check <#run-the-check>`__

* `Object Detection <#object-detection-class-performance>`__

  - `Generate data & model <#id1>`__
  - `Run the check <#id2>`__

What Is the Purpose of the Check?
=================================
The class performance check evaluates several metrics on the given model and
data and returns all of the results in a single check.

The check uses the following default metrics:

=================  ===========================
Task Type          Metric
=================  ===========================
Classification     Precision
Classification     Recall
Object Detection   `Average Precision `__
Object Detection   `Average Recall `__
=================  ===========================

In addition to the default metrics, the check supports custom metrics, which
should be implemented using the pytorch-ignite `Metric `__ API. These can be
passed as a list via the ``alternative_metrics`` parameter of the check,
which overrides the default metrics.

.. GENERATED FROM PYTHON SOURCE LINES 45-47

Imports
-------

.. GENERATED FROM PYTHON SOURCE LINES 47-51

.. code-block:: default


    from deepchecks.vision.checks.performance import ClassPerformance
    from deepchecks.vision.datasets.classification import mnist

.. GENERATED FROM PYTHON SOURCE LINES 52-56

Classification Performance Report
=================================
Generate data and model:
------------------------

.. GENERATED FROM PYTHON SOURCE LINES 56-62

.. code-block:: default


    mnist_model = mnist.load_model()
    train_ds = mnist.load_dataset(train=True, object_type='VisionData')
    test_ds = mnist.load_dataset(train=False, object_type='VisionData')

.. GENERATED FROM PYTHON SOURCE LINES 63-65

Run the check
-------------

.. GENERATED FROM PYTHON SOURCE LINES 65-69

.. code-block:: default


    check = ClassPerformance()
    check.run(train_ds, test_ds, mnist_model)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Class Performance

    Summarize given metrics on a dataset and model.

    Additional Outputs

    Note - data sampling: Running on 10000 train data samples out of 60000. Sample size can be controlled with the "n_samples" parameter.

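The per-class Precision and Recall reported for classification tasks can be
sketched without any framework. The helper and toy labels below are purely
illustrative (they are not deepchecks internals):

```python
from collections import Counter


def per_class_precision_recall(y_true, y_pred):
    """Per-class precision and recall from flat label sequences."""
    true_counts = Counter(y_true)  # TP + FN per class
    pred_counts = Counter(y_pred)  # TP + FP per class
    tp = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    classes = sorted(set(y_true) | set(y_pred))
    return {
        c: {
            "Precision": tp[c] / pred_counts[c] if pred_counts[c] else 0.0,
            "Recall": tp[c] / true_counts[c] if true_counts[c] else 0.0,
        }
        for c in classes
    }


# Toy example: class 1 is over-predicted, so class 0 loses recall.
scores = per_class_precision_recall([0, 0, 1, 1], [0, 1, 1, 1])
print(scores[0])  # {'Precision': 1.0, 'Recall': 0.5}
```

The check computes exactly these kinds of per-class scores on both the train
and test datasets, which is what lets it highlight classes whose performance
degrades between the two.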
.. GENERATED FROM PYTHON SOURCE LINES 70-76

Object Detection Class Performance
==================================
For object detection tasks, the default metric that is calculated is the
Average Precision. The definition of Average Precision is identical to the
one used by the COCO benchmark: the mean of the average precision per class,
over the IoU thresholds from 0.5 to 0.95 in steps of 0.05.

.. GENERATED FROM PYTHON SOURCE LINES 76-79

.. code-block:: default


    from deepchecks.vision.datasets.detection import coco

.. GENERATED FROM PYTHON SOURCE LINES 80-84

Generate Data and Model
-----------------------
We generate a sample dataset of 128 images from the `COCO dataset `__ and use
a pretrained `YOLOv5 model `__.

.. GENERATED FROM PYTHON SOURCE LINES 84-90

.. code-block:: default


    yolo = coco.load_model(pretrained=True)

    train_ds = coco.load_dataset(train=True, object_type='VisionData')
    test_ds = coco.load_dataset(train=False, object_type='VisionData')

.. GENERATED FROM PYTHON SOURCE LINES 91-93

Run the check
-------------

.. GENERATED FROM PYTHON SOURCE LINES 93-97

.. code-block:: default


    check = ClassPerformance(show_only='best')
    check.run(train_ds, test_ds, yolo)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Class Performance

    Summarize given metrics on a dataset and model.

    Additional Outputs

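The COCO-style averaging described above rests on intersection-over-union
(IoU) matching at a sweep of thresholds. Here is a minimal, self-contained
sketch of that idea with made-up boxes; the real metric additionally ranks
predictions by confidence and integrates a precision-recall curve, which
this deliberately omits:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


# The COCO IoU thresholds: 0.50, 0.55, ..., 0.95
IOU_THRESHOLDS = [0.5 + 0.05 * i for i in range(10)]


def recall_at(threshold, gt_boxes, pred_boxes):
    """Fraction of ground-truth boxes matched by any prediction at this IoU."""
    hits = sum(any(iou(g, p) >= threshold for p in pred_boxes) for g in gt_boxes)
    return hits / len(gt_boxes) if gt_boxes else 0.0


gt = [(0, 0, 10, 10)]
pred = [(1, 1, 10, 10)]  # overlaps the ground-truth box with IoU 0.81

# A detection that is "good enough" at IoU 0.5 stops counting at the
# stricter thresholds, which is what the averaging penalizes.
avg_recall = sum(recall_at(t, gt, pred) for t in IOU_THRESHOLDS) / len(IOU_THRESHOLDS)
print(avg_recall)  # 0.7: the match survives thresholds 0.50 through 0.80
```

This is why a model with sloppy box localization can score well at a fixed
IoU of 0.5 yet poorly on the COCO-averaged metric the check reports.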
.. GENERATED FROM PYTHON SOURCE LINES 98-103

Define a Condition
==================
We can also define a condition to validate that our model performance is
above a certain threshold. The condition is defined as a function that takes
the results of the check as input and returns a ConditionResult object.

.. GENERATED FROM PYTHON SOURCE LINES 103-109

.. code-block:: default


    check = ClassPerformance(show_only='worst')
    check.add_condition_test_performance_not_less_than(0.2)
    result = check.run(train_ds, test_ds, yolo)
    result

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Class Performance

    Summarize given metrics on a dataset and model.

    Conditions Summary
    Status Condition More Info
    Scores are not less than 0.2 Found metrics with scores below threshold: [{'Class Name': 'sink', 'Metric': 'Average Recall', 'Value': 0.3}, {'Class Name': 'kite', 'Metric': 'Average Recall', 'Value': 0.275}, {'Class Name': 'spoon', 'Metric': 'Average Precision', 'Value': 0.2524752475247524}, {'Class Name': 'spoon', 'Metric': 'Average Recall', 'Value': 0.25}, {'Class Name': 'sports ball', 'Metric': 'Average Recall', 'Value': 0.22000000000000003}, {'Class Name': 'bottle', 'Metric': 'Average Precision', 'Value': 0.20792079207920783}, {'Class Name': 'refrigerator', 'Metric': 'Average Precision', 'Value': 0.20198019801980194}, {'Class Name': 'boat', 'Metric': 'Average Recall', 'Value': 0.2}, {'Class Name': 'refrigerator', 'Metric': 'Average Recall', 'Value': 0.2}, {'Class Name': 'bottle', 'Metric': 'Average Recall', 'Value': 0.19999999999999998}, {'Class Name': 'boat', 'Metric': 'Average Precision', 'Value': 0.18514851485148517}, {'Class Name': 'sports ball', 'Metric': 'Average Precision', 'Value': 0.15577557755775578}, {'Class Name': 'sink', 'Metric': 'Average Precision', 'Value': 0.15148514851485145}, {'Class Name': 'car', 'Metric': 'Average Recall', 'Value': 0.14705882352941177}, {'Class Name': 'book', 'Metric': 'Average Recall', 'Value': 0.12916666666666665}, {'Class Name': 'kite', 'Metric': 'Average Precision', 'Value': 0.1267326732673267}, {'Class Name': 'car', 'Metric': 'Average Precision', 'Value': 0.11146628948609147}, {'Class Name': 'bicycle', 'Metric': 'Average Precision', 'Value': 0.10396039603960391}, {'Class Name': 'cell phone', 'Metric': 'Average Precision', 'Value': 0.10198019801980196}, {'Class Name': 'bicycle', 'Metric': 'Average Recall', 'Value': 0.1}, {'Class Name': 'cell phone', 'Metric': 'Average Recall', 'Value': 0.09999999999999998}, {'Class Name': 'book', 'Metric': 'Average Precision', 'Value': 0.09405940594059405}, {'Class Name': 'handbag', 'Metric': 'Average Precision', 'Value': 0.038613861386138607}, {'Class Name': 'handbag', 'Metric': 'Average Recall', 'Value': 0.0375}, {'Class Name': 'truck', 'Metric': 'Average Precision', 'Value': 0.0}, {'Class Name': 'traffic light', 'Metric': 'Average Precision', 'Value': 0.0}, {'Class Name': 'baseball bat', 'Metric': 'Average Precision', 'Value': 0.0}, {'Class Name': 'fork', 'Metric': 'Average Precision', 'Value': 0.0}, {'Class Name': 'knife', 'Metric': 'Average Precision', 'Value': 0.0}, {'Class Name': 'laptop', 'Metric': 'Average Precision', 'Value': 0.0}, {'Class Name': 'mouse', 'Metric': 'Average Precision', 'Value': 0.0}, {'Class Name': 'oven', 'Metric': 'Average Precision', 'Value': 0.0}, {'Class Name': 'truck', 'Metric': 'Average Recall', 'Value': 0.0}, {'Class Name': 'traffic light', 'Metric': 'Average Recall', 'Value': 0.0}, {'Class Name': 'baseball bat', 'Metric': 'Average Recall', 'Value': 0.0}, {'Class Name': 'fork', 'Metric': 'Average Recall', 'Value': 0.0}, {'Class Name': 'knife', 'Metric': 'Average Recall', 'Value': 0.0}, {'Class Name': 'laptop', 'Metric': 'Average Recall', 'Value': 0.0}, {'Class Na...

    Additional Outputs

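The built-in ``add_condition_test_performance_not_less_than`` condition
effectively scans the per-class score table and fails if any score falls
below the threshold. A simplified stand-in for that logic, using a row
format that mimics the output above (this is not the deepchecks
implementation):

```python
def scores_not_less_than(rows, threshold):
    """Return (passed, failing), where `failing` lists rows below threshold."""
    failing = [r for r in rows if r["Value"] < threshold]
    return (len(failing) == 0, failing)


# Two rows shaped like the check's per-class results table.
rows = [
    {"Class Name": "sink", "Metric": "Average Recall", "Value": 0.3},
    {"Class Name": "truck", "Metric": "Average Precision", "Value": 0.0},
]
passed, failing = scores_not_less_than(rows, 0.2)
print(passed)  # False: 'truck' falls below the 0.2 threshold
```

Returning the failing rows alongside the pass/fail flag mirrors how the
condition's "More Info" column lists every offending class and metric, which
makes the failure directly actionable.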
.. GENERATED FROM PYTHON SOURCE LINES 110-111

We detected that for several classes our model performance is below the
threshold.


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 1 minutes 11.675 seconds)


.. _sphx_glr_download_checks_gallery_vision_performance_plot_class_performance.py:


.. only :: html

 .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example


  .. container:: sphx-glr-download sphx-glr-download-python

     :download:`Download Python source code: plot_class_performance.py `


  .. container:: sphx-glr-download sphx-glr-download-jupyter

     :download:`Download Jupyter notebook: plot_class_performance.ipynb `


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_