.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "vision/auto_checks/model_evaluation/plot_mean_average_precision_report.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_vision_auto_checks_model_evaluation_plot_mean_average_precision_report.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_vision_auto_checks_model_evaluation_plot_mean_average_precision_report.py:


.. _vision__mean_average_precision_report:

Mean Average Precision Report
*****************************

This notebook provides an overview for using and understanding the mean average precision report check.

**Structure:**

* `What is the purpose of the check? <#what-is-the-purpose-of-the-check>`__
* `Generate Dataset <#generate-dataset>`__
* `Run the check <#run-the-check>`__
* `Define a condition <#define-a-condition>`__

What Is the Purpose of the Check?
=================================
The Mean Average Precision Report evaluates the mAP metric on the given model and data,
plots the AP on a graph, and returns the mAP values per bounding box size category
(small, medium, large).

This check only works on the Object Detection task.

.. GENERATED FROM PYTHON SOURCE LINES 28-39

Generate Dataset
================

We generate a sample dataset of 128 images from the COCO dataset, and use the YOLOv5 model.

.. note::
    In this example, we use the pytorch version of the coco dataset and model. In order to run this example
    using tensorflow, please change the import statements to::

        from deepchecks.vision.datasets.detection import coco_tensorflow as coco

.. GENERATED FROM PYTHON SOURCE LINES 39-45

.. code-block:: default


    from deepchecks.vision.checks import MeanAveragePrecisionReport
    from deepchecks.vision.datasets.detection import coco_torch as coco

    test_ds = coco.load_dataset(train=False, object_type='VisionData')

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.

.. GENERATED FROM PYTHON SOURCE LINES 46-48

Run the check
=============

.. GENERATED FROM PYTHON SOURCE LINES 48-53

.. code-block:: default


    check = MeanAveragePrecisionReport()
    result = check.run(test_ds)
    result

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Processing Batches: |     | 0/1 [Time: 00:00]
    Processing Batches: |█████| 1/1 [Time: 00:00]
    Processing Batches: |█████| 1/1 [Time: 00:00]
    Computing Check: |     | 0/1 [Time: 00:00]
    Computing Check: |█████| 1/1 [Time: 00:00]
    Computing Check: |█████| 1/1 [Time: 00:00]

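The scores in this report (mAP@.50, mAP@.75, and the averaged mAP@[.50::.95]) are all based on the
Intersection over Union (IoU) between predicted and ground-truth boxes: a prediction is counted as correct
only when its IoU with a matching ground-truth box exceeds the given threshold. For intuition only, here is
a minimal sketch of that IoU computation for axis-aligned ``[x1, y1, x2, y2]`` boxes; it is an illustration,
not the implementation deepchecks uses.

.. code-block:: python


    def box_iou(box_a, box_b):
        """Intersection over Union of two axis-aligned [x1, y1, x2, y2] boxes (illustrative only)."""
        # Coordinates of the intersection rectangle
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        # Clamp to zero when the boxes do not overlap
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter)

    # IoU of 0.8, so this prediction would count as a hit at both the 0.5 and 0.75 thresholds
    print(box_iou([0, 0, 100, 100], [20, 0, 100, 100]))
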
.. GENERATED FROM PYTHON SOURCE LINES 54-55

To display the results in an IDE like PyCharm, you can use the following code:

.. GENERATED FROM PYTHON SOURCE LINES 55-57

.. code-block:: default


    # result.show_in_window()

.. GENERATED FROM PYTHON SOURCE LINES 58-59

The result will be displayed in a new window.

.. GENERATED FROM PYTHON SOURCE LINES 62-67

Observe the check’s output
--------------------------
The result value is a dataframe that holds the Mean Average Precision score for different bounding box area sizes.
We report the mAP at two fixed IoU thresholds, 0.5 and 0.75, as well as the average of the mAP values over
IoU thresholds between 0.5 and 0.95 (in steps of 0.05).

.. GENERATED FROM PYTHON SOURCE LINES 67-70

.. code-block:: default


    result.value

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

                                  mAP@[.50::.95] (avg.%)  mAP@.50 (%)  mAP@.75 (%)
    Area size
    All                                         0.409436     0.566673     0.425339
    Small (area < 32^2)                         0.212816     0.342429     0.212868
    Medium (32^2 < area < 96^2)                 0.383089     0.600228     0.349863
    Large (area > 96^2)                         0.541146     0.674493     0.585378

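``result.value`` is a plain pandas DataFrame, indexed by area size with one column per IoU setting, so
individual scores can also be read programmatically. A small sketch, assuming the index and column labels
shown in the table above:

.. code-block:: python


    # Continues from the `result` computed above; labels are taken from the table.
    df = result.value

    # mAP at IoU 0.50 over all bounding box sizes
    map_50_all = df.loc['All', 'mAP@.50 (%)']

    # Average mAP over IoU thresholds 0.50-0.95 for small objects only
    small_avg = df.loc['Small (area < 32^2)', 'mAP@[.50::.95] (avg.%)']

    print(f'mAP@.50, all sizes: {map_50_all:.3f}')
    print(f'avg. mAP, small objects: {small_avg:.3f}')
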


.. GENERATED FROM PYTHON SOURCE LINES 71-75

Define a condition
==================
We can define a condition that checks whether our model's mean average precision score is not less than
a given threshold for all bounding box sizes.

.. GENERATED FROM PYTHON SOURCE LINES 75-79

.. code-block:: default


    check = MeanAveragePrecisionReport().add_condition_average_mean_average_precision_greater_than(0.4)
    result = check.run(test_ds)
    result.show(show_additional_outputs=False)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Processing Batches: |     | 0/1 [Time: 00:00]
    Processing Batches: |█████| 1/1 [Time: 00:00]
    Processing Batches: |█████| 1/1 [Time: 00:00]
    Computing Check: |     | 0/1 [Time: 00:00]
    Computing Check: |█████| 1/1 [Time: 00:00]
    Computing Check: |█████| 1/1 [Time: 00:00]

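As described above, the condition requires the averaged mAP to be above the 0.4 threshold for every
bounding box size. As a rough sketch of the comparison it describes (shown here directly on
``result.value``, using the column label from the table above, not the deepchecks internals):

.. code-block:: python


    # Rough equivalent of the condition's logic, not the deepchecks internals.
    threshold = 0.4
    avg_map = result.value['mAP@[.50::.95] (avg.%)']

    print((avg_map > threshold).all())    # True only if every area size clears the threshold
    print(avg_map[avg_map <= threshold])  # area sizes that would make the condition fail
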
.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 2.536 seconds)


.. _sphx_glr_download_vision_auto_checks_model_evaluation_plot_mean_average_precision_report.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_mean_average_precision_report.py <plot_mean_average_precision_report.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_mean_average_precision_report.ipynb <plot_mean_average_precision_report.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_