Mean Average Recall Report#
This notebook provides an overview for using and understanding the Mean Average Recall Report check.
Structure:

What is the purpose of the check?
Imports
Generate Dataset
Run the check
Observe the check's output
Define a condition
What is the purpose of the check?#
The Mean Average Recall Report evaluates the mAR metric on the given model and data, and returns the mAR values per bounding box size category (small, medium, large). This check only works on the Object Detection task.
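For reference, the size categories follow the common COCO convention of bucketing boxes by pixel area. A minimal sketch of that bucketing (an illustration only, not deepchecks' internal implementation):

# Illustration of the COCO area-range convention for bucketing
# bounding boxes by size (not deepchecks' internal code).
def coco_size_category(width: float, height: float) -> str:
    area = width * height
    if area < 32 ** 2:    # small: area below 32^2 pixels
        return 'small'
    if area < 96 ** 2:    # medium: area between 32^2 and 96^2 pixels
        return 'medium'
    return 'large'        # large: area of 96^2 pixels or more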
Imports#
Note
In this example, we use the PyTorch version of the COCO dataset and model. In order to run this example using TensorFlow, please change the coco import statement to:
from deepchecks.vision.datasets.detection import coco_tensorflow as coco
from deepchecks.vision.checks import MeanAverageRecallReport
from deepchecks.vision.datasets.detection import coco_torch as coco
Generate Dataset#
We generate a sample dataset of 128 images from the COCO dataset, together with predictions from the YOLOv5 model.
For the label formatter, our dataset already returns labels in the accepted format, so our formatting function is simply the identity function lambda x: x (a hypothetical sketch of a non-trivial formatter follows the loading code below).
test_ds = coco.load_dataset(train=False, object_type='VisionData')
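If your dataset returned labels in a different layout, the formatting function would convert them instead of passing them through. A hypothetical sketch, assuming labels arrive as [x_min, y_min, x_max, y_max, class_id] rows and the check expects [class_id, x, y, w, h]:

import torch

def format_detection_labels(labels: torch.Tensor) -> torch.Tensor:
    # Hypothetical converter: reorder [x_min, y_min, x_max, y_max, class_id]
    # rows into [class_id, x_min, y_min, w, h] rows.
    formatted = torch.empty_like(labels)
    formatted[:, 0] = labels[:, 4]                 # class id first
    formatted[:, 1] = labels[:, 0]                 # x_min
    formatted[:, 2] = labels[:, 1]                 # y_min
    formatted[:, 3] = labels[:, 2] - labels[:, 0]  # width
    formatted[:, 4] = labels[:, 3] - labels[:, 1]  # height
    return formatted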
Run the check#
check = MeanAverageRecallReport()
result = check.run(test_ds)
result
Processing Batches:
|█████| 1/1 [Time: 00:00]
Computing Check:
|█████| 1/1 [Time: 00:00]
To display the results in an IDE like PyCharm, you can use the following code:
# result.show_in_window()
The result will be displayed in a new window.
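You can also save the result to a standalone HTML file and open it in any browser (the file name here is only an example):

result.save_as_html('mean_average_recall_report.html')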
Observe the check’s output#
The result value is a DataFrame that contains the average recall score for each area range and IoU threshold.
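Since the value is a plain pandas DataFrame, it can also be inspected programmatically, for example:

# The raw check value is a pandas DataFrame, so standard pandas
# operations apply:
df = result.value
print(df.shape)
print(df.columns)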
Define a condition#
We can define a condition that checks whether our model's average recall score is greater than a given threshold.
check = MeanAverageRecallReport().add_condition_test_average_recall_greater_than(0.4)
result = check.run(test_ds)
result.show(show_additional_outputs=False)
Processing Batches:
|█████| 1/1 [Time: 00:00]
Computing Check:
|█████| 1/1 [Time: 00:00]
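We can also verify programmatically whether the condition passed (this uses the passed_conditions() helper of CheckResult, available in recent deepchecks versions):

# True only if every condition attached to the check was satisfied.
print(result.passed_conditions())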
Total running time of the script: (0 minutes 2.383 seconds)