.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "vision/auto_checks/model_evaluation/plot_single_dataset_performance.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end ` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_vision_auto_checks_model_evaluation_plot_single_dataset_performance.py:

.. _vision__single_dataset_performance:

Single Dataset Performance
**************************

This notebook provides an overview for using and understanding the single dataset performance check.

**Structure:**

* `What Is the Purpose of the Check? <#what-is-the-purpose-of-the-check>`__
* `Generate Dataset <#generate-dataset>`__
* `Run the check <#run-the-check>`__
* `Define a condition <#define-a-condition>`__

What Is the Purpose of the Check?
=================================

This check returns the results of a dict of metrics, given in the format {metric name: scorer}, calculated
for the given model and dataset. The scorer should be either an sklearn scorer or a custom metric (see
:ref:`metrics_user_guide` for further details).
Use this check to evaluate the performance on a single vision dataset, such as a test set.

.. GENERATED FROM PYTHON SOURCE LINES 26-34

Generate Dataset
----------------

.. note::
    In this example, we use the pytorch version of the mnist dataset and model.
    In order to run this example using tensorflow, please change the import statements to::

        from deepchecks.vision.datasets.classification import mnist_tensorflow as mnist

.. GENERATED FROM PYTHON SOURCE LINES 34-38

.. code-block:: default

    from deepchecks.vision.checks import SingleDatasetPerformance
    from deepchecks.vision.datasets.classification import mnist_torch as mnist

.. GENERATED FROM PYTHON SOURCE LINES 39-42

.. code-block:: default

    train_ds = mnist.load_dataset(train=True, object_type='VisionData')
.. rst-class:: sphx-glr-script-out

.. code-block:: none

    You are using `torch.load` with `weights_only=False` (the current default value), which uses the default
    pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary
    code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for
    more details). In a future release, the default value for `weights_only` will be flipped to `True`. This
    limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed
    to be loaded via this mode unless they are explicitly allowlisted by the user via
    `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case
    where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related
    to this experimental feature.

.. GENERATED FROM PYTHON SOURCE LINES 43-47

Run the check
-------------

The check will use the default classification metrics - precision and recall.

.. GENERATED FROM PYTHON SOURCE LINES 47-53

.. code-block:: default

    check = SingleDatasetPerformance()
    result = check.run(train_ds)
    result.show()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Processing Batches: |     | 0/1 [Time: 00:00]
    Processing Batches: |█████| 1/1 [Time: 00:01]
    Processing Batches: |█████| 1/1 [Time: 00:01]
    Computing Check:    |     | 0/1 [Time: 00:00]
    Computing Check:    |█████| 1/1 [Time: 00:00]

.. raw:: html
    Single Dataset Performance


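To make the default output above concrete, here is a rough, library-independent sketch of how per-class precision and recall can be computed from labels and predictions. This is pure Python for illustration only, not the deepchecks implementation:

```python
from collections import defaultdict

def per_class_precision_recall(y_true, y_pred):
    """Illustrative per-class precision/recall (not the deepchecks implementation)."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for truth, pred in zip(y_true, y_pred):
        if truth == pred:
            tp[truth] += 1   # correct prediction for this class
        else:
            fp[pred] += 1    # the predicted class gets a false positive
            fn[truth] += 1   # the true class gets a false negative
    classes = sorted(set(y_true) | set(y_pred))
    precision = {c: tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0 for c in classes}
    recall = {c: tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0 for c in classes}
    return precision, recall

precision, recall = per_class_precision_recall([0, 0, 1, 1], [0, 1, 1, 1])
# precision[0] == 1.0, recall[0] == 0.5; precision[1] == 2/3, recall[1] == 1.0
```

In the real check, the per-class scores are computed over the model's predictions for every batch of the dataset.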
.. GENERATED FROM PYTHON SOURCE LINES 54-55

To display the results in an IDE like PyCharm, you can use the following code:

.. GENERATED FROM PYTHON SOURCE LINES 55-57

.. code-block:: default

    # result.show_in_window()

.. GENERATED FROM PYTHON SOURCE LINES 58-59

The result will be displayed in a new window.

.. GENERATED FROM PYTHON SOURCE LINES 61-63

Now we will run the check with a metric different from the defaults - F1.
You can read more about setting metrics in the :ref:`Metrics Guide `.

.. GENERATED FROM PYTHON SOURCE LINES 63-68

.. code-block:: default

    check = SingleDatasetPerformance(scorers={'f1': 'f1_per_class'})
    result = check.run(train_ds)
    result

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Processing Batches: |     | 0/1 [Time: 00:00]
    Processing Batches: |█████| 1/1 [Time: 00:01]
    Processing Batches: |█████| 1/1 [Time: 00:01]
    Computing Check:    |     | 0/1 [Time: 00:00]
    Computing Check:    |█████| 1/1 [Time: 00:00]

.. raw:: html
    Single Dataset Performance


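For reference, F1 is the harmonic mean of precision and recall, so each per-class F1 value can be derived from that class's precision and recall scores. A minimal sketch:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; defined as 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

f1_score(0.5, 1.0)  # -> 2 * 0.5 * 1.0 / 1.5 ≈ 0.667
```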
.. GENERATED FROM PYTHON SOURCE LINES 69-74

Define a Condition
==================

We can define a condition to validate that our model performance score is above or below a certain threshold.
The condition is defined as a function that takes the results of the check as input and returns a
ConditionResult object.

.. GENERATED FROM PYTHON SOURCE LINES 74-80

.. code-block:: default

    check = SingleDatasetPerformance()
    check.add_condition_greater_than(0.5)
    result = check.run(train_ds)
    result.show(show_additional_outputs=False)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Processing Batches: |     | 0/1 [Time: 00:00]
    Processing Batches: |█████| 1/1 [Time: 00:01]
    Processing Batches: |█████| 1/1 [Time: 00:01]
    Computing Check:    |     | 0/1 [Time: 00:00]
    Computing Check:    |█████| 1/1 [Time: 00:00]

.. raw:: html
    Single Dataset Performance


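Conceptually, `add_condition_greater_than(0.5)` attaches a callable that inspects every computed score and fails if any of them does not exceed the threshold. A pure-Python sketch of that logic (the `scores` mapping and function names here are hypothetical illustrations, not the deepchecks internals):

```python
def condition_greater_than(scores, threshold):
    """Pass only if every (metric, class) score exceeds the threshold.

    Hypothetical sketch: `scores` maps (metric name, class) -> score.
    """
    failed = {key: value for key, value in scores.items() if value <= threshold}
    passed = len(failed) == 0
    detail = 'Passed' if passed else f'Found scores at or below {threshold}: {failed}'
    return passed, detail

# Hypothetical scores keyed by (metric, class):
scores = {('Precision', '0'): 0.9, ('Recall', '0'): 0.4}
passed, detail = condition_greater_than(scores, 0.5)
# passed is False, because ('Recall', '0') scored 0.4 <= 0.5
```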
.. GENERATED FROM PYTHON SOURCE LINES 81-83

We can also define a condition on a specific metric (or a subset of the metrics) that was passed to the check,
and on a specific class, instead of testing all the metrics and all the classes, which is the default mode.

.. GENERATED FROM PYTHON SOURCE LINES 83-88

.. code-block:: default

    check = SingleDatasetPerformance()
    check.add_condition_greater_than(0.8, metrics=['Precision'], class_mode='3')
    result = check.run(train_ds)
    result.show(show_additional_outputs=False)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Processing Batches: |     | 0/1 [Time: 00:00]
    Processing Batches: |█████| 1/1 [Time: 00:01]
    Processing Batches: |█████| 1/1 [Time: 00:01]
    Computing Check:    |     | 0/1 [Time: 00:00]
    Computing Check:    |█████| 1/1 [Time: 00:00]

.. raw:: html
    Single Dataset Performance


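The same thresholding logic, narrowed to chosen metrics and a single class, can be sketched as follows (again a pure-Python illustration with hypothetical names, rather than the deepchecks internals):

```python
def condition_greater_than_subset(scores, threshold, metrics=None, class_name=None):
    """Apply a greater-than threshold only to the selected metrics and class.

    Hypothetical sketch: `scores` maps (metric name, class) -> score; passing
    metrics=None or class_name=None means "no filtering" on that axis.
    """
    relevant = {
        (metric, cls): value
        for (metric, cls), value in scores.items()
        if (metrics is None or metric in metrics)
        and (class_name is None or cls == class_name)
    }
    return all(value > threshold for value in relevant.values())

# Hypothetical per-class scores: only Precision for class '3' is tested below.
scores = {('Precision', '3'): 0.95, ('Precision', '7'): 0.6, ('Recall', '3'): 0.5}
condition_greater_than_subset(scores, 0.8, metrics=['Precision'], class_name='3')  # -> True
```

Without the filters, the same threshold would fail, since the scores for Precision on class '7' and Recall on class '3' fall below 0.8.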
.. rst-class:: sphx-glr-timing

    **Total running time of the script:** (0 minutes 7.056 seconds)

.. _sphx_glr_download_vision_auto_checks_model_evaluation_plot_single_dataset_performance.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_single_dataset_performance.py `

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_single_dataset_performance.ipynb `

.. only:: html

    .. rst-class:: sphx-glr-signature

        `Gallery generated by Sphinx-Gallery `_