.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "checks_gallery/tabular/performance/plot_performance_report.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_checks_gallery_tabular_performance_plot_performance_report.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_checks_gallery_tabular_performance_plot_performance_report.py:


Performance Report
******************

This notebook provides an overview of using and understanding the performance
report check.

**Structure:**

* `What is the purpose of the check? <#what-is-the-purpose-of-the-check>`__
* `Generate data & model <#generate-data-model>`__
* `Run the check <#run-the-check>`__
* `Define a condition <#define-a-condition>`__
* `Using alternative scorers <#using-alternative-scorers>`__

What is the purpose of the check?
=================================
This check helps you compare your model's performance between two datasets.
The default metrics are F1, Precision, and Recall for classification, and
Negative Root Mean Square Error, Negative Mean Absolute Error, and R2 for
regression. The RMSE and MAE scorers are negative because we follow the
sklearn convention for scoring functions, where a greater score is always
better (illustrated in a short sklearn sketch after the first check run
below).
`See scorers documentation <https://scikit-learn.org/stable/modules/model_evaluation.html#scoring>`_

.. GENERATED FROM PYTHON SOURCE LINES 26-28

Generate data & model
=====================

.. GENERATED FROM PYTHON SOURCE LINES 28-35

.. code-block:: default


    from deepchecks.tabular.datasets.classification.phishing import (
        load_data, load_fitted_model)

    train_dataset, test_dataset = load_data()
    model = load_fitted_model()

.. GENERATED FROM PYTHON SOURCE LINES 36-38

Run the check
=============

.. GENERATED FROM PYTHON SOURCE LINES 38-44

.. code-block:: default


    from deepchecks.tabular.checks.performance import PerformanceReport

    check = PerformanceReport()
    check.run(train_dataset, test_dataset, model)

*(Rendered check output: "Performance Report — Summarize given scores on a
dataset and model.", followed by the additional outputs.)*
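The report is also available programmatically: ``check.run`` returns a
``CheckResult`` whose ``value`` attribute holds the computed scores. A minimal
sketch, assuming that for ``PerformanceReport`` the value is a pandas
DataFrame (the exact column layout may vary between deepchecks versions):

.. code-block:: default


    # A minimal sketch: inspect the computed scores directly rather than
    # through the rendered report. Column names may differ across versions.
    result = check.run(train_dataset, test_dataset, model)
    print(result.value.head())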
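To illustrate the sign convention mentioned above, here is a small,
self-contained sklearn sketch; the toy data and ``DummyRegressor`` are for
illustration only:

.. code-block:: default


    # Illustration only: sklearn negates error metrics so that a greater
    # scorer value is always better.
    from sklearn.dummy import DummyRegressor
    from sklearn.metrics import get_scorer

    X, y = [[0], [1], [2], [3]], [0.0, 1.0, 2.0, 3.0]
    toy_model = DummyRegressor(strategy='mean').fit(X, y)  # always predicts 1.5

    neg_mae = get_scorer('neg_mean_absolute_error')
    print(neg_mae(toy_model, X, y))  # -1.0: a MAE of 1.0, reported negated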


.. GENERATED FROM PYTHON SOURCE LINES 45-51

Define a condition
==================
We can add a condition to the check that validates that our model's
performance doesn't degrade on new data. Let's add a condition to the check
and see what happens when it fails:

.. GENERATED FROM PYTHON SOURCE LINES 51-57

.. code-block:: default


    check = PerformanceReport()
    check.add_condition_train_test_relative_degradation_not_greater_than(0.05)
    result = check.run(train_dataset, test_dataset, model)
    result.show(show_additional_outputs=False)
*(Rendered check output: "Performance Report" with the conditions summary,
showing the failed condition.)*

.. GENERATED FROM PYTHON SOURCE LINES 58-59

We detected that for class "0" the Precision result degraded by more than 5%.
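To make the condition's arithmetic concrete, here is a hypothetical
illustration (the numbers are made up; this is not deepchecks' internal
code): with a threshold of 0.05, the test score may fall at most 5% below the
train score, relative to the train score:

.. code-block:: default


    # Hypothetical numbers, illustrating the relative-degradation rule only.
    train_precision, test_precision = 0.98, 0.92
    degradation = (train_precision - test_precision) / train_precision
    print(f'{degradation:.1%}')  # 6.1% - above the 5% threshold, so it fails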


.. GENERATED FROM PYTHON SOURCE LINES 61-64

Using alternative scorers
=========================
We can define alternative scorers that are not run by default:

.. GENERATED FROM PYTHON SOURCE LINES 64-71

.. code-block:: default


    from sklearn.metrics import fbeta_score, make_scorer

    fbeta_scorer = make_scorer(fbeta_score, labels=[0, 1], average=None, beta=0.2)

    check = PerformanceReport(alternative_scorers={'my scorer': fbeta_scorer})
    check.run(train_dataset, test_dataset, model)

*(Rendered check output: "Performance Report — Summarize given scores on a
dataset and model.", computed with the custom scorer, followed by the
additional outputs.)*
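Several scorers can be supplied through the same dictionary. A hedged sketch,
assuming only that ``alternative_scorers`` accepts multiple entries; the
scorer names and beta values here are illustrative:

.. code-block:: default


    # A sketch with two illustrative fbeta scorers; any sklearn-style
    # scorer built with make_scorer should work the same way.
    from sklearn.metrics import fbeta_score, make_scorer

    scorers = {
        'f2': make_scorer(fbeta_score, labels=[0, 1], average=None, beta=2),
        'f0.5': make_scorer(fbeta_score, labels=[0, 1], average=None, beta=0.5),
    }
    check = PerformanceReport(alternative_scorers=scorers)
    check.run(train_dataset, test_dataset, model)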


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 7.770 seconds)


.. _sphx_glr_download_checks_gallery_tabular_performance_plot_performance_report.py:


.. only:: html

  .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example


    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_performance_report.py <plot_performance_report.py>`


    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_performance_report.ipynb <plot_performance_report.ipynb>`


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_