.. DO NOT EDIT. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
   TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
   "nlp/auto_checks/model_evaluation/plot_property_segments_performance.py"
   LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end ` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_nlp_auto_checks_model_evaluation_plot_property_segments_performance.py:

.. _nlp__property_segments_performance:

Property Segments Performance
*****************************

This notebook provides an overview of using and understanding the property segments performance check.

**Structure:**

* `What is the purpose of the check? <#what-is-the-purpose-of-the-check>`__
* `Automatically detecting weak segments <#automatically-detecting-weak-segments>`__
* `Generate data & model <#generate-data-model>`__
* `Run the check <#run-the-check>`__
* `Define a condition <#define-a-condition>`__

What is the purpose of the check?
=================================

The check is designed to help you easily identify the model's weakest segments based on the provided
:ref:`properties `. In addition, it lets you provide a subset of the properties to use, thus limiting
the check to search only in subspaces of interest.

Automatically detecting weak segments
=====================================

The check consists of several steps:

#. We calculate the loss for each sample in the dataset using the provided model, via either log-loss or MSE depending on the task type.

#. We train multiple simple tree-based models, each one trained on exactly two properties (out of those selected above) to predict the per-sample error calculated before.

#. We extract the data samples corresponding to each leaf in each of the trees (the data segments) and calculate the model's performance on them. For the weakest data segments detected, we also calculate the model's performance on the data segments surrounding them.
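The steps above can be sketched in miniature. The snippet below is a deliberately simplified, self-contained illustration of the idea, not deepchecks' implementation: it computes a per-sample log-loss (step 1) and, instead of fitting depth-limited trees, segments the data by median splits on every pair of properties (standing in for steps 2-3), then ranks the segments by mean loss. All names (``find_weak_segments``, the toy data) are illustrative.

.. code-block:: python

    import math
    from itertools import combinations

    def per_sample_log_loss(probas, labels, eps=1e-12):
        """Per-sample log-loss (step 1)."""
        return [-math.log(max(p[y], eps)) for p, y in zip(probas, labels)]

    def median(values):
        s = sorted(values)
        n = len(s)
        return (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

    def find_weak_segments(properties, probas, labels, min_size_ratio=0.05):
        """For every pair of properties, split at the medians and score the
        resulting segments by mean per-sample loss (a median-split stand-in
        for the fitted trees of steps 2-3)."""
        losses = per_sample_log_loss(probas, labels)
        n = len(losses)
        segments = []
        for p1, p2 in combinations(properties, 2):
            t1, t2 = median(properties[p1]), median(properties[p2])
            for low1 in (True, False):
                for low2 in (True, False):
                    idx = [i for i in range(n)
                           if (properties[p1][i] <= t1) == low1
                           and (properties[p2][i] <= t2) == low2]
                    if idx and len(idx) >= min_size_ratio * n:
                        mean_loss = sum(losses[i] for i in idx) / len(idx)
                        segments.append((mean_loss, p1, p2, idx))
        return sorted(segments, key=lambda s: -s[0])  # weakest first

    # Toy data: the model is confident on short/low-toxicity samples only.
    props = {'Text Length': [10, 12, 90, 95, 11, 13, 92, 97],
             'Toxicity':    [0.1, 0.2, 0.8, 0.9, 0.15, 0.25, 0.85, 0.95]}
    probas = [[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.3, 0.7],
              [0.85, 0.15], [0.75, 0.25], [0.45, 0.55], [0.35, 0.65]]
    labels = [0] * 8
    weakest = find_weak_segments(props, probas, labels)[0]
    print(weakest[1], weakest[2], sorted(weakest[3]))

The real check fits actual decision trees per property pair and also evaluates the neighborhood of each weak leaf, but the ranking-by-per-sample-loss mechanism is the same.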
.. GENERATED FROM PYTHON SOURCE LINES 43-45

Generate data & model
=====================

.. GENERATED FROM PYTHON SOURCE LINES 45-53

.. code-block:: default

    from deepchecks.nlp.datasets.classification.tweet_emotion import load_data, load_precalculated_predictions

    _, test_dataset = load_data(data_format='TextData')
    _, test_probas = load_precalculated_predictions(pred_format='probabilities')

    test_dataset.properties.head(3)
.. list-table::
   :header-rows: 1

   * - Text Length
     - Average Word Length
     - Max Word Length
     - % Special Characters
     - Language
     - Sentiment
     - Subjectivity
     - Toxicity
     - Fluency
     - Formality
   * - 104
     - 5.058824
     - 11
     - 0.057692
     - en
     - -0.155556
     - 0.288889
     - 0.001683
     - 0.896180
     - 0.387794
   * - 98
     - 6.071429
     - 16
     - 0.061224
     - en
     - -0.250000
     - 0.750000
     - 0.020605
     - 0.862289
     - 0.224011
   * - 65
     - 5.000000
     - 11
     - 0.015385
     - en
     - -0.175000
     - 0.950000
     - 0.001355
     - 0.884989
     - 0.032200


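The properties shown above are precomputed by deepchecks. To build intuition for what a few of them measure, here is a rough, stdlib-only approximation. These are not deepchecks' exact definitions (its notion of a "special character" or its tokenization may well differ); they only show the idea.

.. code-block:: python

    import string

    def basic_text_properties(text):
        """Rough, illustrative versions of a few properties from the table
        above -- NOT deepchecks' exact implementations."""
        words = text.split()
        specials = sum(1 for ch in text if ch in string.punctuation)
        return {
            'Text Length': len(text),
            'Average Word Length': sum(len(w) for w in words) / len(words),
            'Max Word Length': max(len(w) for w in words),
            '% Special Characters': specials / len(text),
        }

    print(basic_text_properties("No times are tough, when we have @deepchecks!"))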
.. GENERATED FROM PYTHON SOURCE LINES 54-80

Run the check
=============

The check has several key parameters (all of them optional) that affect its behavior and especially its output.

``properties / ignore_properties``: Controls which properties are searched for weak segments. By default, all of the provided properties are used.

``alternative_scorer``: Determines the metric used to measure the model's performance on the different segments. It is important to select a metric that is relevant to the data domain and the task you are performing. For additional information on scorers and how to use them see :ref:`Metrics Guide `.

``segment_minimum_size_ratio``: Determines the minimum size of segments that are of interest. The check will return data segments that contain at least this fraction of the total data samples. It is recommended to try different configurations of this parameter, as larger segments can be of interest even if the model performance on them is superior.

``categorical_aggregation_threshold``: By default the check combines rare categories into a single category called "Other". This parameter determines the frequency threshold below which categories are mapped into the "Other" category.

``multiple_segments_per_column``: If True, the same property is allowed to be a segmenting feature in multiple segments; otherwise each property can appear in at most one segment. False by default.

See the :class:`API reference ` for more details.

.. GENERATED FROM PYTHON SOURCE LINES 80-90

.. code-block:: default

    from deepchecks.nlp.checks import PropertySegmentsPerformance
    from sklearn.metrics import make_scorer, f1_score

    scorer = {'f1': make_scorer(f1_score, average='micro')}
    check = PropertySegmentsPerformance(alternative_scorer=scorer,
                                        segment_minimum_size_ratio=0.03)
    result = check.run(test_dataset, probabilities=test_probas)
    result.show()


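One of the parameters described above, ``categorical_aggregation_threshold``, merges rare categories into an ``"Other"`` bucket before segmentation. A minimal sketch of that behavior (an illustrative re-implementation, not deepchecks' internal code):

.. code-block:: python

    from collections import Counter

    def aggregate_rare_categories(values, threshold=0.05):
        """Map categories whose relative frequency is below `threshold` to a
        single "Other" category -- the behavior controlled by the check's
        `categorical_aggregation_threshold` parameter (illustrative only)."""
        freqs = Counter(values)
        n = len(values)
        return [v if freqs[v] / n >= threshold else 'Other' for v in values]

    # 18 English samples plus one Spanish and one French: the two rare
    # language categories get folded into "Other".
    langs = ['en'] * 18 + ['es'] + ['fr']
    print(Counter(aggregate_rare_categories(langs, threshold=0.07)))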
.. GENERATED FROM PYTHON SOURCE LINES 91-98

Observe the check's output
--------------------------

We can see in the results that the check indeed found several segments on which the model's performance is below average. In the heatmap display we can see the model's performance on the weakest segments and on their environment, with respect to the two features that define each segment. To get the full list of weak segments found, we inspect the ``result.value`` attribute. Shown below are the 3 segments with the worst performance.

.. GENERATED FROM PYTHON SOURCE LINES 98-102

.. code-block:: default

    result.value['weak_segments_list'].head(3)
.. list-table::
   :header-rows: 1

   * - f1 Score
     - Feature1
     - Feature1 Range
     - Feature2
     - Feature2 Range
     - % of Data
     - Samples in Segment
   * - 0.492308
     - Max Word Length
     - (-inf, 9.5)
     - Subjectivity
     - (0.36249999701976776, 0.45724207162857056)
     - 3.29
     - [11, 20, 72, 92, 114, 126, 150, 162, 178, 180,...]
   * - 0.511628
     - Toxicity
     - (-inf, 0.0007132745522540063)
     - None
     - None
     - 4.35
     - [63, 68, 87, 106, 145, 170, 171, 184, 208, 229...]
   * - 0.583784
     - Sentiment
     - (-0.44305555522441864, 0.5099999904632568)
     - Fluency
     - (0.8132111132144928, 0.8792168200016022)
     - 9.35
     - [1, 5, 15, 56, 65, 79, 88, 93, 94, 100, 106, 1...]


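Beyond ``head(3)``, the weak-segments table can also be consumed programmatically. A sketch, assuming the DataFrame rows are converted to plain dicts with ``to_dict('records')``; the column names come from the output above, and the helper name ``segments_below`` is hypothetical:

.. code-block:: python

    def segments_below(segments, score_threshold):
        """Return the detected segments whose score is below a threshold.
        `segments` mimics result.value['weak_segments_list'] after
        DataFrame.to_dict('records'); the helper name is hypothetical."""
        return [s for s in segments if s['f1 Score'] < score_threshold]

    # Rows abridged from the table above:
    rows = [
        {'f1 Score': 0.492308, 'Feature1': 'Max Word Length', 'Feature2': 'Subjectivity'},
        {'f1 Score': 0.511628, 'Feature1': 'Toxicity', 'Feature2': None},
        {'f1 Score': 0.583784, 'Feature1': 'Sentiment', 'Feature2': 'Fluency'},
    ]
    print([s['Feature1'] for s in segments_below(rows, 0.55)])  # the two weakest segments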
.. GENERATED FROM PYTHON SOURCE LINES 103-109

Define a condition
==================

We can add a condition that validates that the model's performance on the weakest detected segment is above a certain threshold. A scenario where this can be useful is when we want to make sure that the model is not underperforming on a subset of the data that is of interest to us.

.. GENERATED FROM PYTHON SOURCE LINES 109-115

.. code-block:: default

    # Let's add a condition and re-run the check:
    check.add_condition_segments_relative_performance_greater_than(0.1)
    result = check.run(test_dataset, probabilities=test_probas)
    result.show(show_additional_outputs=False)


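To make the condition's semantics concrete, here is a hedged sketch of what a relative-performance condition of this kind verifies, assuming "relative performance" means the ratio between the weakest segment's score and the score on the full dataset. This is an assumption; consult the deepchecks implementation for the exact formula it uses.

.. code-block:: python

    def segments_relative_performance_greater_than(weakest_score, overall_score,
                                                   threshold=0.1):
        """Hedged sketch of the condition's intent, ASSUMING 'relative
        performance' is the ratio between the weakest segment's score and
        the score on the full dataset (not necessarily deepchecks' exact
        formula)."""
        return (weakest_score / overall_score) > threshold

    # Weakest segment found above (f1 = 0.492308) against a hypothetical
    # overall f1 of 0.70:
    print(segments_relative_performance_greater_than(0.492308, 0.70, 0.1))  # True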
.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 23.678 seconds)

.. _sphx_glr_download_nlp_auto_checks_model_evaluation_plot_property_segments_performance.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_property_segments_performance.py `

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_property_segments_performance.ipynb `

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_