WeakSegmentsPerformance#

class WeakSegmentsPerformance[source]#

Search for segments with low performance scores.

The check is designed to help you easily identify weak spots of your model and provide a deep-dive analysis of its performance on different segments of your data. Specifically, it is designed to help you identify the model's weakest segments in the data distribution, for further improvement and visibility purposes.

In order to achieve this, the check trains several simple tree-based models that try to predict the error of the user-provided model on the dataset. The relevant segments are detected by analyzing the leaves of the trained trees.

Parameters

columns : Union[Hashable, List[Hashable]] , default: None

Columns to check, if none are given checks all columns except ignored ones.

ignore_columns : Union[Hashable, List[Hashable]] , default: None

Columns to ignore; if none are given, checks based on the columns variable.

n_top_features : int , default: 5

Number of features to use for segment search. Top columns are selected based on feature importance.

segment_minimum_size_ratio : float , default: 0.05

Minimum size ratio for segments. Will only search for segments of size >= segment_minimum_size_ratio * data_size.

alternative_scorer : Tuple[str, Union[str, Callable]] , default: None

Scorer to use as performance measure, either function or sklearn scorer name. If None, a default scorer (per the model type) will be used.

loss_per_sample : Union[np.array, pd.Series, None] , default: None

Loss per sample used to detect relevant weak segments. If a pd.Series, the indexes should match those in the provided dataset object; if an np.array, the order should follow the index order of the dataset object; if None, the check calculates the loss per sample via log loss for classification and MSE for regression.
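As an illustration of the default behavior described above (a sketch of the idea, not deepchecks' internal code), per-sample losses can be computed directly with NumPy:

```python
import numpy as np

# Classification: per-sample log loss is -log of the probability
# the model assigned to the true class.
y_true = np.array([0, 1, 1, 0])
proba = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4],
                  [0.7, 0.3]])
eps = 1e-15  # clip to avoid log(0)
p_true = np.clip(proba[np.arange(len(y_true)), y_true], eps, 1 - eps)
log_loss_per_sample = -np.log(p_true)

# Regression: per-sample squared error.
y_true_reg = np.array([3.0, 5.0, 2.0])
y_pred_reg = np.array([2.5, 5.5, 2.0])
mse_per_sample = (y_true_reg - y_pred_reg) ** 2
```

Samples where the model is confidently right get a low loss, while poorly predicted samples get a high loss, which is what the segment search keys on.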

n_samples : int , default: 10_000

Number of samples to use for this check.

n_to_show : int , default: 3

Number of segments with the weakest performance to show.

categorical_aggregation_threshold : float , default: 0.05

In each categorical column, categories with frequency below the threshold will be merged into an “Other” category.
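A minimal sketch of this merging rule using pandas (deepchecks' own implementation may differ):

```python
import pandas as pd

# 100 values: 'c' (3%) and 'd' (2%) fall below a 0.05 frequency threshold.
s = pd.Series(["a"] * 60 + ["b"] * 35 + ["c"] * 3 + ["d"] * 2)
threshold = 0.05

freq = s.value_counts(normalize=True)
rare = freq[freq < threshold].index          # categories to merge
merged = s.where(~s.isin(rare), "Other")     # replace rare values with "Other"
```

After merging, the column contains only "a", "b", and "Other" (5 values), keeping the segment search from splitting on tiny categories.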

random_state : int , default: 42

Random seed for all check internals.

__init__(columns: Optional[Union[Hashable, List[Hashable]]] = None, ignore_columns: Optional[Union[Hashable, List[Hashable]]] = None, n_top_features: int = 5, segment_minimum_size_ratio: float = 0.05, alternative_scorer: Optional[Dict[str, Callable]] = None, loss_per_sample: Optional[Union[ndarray, Series]] = None, n_samples: int = 10000, categorical_aggregation_threshold: float = 0.05, n_to_show: int = 3, random_state: int = 42, **kwargs)[source]#
__new__(*args, **kwargs)#

Methods

WeakSegmentsPerformance.add_condition(name, ...)

Add new condition function to the check.

WeakSegmentsPerformance.add_condition_segments_relative_performance_greater_than([...])

Add condition - check that the score of the weakest segment is greater than supplied relative threshold.

WeakSegmentsPerformance.clean_conditions()

Remove all conditions from this check instance.

WeakSegmentsPerformance.conditions_decision(result)

Run conditions on given result.

WeakSegmentsPerformance.config([include_version])

Return checks instance config.

WeakSegmentsPerformance.from_config(conf[, ...])

Return check object from a CheckConfig object.

WeakSegmentsPerformance.from_json(conf[, ...])

Deserialize check instance from JSON string.

WeakSegmentsPerformance.metadata([with_doc_link])

Return check metadata.

WeakSegmentsPerformance.name()

Name of class in split camel case.

WeakSegmentsPerformance.params([show_defaults])

Return parameters to show when printing the check.

WeakSegmentsPerformance.remove_condition(index)

Remove given condition by index.

WeakSegmentsPerformance.run(dataset[, ...])

Run check.

WeakSegmentsPerformance.run_logic(context, ...)

Run check.

WeakSegmentsPerformance.to_json([indent])

Serialize check instance to JSON string.

Examples#