ClassPerformance#

class ClassPerformance[source]#

Summarize given metrics on a dataset and model.

Parameters
scorers: Union[Dict[str, Union[Callable, str]], List[Any]], default: None

Scorers to override the default scorers (metrics). See the supported formats at https://docs.deepchecks.com/stable/user-guide/general/metrics_guide.html

n_to_show: int, default: 20

Number of classes to show in the report. If None, show all classes.

show_only: str, default: 'largest'

Specify which classes to show in the report. Can be one of the following:

- 'largest': Show the largest classes.
- 'smallest': Show the smallest classes.
- 'random': Show random classes.
- 'best': Show the classes with the highest score.
- 'worst': Show the classes with the lowest score.

metric_to_show_by: str, default: None

Specify the metric to sort the results by. Relevant only when show_only is 'best' or 'worst'. If None, sort by the first metric in the default metrics list.

class_list_to_show: List[int], default: None

Specify the list of classes to show in the report. If specified, n_to_show, show_only and metric_to_show_by are ignored.

n_samples: Optional[int], default: 10000

Number of samples to use for the check. If None, all samples will be used.
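The display parameters above interact in a specific order: class_list_to_show overrides everything else, n_to_show=None means show all classes, and metric_to_show_by only matters for 'best'/'worst'. A minimal illustrative sketch of this selection logic (not the deepchecks internals; the per_class mapping shape is an assumption for illustration):

```python
import random

def select_classes(per_class, n_to_show=20, show_only="largest",
                   metric_to_show_by=None, class_list_to_show=None):
    """Illustrative sketch of how the display parameters interact.

    per_class maps class id -> {"count": int, "scores": {metric_name: value}}
    (a hypothetical shape, chosen for this example only).
    """
    # class_list_to_show overrides n_to_show, show_only and metric_to_show_by
    if class_list_to_show is not None:
        return sorted(class_list_to_show)
    classes = list(per_class)
    if n_to_show is None:  # None means show all classes
        return sorted(classes)
    if show_only == "largest":
        classes.sort(key=lambda c: per_class[c]["count"], reverse=True)
    elif show_only == "smallest":
        classes.sort(key=lambda c: per_class[c]["count"])
    elif show_only == "random":
        random.shuffle(classes)
    elif show_only in ("best", "worst"):
        # fall back to the first metric when metric_to_show_by is None
        first = next(iter(per_class.values()))["scores"]
        metric = metric_to_show_by or next(iter(first))
        classes.sort(key=lambda c: per_class[c]["scores"][metric],
                     reverse=(show_only == "best"))
    else:
        raise ValueError(f"unsupported show_only: {show_only}")
    return classes[:n_to_show]
```

For example, with three classes of sizes 100, 50 and 10, show_only='largest' with n_to_show=2 keeps the two most populated classes, while show_only='worst' with metric_to_show_by='f1' keeps the classes with the lowest f1 scores.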

__init__(scorers: Optional[Union[Dict[str, Union[Callable, str]], List[Any]]] = None, n_to_show: int = 20, show_only: str = 'largest', metric_to_show_by: Optional[str] = None, class_list_to_show: Optional[List[int]] = None, n_samples: Optional[int] = 10000, **kwargs)[source]#
__new__(*args, **kwargs)#

Methods

ClassPerformance.add_condition(name, ...)

Add new condition function to the check.

ClassPerformance.add_condition_class_performance_imbalance_ratio_less_than([...])

Add condition - the relative ratio difference between the highest-scoring and lowest-scoring classes is less than the given threshold.

ClassPerformance.add_condition_test_performance_greater_than(...)

Add condition - metric scores on the test dataset are greater than the given threshold.

ClassPerformance.add_condition_train_test_relative_degradation_less_than([...])

Add condition - test performance is not degraded by more than the given percentage relative to train.

ClassPerformance.clean_conditions()

Remove all conditions from this check instance.

ClassPerformance.compute(context)

Compute the metric result using the metrics compute method and create display.

ClassPerformance.conditions_decision(result)

Run conditions on given result.

ClassPerformance.config([include_version, ...])

Return check configuration (conditions' configuration not yet supported).

ClassPerformance.from_config(conf[, ...])

Return check object from a CheckConfig object.

ClassPerformance.from_json(conf[, ...])

Deserialize check instance from JSON string.

ClassPerformance.initialize_run(context)

Initialize run by creating the _state member with metrics for train and test.

ClassPerformance.metadata([with_doc_link])

Return check metadata.

ClassPerformance.name()

Name of class in split camel case.

ClassPerformance.params([show_defaults])

Return parameters to show when printing the check.

ClassPerformance.remove_condition(index)

Remove given condition by index.

ClassPerformance.run(train_dataset, test_dataset)

Run check.

ClassPerformance.to_json([indent, ...])

Serialize check instance to JSON string.

ClassPerformance.update(context, batch, ...)

Update the metrics by passing the batch to ignite metric update method.

Examples#
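A minimal usage sketch, assuming the deepchecks vision API and two prepared VisionData objects named train_ds and test_ds (placeholder names, not provided here):

```python
from deepchecks.vision.checks import ClassPerformance

# Show only the 10 worst-performing classes
check = ClassPerformance(n_to_show=10, show_only='worst')
# Flag classes whose test score falls at or below 0.5
check.add_condition_test_performance_greater_than(0.5)

# train_ds / test_ds are assumed to be prepared VisionData objects:
# result = check.run(train_ds, test_ds)
# result.show()
```

The condition is evaluated per class and metric when the check runs, so the result will mark the check as failed if any shown class scores at or below the threshold.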