class ClassPerformance[source]#

Summarize given metrics on a dataset and model.

alternative_metrics: Union[Dict[str, Union[Metric, str]], List[str]], default: None

A dictionary of metrics, where the key is the metric name and the value is an ignite.Metric object whose score should be used, or a list of metric names. If None, the default metrics are used.

n_to_show: int, default: 20

Number of classes to show in the report. If None, show all classes.

show_only: str, default: ‘largest’

Specify which classes to show in the report. Can be one of the following:

- ‘largest’: Show the largest classes.
- ‘smallest’: Show the smallest classes.
- ‘random’: Show random classes.
- ‘best’: Show the classes with the highest score.
- ‘worst’: Show the classes with the lowest score.

metric_to_show_by: str, default: None

Specify the metric to sort the results by. Relevant only when show_only is ‘best’ or ‘worst’. If None, sorts by the first metric in the default metrics list.

class_list_to_show: List[int], default: None

Specify the list of classes to show in the report. If specified, n_to_show, show_only and metric_to_show_by are ignored.
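The interaction between the display parameters above can be sketched in plain Python. This is an illustrative model only, not the library's actual implementation; the helper name `select_classes` and the dictionary-based inputs are hypothetical.

```python
import random

def select_classes(class_counts, class_scores, n_to_show=20,
                   show_only='largest', metric_to_show_by=None,
                   class_list_to_show=None):
    """Sketch of how the display parameters might interact (hypothetical helper).

    class_counts: {class_id: sample count}
    class_scores: {class_id: {metric_name: score}}
    """
    # class_list_to_show overrides n_to_show, show_only and metric_to_show_by
    if class_list_to_show is not None:
        return class_list_to_show

    classes = list(class_counts)
    if show_only == 'largest':
        classes.sort(key=lambda c: class_counts[c], reverse=True)
    elif show_only == 'smallest':
        classes.sort(key=lambda c: class_counts[c])
    elif show_only == 'random':
        random.shuffle(classes)
    elif show_only in ('best', 'worst'):
        # sort by metric_to_show_by; fall back to the first metric if None
        metric = metric_to_show_by or next(iter(next(iter(class_scores.values()))))
        classes.sort(key=lambda c: class_scores[c][metric],
                     reverse=(show_only == 'best'))
    else:
        raise ValueError(f'unsupported show_only value: {show_only}')

    # n_to_show=None means show all classes
    return classes if n_to_show is None else classes[:n_to_show]
```

For example, with three classes of sizes 100, 5 and 50, `show_only='largest'` and `n_to_show=2` would keep the two most populous classes, while passing `class_list_to_show=[1]` would show class 1 regardless of the other arguments.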

__init__(alternative_metrics: Optional[Union[Dict[str, Union[Metric, str]], List[str]]] = None, n_to_show: int = 20, show_only: str = 'largest', metric_to_show_by: Optional[str] = None, class_list_to_show: Optional[List[int]] = None, **kwargs)[source]#
__new__(*args, **kwargs)#


ClassPerformance.add_condition(name, ...)

Add new condition function to the check.


Add condition - relative ratio difference between highest-class and lowest-class is less than threshold.


Add condition - metric scores are greater than the threshold.


Add condition - test performance is not degraded by more than given percentage in train.


Remove all conditions from this check instance.


Compute the metric result using the ignite metrics compute method and create display.


Run conditions on given result.


Return check configuration (conditions' configuration not yet supported).

ClassPerformance.from_config(conf[, ...])

Return check object from a CheckConfig object.

ClassPerformance.from_json(conf[, ...])

Deserialize check instance from JSON string.


Initialize run by creating the _state member with metrics for train and test.


Return check metadata.

Name of class in split camel case.


Return parameters to show when printing the check.


Remove given condition by index.

ClassPerformance.run(train_dataset, test_dataset)

Run check.


Serialize check instance to JSON string.

ClassPerformance.update(context, batch, ...)

Update the metrics by passing the batch to the ignite metric's update method.
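The update/compute split above follows ignite's Metric contract: update is called once per batch to accumulate state, and compute is called once at the end to produce the result. A minimal toy metric illustrating that contract (this is illustrative plain Python, not ignite code; the class name `PerClassAccuracy` is hypothetical):

```python
class PerClassAccuracy:
    """Toy metric following the ignite-style reset/update/compute contract."""

    def __init__(self):
        self.reset()

    def reset(self):
        # accumulated state, cleared at the start of each run
        self._correct = {}
        self._total = {}

    def update(self, output):
        # called once per batch with (predictions, labels)
        preds, labels = output
        for p, y in zip(preds, labels):
            self._total[y] = self._total.get(y, 0) + 1
            if p == y:
                self._correct[y] = self._correct.get(y, 0) + 1

    def compute(self):
        # called once, after all batches have been seen
        return {c: self._correct.get(c, 0) / n for c, n in self._total.items()}
```

Feeding it two batches, `([0, 1, 1], [0, 1, 0])` then `([1], [1])`, yields per-class accuracies of 0.5 for class 0 and 1.0 for class 1.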