RobustnessReport#

class RobustnessReport[source]#

Compare the model's performance on the original dataset and on an augmented dataset.

Parameters
alternative_metrics : Dict[str, Metric], default: None

A dictionary of metrics, where the key is the metric name and the value is an ignite.Metric object whose score should be used. If None is given, the default metrics are used.

augmentations : List, default: None

A list of augmentations to test on the data. If None is given, default augmentations are used. Supported augmentations are those of the albumentations and imgaug packages.
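For illustration, a minimal sketch of constructing the check with custom metrics and augmentations. It assumes the check is importable from deepchecks.vision.checks and that albumentations and pytorch-ignite are installed; the specific transforms and the metric name used here are illustrative choices, not defaults of the check:

    import albumentations as A
    from ignite.metrics import Accuracy
    from deepchecks.vision.checks import RobustnessReport

    # Illustrative augmentations to stress-test the model with (assumed choices).
    augmentations = [
        A.RandomBrightnessContrast(p=1.0),
        A.ShiftScaleRotate(p=1.0),
    ]

    # Pass an ignite metric under a chosen name; with no arguments the
    # check would fall back to its default metrics and augmentations.
    check = RobustnessReport(
        alternative_metrics={'accuracy': Accuracy()},
        augmentations=augmentations,
    )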

__init__(alternative_metrics: Optional[Dict[str, Metric]] = None, augmentations: Optional[List] = None, **kwargs)[source]#
__new__(*args, **kwargs)#

Methods

RobustnessReport.add_condition(name, ...)

Add new condition function to the check.

RobustnessReport.add_condition_degradation_not_greater_than([ratio])

Add condition which validates that the augmentations do not degrade the model metrics by more than the given ratio.

RobustnessReport.clean_conditions()

Remove all conditions from this check instance.

RobustnessReport.compute(context, dataset_kind)

Run check.

RobustnessReport.conditions_decision(result)

Run conditions on given result.

RobustnessReport.config()

Return check configuration (conditions' configuration not yet supported).

RobustnessReport.from_config(conf)

Return check object from a CheckConfig object.

RobustnessReport.initialize_run(context, ...)

Initialize the metrics for the check, and validate that the task type is relevant.

RobustnessReport.metadata([with_doc_link])

Return check metadata.

RobustnessReport.name()

Name of class in split camel case.

RobustnessReport.params([show_defaults])

Return parameters to show when printing the check.

RobustnessReport.remove_condition(index)

Remove given condition by index.

RobustnessReport.run(dataset[, model, ...])

Run check.

RobustnessReport.update(context, batch, ...)

Accumulate batch data into the metrics.

Examples#
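A minimal end-to-end sketch of running the check and attaching the degradation condition. The MNIST helpers used below (load_dataset and load_model from deepchecks.vision.datasets.classification.mnist) are an assumption about the installed deepchecks version; any VisionData object together with a compatible model would work the same way:

    from deepchecks.vision.checks import RobustnessReport
    from deepchecks.vision.datasets.classification.mnist import load_dataset, load_model

    # Assumed convenience loaders for a VisionData object and a fitted model.
    train_ds = load_dataset(train=True, object_type='VisionData')
    model = load_model()

    # Fail the check if any augmentation degrades a metric by more than 5%.
    check = RobustnessReport().add_condition_degradation_not_greater_than(0.05)
    result = check.run(train_ds, model)
    result.show()

The condition method returns the check itself, so it can be chained onto the constructor as shown before calling run.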