SingleDatasetPerformance

class SingleDatasetPerformance

Summarize given model performance on a dataset based on selected scorers.

Parameters
scorers : Union[List[str], Dict[str, Union[str, Callable]]], default: None

List of scorers to use. If None, use default scorers. Scorers can be supplied as a list of scorer names or as a dictionary of names and functions.

__init__(scorers: Optional[Union[List[str], Dict[str, Union[str, Callable]]]] = None, **kwargs)
__new__(*args, **kwargs)
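
A brief construction sketch for the scorers parameter, assuming the tabular variant of the check; the scorer names and the sklearn-based custom scorer below are illustrative assumptions, not values taken from this reference:

    from sklearn.metrics import make_scorer, f1_score

    from deepchecks.tabular.checks import SingleDatasetPerformance

    # Default scorers (scorers=None).
    check = SingleDatasetPerformance()

    # Scorers supplied as a list of scorer names.
    check = SingleDatasetPerformance(scorers=['accuracy', 'recall_macro'])

    # Scorers supplied as a {name: scorer} dict, mixing a scorer name and a callable.
    check = SingleDatasetPerformance(scorers={
        'accuracy': 'accuracy',
        'f1_macro': make_scorer(f1_score, average='macro'),
    })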

Methods

SingleDatasetPerformance.add_condition(name, ...)

Add new condition function to the check.

SingleDatasetPerformance.add_condition_greater_than(...)

Add condition - the selected metrics' scores are greater than the threshold (see Examples below).

SingleDatasetPerformance.clean_conditions()

Remove all conditions from this check instance.

SingleDatasetPerformance.conditions_decision(result)

Run conditions on given result.

SingleDatasetPerformance.config()

Return check configuration (conditions' configuration not yet supported).

SingleDatasetPerformance.from_config(conf)

Return check object from a CheckConfig object.

SingleDatasetPerformance.metadata([...])

Return check metadata.

SingleDatasetPerformance.name()

Name of class in split camel case.

SingleDatasetPerformance.params([show_defaults])

Return parameters to show when printing the check.

SingleDatasetPerformance.reduce_output(...)

Return the values of the metrics for the dataset provided in a {metric: value} format.

SingleDatasetPerformance.remove_condition(index)

Remove given condition by index.

SingleDatasetPerformance.run(dataset[, ...])

Run check.

SingleDatasetPerformance.run_logic(context, ...)

Run check.

Examples
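
A minimal end-to-end sketch, assuming the tabular variant of the check; the iris data, the RandomForestClassifier model, and the 0.8 threshold are illustrative assumptions rather than part of this reference:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    from deepchecks.tabular import Dataset
    from deepchecks.tabular.checks import SingleDatasetPerformance

    # Wrap a labeled DataFrame as a deepchecks Dataset and fit a simple model.
    iris = load_iris(as_frame=True)
    dataset = Dataset(iris.frame, label='target', cat_features=[])
    model = RandomForestClassifier(random_state=0).fit(
        dataset.data[dataset.features], dataset.data[dataset.label_name]
    )

    # Run the check with an explicit scorer and a condition on its score.
    check = SingleDatasetPerformance(scorers=['accuracy'])
    check.add_condition_greater_than(0.8)  # pass only if every selected score > 0.8
    result = check.run(dataset, model)

    print(result.value)                       # per-scorer scores computed by the check
    print(check.conditions_decision(result))  # evaluate the condition added above
    print(check.reduce_output(result))        # the same scores as a {metric: value} dict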