PerformanceReport#

class PerformanceReport[source]#

Summarize given scores on a dataset and model.

Parameters
alternative_scorers : Dict[str, Callable], default: None

An optional dictionary mapping scorer names to scorer functions. If none is given, the default scorers are used.

Notes

Scorers are an sklearn convention for evaluating a model. See the sklearn scorers documentation. A scorer is a function that accepts (model, X, y_true) and returns a float score. For every scorer, higher scores are better than lower scores.

You can create a scorer out of existing sklearn metrics:

from sklearn.metrics import roc_auc_score, make_scorer

training_labels = [1, 2, 3]
# needs_proba=True makes the scorer use the model's predict_proba output, which
# roc_auc_score needs for multi-class evaluation (on newer scikit-learn versions,
# use response_method='predict_proba' instead).
# The labels parameter is required for multi-class classification in metrics such as
# roc_auc_score or log_loss that use predict_proba, in case not all labels are
# present in the test set.
auc_scorer = make_scorer(roc_auc_score, labels=training_labels, multi_class='ovr',
                         needs_proba=True)

Or you can implement your own:

import numpy as np
from sklearn.metrics import make_scorer

def my_mse(y_true, y_pred):
    # Return a single float: the mean squared error.
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

# Mark greater_is_better=False, since scorers are always supposed to return
# a value to maximize (the scorer will negate the result accordingly).
my_mse_scorer = make_scorer(my_mse, greater_is_better=False)
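
Scorers built this way could then be supplied to the check through the alternative_scorers parameter. A minimal sketch (the import path below is one common location for the check and may differ between deepchecks versions; in practice you would pick scorers matching your task type, both are shown only for illustration):

from deepchecks.tabular.checks import PerformanceReport  # import path may vary by deepchecks version

check = PerformanceReport(alternative_scorers={
    'AUC (ovr)': auc_scorer,
    'Neg MSE': my_mse_scorer,
})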
__init__(alternative_scorers: Optional[Dict[str, Callable]] = None, **kwargs)[source]#
__new__(*args, **kwargs)#

Methods

PerformanceReport.add_condition(name, ...)

Add new condition function to the check.

PerformanceReport.add_condition_class_performance_imbalance_ratio_not_greater_than([...])

Add condition - the ratio between the highest and lowest class performance is not greater than the given threshold.

PerformanceReport.add_condition_test_performance_not_less_than(...)

Add condition - metric scores are not less than given score.

PerformanceReport.add_condition_train_test_relative_degradation_not_greater_than([...])

Add condition that checks that test performance is not degraded by more than the given percentage relative to train performance.

PerformanceReport.clean_conditions()

Remove all conditions from this check instance.

PerformanceReport.conditions_decision(result)

Run conditions on given result.

PerformanceReport.finalize_check_result(...)

Finalize the check result by adding the check instance and processing the conditions.

PerformanceReport.metadata([with_doc_link])

Return check metadata.

PerformanceReport.name()

Name of class in split camel case.

PerformanceReport.params([show_defaults])

Return parameters to show when printing the check.

PerformanceReport.remove_condition(index)

Remove given condition by index.

PerformanceReport.run(train_dataset, ...[, ...])

Run check.

PerformanceReport.run_logic(context)

Run check.

Examples#
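
A minimal end-to-end sketch, assuming a scikit-learn model and the deepchecks tabular Dataset wrapper (import paths and the exact run() signature may differ between deepchecks versions):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from deepchecks.tabular import Dataset            # import path may vary by deepchecks version
from deepchecks.tabular.checks import PerformanceReport

# Prepare train/test data and a fitted model.
df = load_iris(as_frame=True).frame
train_df, test_df = train_test_split(df, test_size=0.3, random_state=42)
feature_cols = [col for col in df.columns if col != 'target']
model = RandomForestClassifier(random_state=42).fit(train_df[feature_cols], train_df['target'])

# Wrap the dataframes so deepchecks knows which column is the label.
train_ds = Dataset(train_df, label='target')
test_ds = Dataset(test_df, label='target')

# Run the check with a condition on the minimal acceptable test score.
check = PerformanceReport().add_condition_test_performance_not_less_than(0.7)
result = check.run(train_ds, test_ds, model)
result.show()  # displays the report; result.value holds the raw scores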