TrainTestPerformance#

class TrainTestPerformance[source]#

Summarize given model performance on the train and test datasets based on selected scorers.

Parameters
scorers: Union[List[str], Dict[str, Union[str, Callable]]], default: None

List of scorers to use. If None, default scorers are used. Scorers can be supplied as a list of scorer names or as a dictionary mapping display names to scorer names or callables (see the sketch after this parameter list).

reduce: Union[Callable, str], default: 'mean'

An optional argument used only by the reduce_output function when per-class scorers are used.
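A hedged sketch of how these parameters can be passed (the scorer names, display names, and import path below are illustrative assumptions, not part of this reference):

from deepchecks.tabular.checks import TrainTestPerformance

# scorers as a plain list of sklearn scorer names
check = TrainTestPerformance(scorers=['accuracy', 'recall_macro'])

# or as a dict mapping display names to scorer names / callables;
# reduce controls how reduce_output aggregates per-class scores
check = TrainTestPerformance(
    scorers={'Accuracy': 'accuracy', 'Recall (macro)': 'recall_macro'},
    reduce='mean',
)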

Notes

Scorers are an sklearn convention for evaluating a model; see the sklearn scorers documentation. A scorer is a function that accepts (model, X, y_true) and returns a float score. For every scorer, higher scores are better than lower scores.

You can create a scorer out of existing sklearn metrics:

from sklearn.metrics import roc_auc_score, make_scorer

training_labels = [1, 2, 3]
# The labels parameter is required for multi-class classification in metrics like roc_auc_score or
# log_loss that use the model's predict_proba, in case not all labels are present in the test set.
# needs_proba=True makes make_scorer pass probability estimates (rather than predictions) to the metric
# (on newer scikit-learn versions use response_method='predict_proba' instead).
auc_scorer = make_scorer(roc_auc_score, labels=training_labels, multi_class='ovr', needs_proba=True)

Or you can implement your own:

import numpy as np
from sklearn.metrics import make_scorer

def my_mse(y_true, y_pred):
    # A scorer metric must return a single float, so average the squared errors.
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

# Mark greater_is_better=False, since scorers are always supposed to return
# a value to maximize (make_scorer will negate the MSE accordingly).
my_mse_scorer = make_scorer(my_mse, greater_is_better=False)
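Either scorer can then be handed to the check through the scorers dictionary; a minimal sketch (the display name is arbitrary, and you would use auc_scorer for a classification task or my_mse_scorer for a regression task, not both):

from deepchecks.tabular.checks import TrainTestPerformance

check = TrainTestPerformance(scorers={'Neg MSE': my_mse_scorer})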
__init__(scorers: Optional[Union[List[str], Dict[str, Union[str, Callable]]]] = None, reduce: Union[Callable, str] = 'mean', **kwargs)[source]#
__new__(*args, **kwargs)#

Methods

TrainTestPerformance.add_condition(name, ...)

Add new condition function to the check.

TrainTestPerformance.add_condition_class_performance_imbalance_ratio_less_than([...])

Add condition - relative ratio difference between highest-class and lowest-class scores is less than the threshold.

TrainTestPerformance.add_condition_test_performance_greater_than(...)

Add condition - metric scores are greater than the threshold.

TrainTestPerformance.add_condition_train_test_relative_degradation_less_than([...])

Add condition - test performance is not degraded by more than the given percentage relative to train.

TrainTestPerformance.clean_conditions()

Remove all conditions from this check instance.

TrainTestPerformance.conditions_decision(result)

Run conditions on given result.

TrainTestPerformance.config()

Return check configuration (conditions' configuration not yet supported).

TrainTestPerformance.from_config(conf)

Return check object from a CheckConfig object.

TrainTestPerformance.metadata([with_doc_link])

Return check metadata.

TrainTestPerformance.name()

Name of class in split camel case.

TrainTestPerformance.params([show_defaults])

Return parameters to show when printing the check.

TrainTestPerformance.reduce_output(check_result)

Return the values of the metrics for the test dataset in {metric: value} format.

TrainTestPerformance.remove_condition(index)

Remove given condition by index.

TrainTestPerformance.run(train_dataset, ...)

Run check.

TrainTestPerformance.run_logic(context)

Run check.
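Putting the pieces together, a hedged end-to-end sketch of the typical flow (train_ds, test_ds, and model stand in for deepchecks Dataset objects and a fitted model, and are assumptions of this example):

from deepchecks.tabular.checks import TrainTestPerformance

check = TrainTestPerformance(scorers=['accuracy', 'f1_macro'])
# fail if any test score drops by more than 10% relative to its train score
check.add_condition_train_test_relative_degradation_less_than(0.1)
result = check.run(train_ds, test_ds, model)
result.show()                        # display the full report
print(check.reduce_output(result))   # {metric: value} for the test dataset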

Examples#