TrainTestPerformance

class TrainTestPerformance

Summarize the given model's performance on the train and test datasets, based on the selected scorers.

Parameters
scorers: Union[Mapping[str, Union[str, Callable]], List[str]], default: None

Scorers to override the default scorers. See the supported formats in the metrics guide at https://docs.deepchecks.com/stable/user-guide/general/metrics_guide.html

n_samples: int, default: 1_000_000

Number of samples to use for this check.

random_state: int, default: 42

Random seed for all check internals.
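A minimal construction sketch (a hedged example: the scorer strings are standard sklearn scorer names chosen for illustration, and the import path assumes the tabular variant of the check):

from deepchecks.tabular.checks import TrainTestPerformance

# Any scorer format from the metrics guide linked above is accepted.
check = TrainTestPerformance(scorers=['accuracy', 'f1_macro'], n_samples=100_000)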

Notes

Scorers are a convention of sklearn for evaluating a model. See the sklearn scorers documentation at https://scikit-learn.org/stable/modules/model_evaluation.html. A scorer is a function that accepts (model, X, y_true) and returns a float score. For every scorer, higher scores are better than lower scores.

You can create a scorer out of existing sklearn metrics:

from sklearn.metrics import roc_auc_score, make_scorer

training_labels = [1, 2, 3]
# roc_auc_score is computed from class probabilities, so needs_proba=True makes
# the scorer call the model's predict_proba (newer scikit-learn versions spell
# this response_method='predict_proba'). The labels parameter is required for
# multi-class classification in metrics like roc_auc_score or log_loss that use
# predict_proba, in case not all labels are present in the test set.
auc_scorer = make_scorer(roc_auc_score, labels=training_labels,
                         multi_class='ovr', needs_proba=True)

Or you can implement your own:

import numpy as np
from sklearn.metrics import make_scorer

def my_mse(y_true, y_pred):
    # A scorer metric must return a single float, so average the squared errors.
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

# Mark greater_is_better=False, since scorers are always supposed to return a
# value to maximize; make_scorer negates the MSE so that higher is better.
my_mse_scorer = make_scorer(my_mse, greater_is_better=False)
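Either scorer can then be passed to the check as a name-to-scorer mapping (a minimal sketch; the key names here are arbitrary labels, not fixed API names):

check = TrainTestPerformance(scorers={'auc': auc_scorer, 'mse': my_mse_scorer})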
__init__(scorers: Optional[Union[Mapping[str, Union[str, Callable]], List[str]]] = None, n_samples: int = 1000000, random_state: int = 42, **kwargs)
__new__(*args, **kwargs)

Methods

TrainTestPerformance.add_condition(name, ...)

Add new condition function to the check.

TrainTestPerformance.add_condition_class_performance_imbalance_ratio_less_than(score)

Add condition - the relative ratio difference between the highest-scoring and lowest-scoring classes is less than the given threshold.

TrainTestPerformance.add_condition_test_performance_greater_than(...)

Add condition - metric scores are greater than the threshold.

TrainTestPerformance.add_condition_train_test_relative_degradation_less_than([...])

Add condition - test performance is not degraded by more than the given percentage relative to train performance (see the sketch after this methods list).

TrainTestPerformance.clean_conditions()

Remove all conditions from this check instance.

TrainTestPerformance.conditions_decision(result)

Run conditions on given result.

TrainTestPerformance.config([...])

Return check configuration.

TrainTestPerformance.from_config(conf[, ...])

Return check object from a CheckConfig object.

TrainTestPerformance.from_json(conf[, ...])

Deserialize check instance from JSON string.

TrainTestPerformance.metadata([with_doc_link])

Return check metadata.

TrainTestPerformance.name()

Name of class in split camel case.

TrainTestPerformance.params([show_defaults])

Return parameters to show when printing the check.

TrainTestPerformance.remove_condition(index)

Remove given condition by index.

TrainTestPerformance.run(train_dataset, ...)

Run check.

TrainTestPerformance.run_logic(context)

Run check.

TrainTestPerformance.to_json([indent, ...])

Serialize check instance to JSON string.
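As referenced in the condition entries above, a brief sketch of condition management (the thresholds are illustrative, not recommended values):

check = TrainTestPerformance()
check.add_condition_test_performance_greater_than(0.8)
check.add_condition_train_test_relative_degradation_less_than(0.1)
# Conditions can also be removed by index or cleared entirely:
check.remove_condition(0)
check.clean_conditions()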

Examples
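A hedged end-to-end sketch, assuming deepchecks' tabular Dataset API and an sklearn classifier; the toy data and column names are invented for illustration:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from deepchecks.tabular import Dataset
from deepchecks.tabular.checks import TrainTestPerformance

train_df = pd.DataFrame({'feat': [1, 2, 3, 4, 5, 6], 'target': [0, 1, 0, 1, 0, 1]})
test_df = pd.DataFrame({'feat': [7, 8, 9, 10], 'target': [1, 0, 1, 0]})
train_ds = Dataset(train_df, label='target')
test_ds = Dataset(test_df, label='target')

model = RandomForestClassifier(random_state=42)
model.fit(train_df[['feat']], train_df['target'])

check = TrainTestPerformance(scorers=['accuracy'])
result = check.run(train_ds, test_ds, model)
result.show()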