model_evaluation
This module contains checks for model evaluation.
Classes
- Check for overfit caused by using too many iterations in a gradient boosted model.
- Calculate the calibration curve with Brier score for each class.
- Calculate the confusion matrix of the model on the given dataset.
- Measure the model's average inference time (in seconds) per sample.
- Summarize the given model's parameters.
- Summarize performance scores for multiple models on test datasets.
- Summarize the given model's performance on the train and test datasets based on selected scorers.
- Check for systematic error and abnormal shape in the regression error distribution.
- Check the regression systematic error.
- Calculate the ROC curve for each class.
- Display a performance score segmented by the top 2 (or given) features in a heatmap.
- Compare the given model's score to a simple model's score (according to the given model type).
- Calculate prediction drift between the train dataset and test dataset, using statistical measures.
- Search for segments with low performance scores.
- Detect features that are nearly unused by the model.
- Summarize the given model's performance on the train and test datasets based on selected scorers.
- Check for performance differences between subgroups of a feature, optionally accounting for a control variable.
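The class names behind these summaries are not shown on this page. The descriptions match the check summaries of the deepchecks tabular model_evaluation module, so the minimal sketch below assumes that API; the specific class names (ConfusionMatrixReport, TrainTestPredictionDrift), the Dataset wrapper, and the run() signatures are assumptions rather than something stated above.

```python
# A minimal sketch of running checks like the ones listed above, assuming the
# deepchecks tabular API. Class and parameter names here are assumptions,
# not taken from this page.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from deepchecks.tabular import Dataset
from deepchecks.tabular.checks import ConfusionMatrixReport, TrainTestPredictionDrift

# Prepare data and a fitted model.
df = load_iris(as_frame=True).frame.rename(columns={"target": "label"})
train_df, test_df = train_test_split(df, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(train_df.drop(columns="label"), train_df["label"])

# Wrap the dataframes so the checks know which column holds the label.
train_ds = Dataset(train_df, label="label")
test_ds = Dataset(test_df, label="label")

# Single-dataset check: confusion matrix of the model on the test data.
result = ConfusionMatrixReport().run(test_ds, model)
result.show()  # or result.save_as_html(...) outside a notebook

# Train/test check: prediction drift between the two datasets.
drift_result = TrainTestPredictionDrift().run(train_ds, test_ds, model)
drift_result.show()
```

Each check follows the same pattern: construct the check (optionally with thresholds or scorers), call run() with the dataset(s) and model it needs, and inspect the returned result object.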