Simple Model Comparison#

This notebook provides an overview of using and understanding the simple model comparison check.


What Is the Purpose of the Check?#

This check compares your current model to a “simple model”, which is a model designed to produce the best performance achievable using very simple rules, such as “always predict the most common class”. The simple model serves as a baseline; if your model achieves a score similar to or lower than the simple model’s, this indicates a possible problem with the model (e.g. it wasn’t trained properly).

Using the strategy parameter, you can select the simple model used in the check:

prior (default): The probability vector always contains the empirical class prior distribution (i.e. the class distribution observed in the training set).

most_frequent: The most frequent class is always predicted. The probability vector is 1 for the most frequent class and 0 for all other classes.

stratified: Predictions are one-hot vectors sampled from a multinomial distribution parametrized by the empirical class prior probabilities.

uniform: Predictions are drawn uniformly at random from the unique classes observed in y, i.e. each class has equal probability.

Similar to the tabular simple model comparison check, no single simple model is more “correct” to use than the others; each provides a different baseline to compare against, and you may experiment with the different strategies to see how they perform on your data.

This check applies only to classification datasets.
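The four strategies above can be sketched in plain NumPy. Note that simple_model_probas is a hypothetical helper written for illustration, not part of the deepchecks API:

```python
import numpy as np

def simple_model_probas(y_train, n_samples, strategy, seed=0):
    """Probability vectors each baseline strategy would emit (a sketch)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(np.asarray(y_train), return_counts=True)
    prior = counts / counts.sum()  # empirical class prior distribution
    n_classes = len(classes)
    if strategy == 'prior':
        # every sample gets the empirical class distribution
        return np.tile(prior, (n_samples, 1))
    if strategy == 'most_frequent':
        # one-hot vector on the most common class
        probas = np.zeros((n_samples, n_classes))
        probas[:, np.argmax(counts)] = 1.0
        return probas
    if strategy == 'stratified':
        # one-hot vectors sampled from a multinomial over the prior
        return rng.multinomial(1, prior, size=n_samples).astype(float)
    if strategy == 'uniform':
        # every class equally likely
        return np.full((n_samples, n_classes), 1.0 / n_classes)
    raise ValueError(f'unknown strategy: {strategy}')

y = [0, 0, 0, 1, 2]  # toy labels: class 0 is most frequent
print(simple_model_probas(y, 2, 'prior'))          # rows of [0.6, 0.2, 0.2]
print(simple_model_probas(y, 2, 'most_frequent'))  # rows of [1.0, 0.0, 0.0]
```

This also makes clear why stratified and uniform baselines are noisier than prior: they randomize per sample rather than always emitting the same vector.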

Generate data and model#

from deepchecks.vision.checks import SimpleModelComparison
from deepchecks.vision.datasets.classification import mnist
mnist_model = mnist.load_model()
train_ds = mnist.load_dataset(train=True, object_type='VisionData')
test_ds = mnist.load_dataset(train=False, object_type='VisionData')

Run the check#

We will run the check with the stratified strategy. The check uses the default classification metrics, precision and recall; these can be overridden by providing alternative scorers via the alternative_metrics parameter.

check = SimpleModelComparison(strategy='stratified')
result = check.run(train_ds, test_ds, mnist_model)
result
Simple Model Comparison


If you have a GPU, you can speed up this check by passing it to .run() via the device argument, as device=<your GPU>.

To display the results in an IDE like PyCharm, you can use the following code:

#  result.show_in_window()

The result will be displayed in a new window.

Observe the check’s output#

We can see in the results that the check calculates the score for each class in the dataset, and compares the scores between our model and the simple model.

In addition to the graphic output, the check also returns a value which includes all of the information that is needed for defining the conditions for validation.

The value is a dataframe that contains the metrics’ values for each class and dataset:

result.value.sort_values(by=['Class', 'Metric']).head(10)
    Model          Metric  Class  Class Name  Number of samples     Value
5   Simple Model   F1          0           0                980  0.095626
10  Perfect Model  F1          0           0                980  1.000000
24  Given Model    F1          0           0                980  0.985844
0   Simple Model   F1          1           1               1135  0.109215
11  Perfect Model  F1          1           1               1135  1.000000
20  Given Model    F1          1           1               1135  0.993843
4   Simple Model   F1          2           2               1032  0.101266
12  Perfect Model  F1          2           2               1032  1.000000
26  Given Model    F1          2           2               1032  0.982237
3   Simple Model   F1          3           3               1010  0.105209
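Because the value is a plain DataFrame, you can post-process it yourself, for instance computing the per-class gain that the condition defined later in this guide checks. A minimal sketch using a hand-built frame shaped like result.value (the scores are the illustrative first rows printed above):

```python
import pandas as pd

# Toy frame mimicking result.value; values are illustrative
df = pd.DataFrame({
    'Model': ['Simple Model', 'Perfect Model', 'Given Model'] * 2,
    'Metric': ['F1'] * 6,
    'Class': [0, 0, 0, 1, 1, 1],
    'Value': [0.095626, 1.0, 0.985844, 0.109215, 1.0, 0.993843],
})

# One row per class, one column per model
pivot = df.pivot(index='Class', columns='Model', values='Value')

# Per-class gain: share of the gap to the perfect score the model closes
gain = ((pivot['Given Model'] - pivot['Simple Model'])
        / (pivot['Perfect Model'] - pivot['Simple Model']))
print(gain)
```

The same pivot-and-divide pattern applies directly to result.value from a real run, grouping additionally by Metric if more than one metric is present.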


Define a condition#

We can add a condition to the check that validates that our model is better than the simple model by a given margin, called gain. For classification, the gain is checked for each class separately; if any class does not reach the defined gain, the condition fails.

The performance gain is the fraction of the “remaining”, unattained performance that the improvement closes. Its purpose is to reflect the significance of the improvement. Take, for example, a metric between 0 and 1: a change of only 0.03 that takes us from 0.95 to 0.98 is highly significant (especially in an imbalanced scenario), but improving from 0.1 to 0.13 is not a great achievement.

The gain is calculated as:

gain = \frac{\text{model score} - \text{simple score}}{\text{perfect score} - \text{simple score}}
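Working through the two examples above with this formula (performance_gain is a hypothetical helper written for illustration):

```python
def performance_gain(model_score, simple_score, perfect_score=1.0):
    """Fraction of the remaining (unattained) performance that was closed."""
    return (model_score - simple_score) / (perfect_score - simple_score)

# 0.95 -> 0.98 closes 60% of the remaining 0.05 gap: a significant gain
print(round(performance_gain(0.98, 0.95), 2))  # 0.6

# 0.10 -> 0.13 closes only ~3% of the remaining 0.90 gap
print(round(performance_gain(0.13, 0.10), 2))  # 0.03
```

The same absolute improvement of 0.03 thus yields very different gains, which is exactly what the metric is designed to capture.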

Let’s add a condition to the check and see what happens when it fails:

check = SimpleModelComparison(strategy='stratified')
check.add_condition_gain_greater_than(min_allowed_gain=0.99)
result = check.run(train_ds, test_ds, mnist_model)
result
Simple Model Comparison


We can see that for several classes the gain did not reach the target gain we defined, so the condition failed.

Total running time of the script: (0 minutes 3.651 seconds)
