SimpleModelComparison#

class SimpleModelComparison[source]#

Compare given model score to simple model score (according to given model type).

For classification models, the simple model is a dummy classifier that selects its predictions according to the chosen strategy.

Parameters
strategy : str, default: ‘most_frequent’

Strategy to use to generate the predictions of the simple model (see the illustrative sketch following this parameter list).

  • ‘most_frequent’ : The most frequent label in the training set is always predicted. The probability vector is 1 for the most frequent label and 0 for the other classes.

  • ‘prior’ : The probability vector always contains the empirical class prior distribution (i.e. the class distribution observed in the training set).

  • ‘stratified’ : The predictions are generated by sampling one-hot vectors from a multinomial distribution parametrized by the empirical class prior probabilities.

  • ‘uniform’ : Predictions are generated uniformly at random from the list of unique classes observed in y, i.e. each class has equal probability.

alternative_metrics : Dict[str, Metric], default: None

A dictionary of metrics, where the key is the metric name and the value is an ignite.Metric object whose score should be used. If None, the default metrics are used.

n_to_show : int, default: 20

Number of classes to show in the report. If None, show all classes.

show_only : str, default: ‘largest’

Specify which classes to show in the report. Can be one of the following:

  • ‘largest’ : Show the largest classes.

  • ‘smallest’ : Show the smallest classes.

  • ‘random’ : Show random classes.

  • ‘best’ : Show the classes with the highest score.

  • ‘worst’ : Show the classes with the lowest score.

metric_to_show_by : str, default: None

Specify the metric to sort the results by. Relevant only when show_only is ‘best’ or ‘worst’. If None, sort by the first metric in the default metrics list.

class_list_to_show : List[int], default: None

Specify the list of classes to show in the report. If specified, n_to_show, show_only and metric_to_show_by are ignored.
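
The strategies above are the standard naive-baseline strategies. As a rough illustration only (not necessarily how the check builds its simple model internally), scikit-learn's DummyClassifier exposes the same strategy names and prediction behaviour:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

# Toy imbalanced label distribution: class 1 is the most frequent.
X = np.zeros((6, 1))               # features are ignored by a dummy classifier
y = np.array([0, 1, 1, 1, 2, 2])

for strategy in ('most_frequent', 'prior', 'stratified', 'uniform'):
    dummy = DummyClassifier(strategy=strategy, random_state=0)
    dummy.fit(X, y)
    # 'most_frequent' -> always class 1, one-hot probability vector;
    # 'prior'         -> always class 1, probabilities equal the class priors;
    # 'stratified'    -> random draws following the class priors;
    # 'uniform'       -> random draws with equal probability per class.
    print(strategy, dummy.predict(X), dummy.predict_proba(X)[0])
```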

__init__(scorers: Optional[Union[Dict[str, Union[Metric, Callable, str]], List[Any]]] = None, strategy: str = 'most_frequent', alternative_metrics=None, n_to_show: int = 20, show_only: str = 'largest', metric_to_show_by: Optional[str] = None, class_list_to_show: Optional[List[int]] = None, **kwargs)[source]#
__new__(*args, **kwargs)#
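
A minimal construction sketch, assuming the ignite-based (vision) variant of this check; the import path and exact parameter set may vary between deepchecks versions, and the metric object below is only an example:

```python
from ignite.metrics import Precision
from deepchecks.vision.checks import SimpleModelComparison  # path may differ by version

check = SimpleModelComparison(
    strategy='stratified',                            # how the simple model predicts
    alternative_metrics={'precision': Precision()},   # ignite.Metric objects keyed by name
    n_to_show=10,                                     # limit the per-class report...
    show_only='worst',                                # ...to the lowest-scoring classes
    metric_to_show_by='precision',                    # sort classes by this metric
)
```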

Methods

SimpleModelComparison.add_condition(name, ...)

Add new condition function to the check.

SimpleModelComparison.add_condition_gain_greater_than([...])

Add condition - require gain between the model and the simple model to be greater than threshold.
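
For instance, a sketch of attaching this condition (the 0.1 threshold is purely illustrative; deepchecks add_condition_* methods conventionally return the check instance, so the call can be chained):

```python
# Require the model to gain at least 10% over the simple model.
check = SimpleModelComparison().add_condition_gain_greater_than(0.1)
```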

SimpleModelComparison.clean_conditions()

Remove all conditions from this check instance.

SimpleModelComparison.compute(context)

Compute the metrics for the check.

SimpleModelComparison.conditions_decision(result)

Run conditions on given result.

SimpleModelComparison.config([...])

Return check configuration (conditions' configuration not yet supported).

SimpleModelComparison.from_config(conf[, ...])

Return check object from a CheckConfig object.

SimpleModelComparison.from_json(conf[, ...])

Deserialize check instance from JSON string.

SimpleModelComparison.initialize_run(context)

Initialize the metrics for the check, and validate task type is relevant.

SimpleModelComparison.metadata([with_doc_link])

Return check metadata.

SimpleModelComparison.name()

Name of class in split camel case.

SimpleModelComparison.params([show_defaults])

Return parameters to show when printing the check.

SimpleModelComparison.remove_condition(index)

Remove given condition by index.

SimpleModelComparison.run(train_dataset, ...)

Run check.

SimpleModelComparison.to_json([indent, ...])

Serialize check instance to JSON string.

SimpleModelComparison.update(context, batch, ...)

Update the metrics for the check.

Examples#
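
A minimal end-to-end sketch, assuming a deepchecks-vision style workflow; train_ds, test_ds and model are placeholders for your dataset objects and trained model, and the exact run signature may differ by version.

```python
from deepchecks.vision.checks import SimpleModelComparison  # path may differ by version

check = SimpleModelComparison(strategy='most_frequent')
check.add_condition_gain_greater_than(0.1)

# train_ds / test_ds / model are assumed to exist in your environment.
result = check.run(train_ds, test_ds, model)
result.show()        # render the per-class comparison report (e.g. in a notebook)
print(result.value)  # raw scores computed by the check
```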