MultiModelPerformanceReport#

class MultiModelPerformanceReport[source]#

Summarize performance scores for multiple models on test datasets.

Parameters
alternative_scorers : Dict[str, Callable], default: None

An optional dictionary mapping scorer names to scorer functions. If none is given, the default scorers are used.

n_samples : int, default: 1_000_000

Number of samples to use for this check.

random_state : int, default: 42

Random seed for all check internals.

__init__(alternative_scorers: Optional[Dict[str, Callable]] = None, n_samples: int = 1000000, random_state: int = 42, **kwargs)[source]#
__new__(*args, **kwargs)#
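
A minimal construction sketch, assuming the deepchecks.tabular import path; the scorer name and the scikit-learn scorer below are illustrative choices, not the check's built-in defaults:

from sklearn.metrics import f1_score, make_scorer

from deepchecks.tabular.checks import MultiModelPerformanceReport

# Illustrative custom scorer dictionary: maps a display name to a callable.
custom_scorers = {"F1 Macro": make_scorer(f1_score, average="macro")}

check = MultiModelPerformanceReport(
    alternative_scorers=custom_scorers,  # overrides the default scorers
    n_samples=100_000,                   # subsample large datasets for speed
    random_state=42,                     # fixed seed for reproducible sampling
)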

Methods

MultiModelPerformanceReport.add_condition(...)

Add new condition function to the check.

MultiModelPerformanceReport.clean_conditions()

Remove all conditions from this check instance.

MultiModelPerformanceReport.conditions_decision(result)

Run conditions on given result.

MultiModelPerformanceReport.config([...])

Return check instance config.

MultiModelPerformanceReport.from_config(conf)

Return check object from a CheckConfig object.

MultiModelPerformanceReport.from_json(conf)

Deserialize check instance from JSON string.

MultiModelPerformanceReport.metadata([...])

Return check metadata.

MultiModelPerformanceReport.name()

Name of class in split camel case.

MultiModelPerformanceReport.params([...])

Return parameters to show when printing the check.

MultiModelPerformanceReport.remove_condition(index)

Remove given condition by index.

MultiModelPerformanceReport.run(...)

Initialize context and pass to check logic.

MultiModelPerformanceReport.run_logic(...)

Run check logic.

MultiModelPerformanceReport.to_json([indent])

Serialize check instance to JSON string.
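
The to_json and from_json methods listed above allow a configured check to be stored and restored as a string. A short round-trip sketch, assuming a `check` instance constructed as shown earlier:

serialized = check.to_json()  # JSON string describing the check and its parameters
restored = MultiModelPerformanceReport.from_json(serialized)  # equivalent check instance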

Examples#
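
A minimal usage sketch, assuming run accepts a train Dataset, a test Dataset, and a list of fitted models (the model-comparison run convention); the iris data and the two scikit-learn classifiers are illustrative:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from deepchecks.tabular import Dataset
from deepchecks.tabular.checks import MultiModelPerformanceReport

# Load iris as a DataFrame with a 'target' label column and split it.
df = load_iris(as_frame=True).frame
train_df, test_df = train_test_split(df, test_size=0.3, random_state=42)

# Wrap the splits in deepchecks Dataset objects.
train_ds = Dataset(train_df, label="target")
test_ds = Dataset(test_df, label="target")

# Fit two models to compare.
features = [c for c in train_df.columns if c != "target"]
model_a = RandomForestClassifier(random_state=42).fit(train_df[features], train_df["target"])
model_b = LogisticRegression(max_iter=1000).fit(train_df[features], train_df["target"])

# Run the check on both models and display the comparison report.
result = MultiModelPerformanceReport().run(train_ds, test_ds, [model_a, model_b])
result.show()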