MultiModelPerformanceReport

class MultiModelPerformanceReport

Summarize performance scores for multiple models on test datasets.

Parameters
scorers: Union[Mapping[str, Union[str, Callable]], List[str]], default: None

Scorers to override the default scorers. See the supported formats at https://docs.deepchecks.com/stable/user-guide/general/metrics_guide.html

alternative_scorers: Dict[str, Callable], default: None

Deprecated, please use scorers instead.

n_samples: int, default: 1_000_000

Number of samples to use for this check.

random_state: int, default: 42

Random seed for all check internals.

__init__(scorers: Optional[Union[Mapping[str, Union[str, Callable]], List[str]]] = None, alternative_scorers: Optional[Dict[str, Callable]] = None, n_samples: int = 1000000, random_state: int = 42, **kwargs)
__new__(*args, **kwargs)
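
A minimal construction sketch, assuming the check is imported from deepchecks.tabular.checks; the scorer names are illustrative, see the metrics guide linked above for the supported formats:

    from deepchecks.tabular.checks import MultiModelPerformanceReport

    # Default configuration: built-in scorers, up to 1_000_000 sampled rows.
    check = MultiModelPerformanceReport()

    # Override the default scorers, lower the sample cap and pin the seed.
    check = MultiModelPerformanceReport(
        scorers=['precision_macro', 'recall_macro'],  # illustrative scorer names
        n_samples=100_000,
        random_state=0,
    )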

Methods

MultiModelPerformanceReport.add_condition(...)

Add new condition function to the check.

MultiModelPerformanceReport.clean_conditions()

Remove all conditions from this check instance.

MultiModelPerformanceReport.conditions_decision(result)

Run conditions on given result.

MultiModelPerformanceReport.config([...])

Return check instance config.

MultiModelPerformanceReport.from_config(conf)

Return check object from a CheckConfig object.

MultiModelPerformanceReport.from_json(conf)

Deserialize check instance from JSON string.

MultiModelPerformanceReport.metadata([...])

Return check metadata.

MultiModelPerformanceReport.name()

Name of class in split camel case.

MultiModelPerformanceReport.params([...])

Return parameters to show when printing the check.

MultiModelPerformanceReport.remove_condition(index)

Remove given condition by index.

MultiModelPerformanceReport.run(...)

Initialize context and pass to check logic.

MultiModelPerformanceReport.run_logic(...)

Run check logic.

MultiModelPerformanceReport.to_json([indent])

Serialize check instance to JSON string.
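
A rough sketch of the generic serialization methods listed above, used to persist a configured check and rebuild an equivalent instance (exact JSON contents depend on the installed version):

    from deepchecks.tabular.checks import MultiModelPerformanceReport

    check = MultiModelPerformanceReport(n_samples=500_000)

    # Serialize the configured check instance and rebuild it from the JSON string.
    as_json = check.to_json(indent=2)
    restored = MultiModelPerformanceReport.from_json(as_json)

    print(MultiModelPerformanceReport.name())  # class name in split camel case
    print(restored.params())                   # parameters shown when printing the check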

Examples
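
A sketch of an end-to-end run comparing two scikit-learn classifiers on a shared train/test split; the Dataset construction and the argument order passed to run(...) reflect the deepchecks tabular API as understood here and may need adjusting to your version:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.model_selection import train_test_split

    from deepchecks.tabular import Dataset
    from deepchecks.tabular.checks import MultiModelPerformanceReport

    # Build a small labelled dataset and split it.
    frame = load_iris(as_frame=True).frame
    train_df, test_df = train_test_split(frame, test_size=0.33, random_state=42)
    train_ds = Dataset(train_df, label='target', cat_features=[])
    test_ds = Dataset(test_df, label='target', cat_features=[])

    # Train two models to compare on the same split.
    X_train = train_df.drop(columns='target')
    y_train = train_df['target']
    clf1 = AdaBoostClassifier(random_state=0).fit(X_train, y_train)
    clf2 = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Run the comparison check across both models.
    check = MultiModelPerformanceReport()
    result = check.run(train_ds, test_ds, [clf1, clf2])
    result.show()  # display the per-model performance comparison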