SingleDatasetPerformance
- class SingleDatasetPerformance
Calculate performance metrics of a given model on a given dataset.
- Parameters
- scorers: Union[Dict[str, Union[Callable, str]], List[Any]], default: None
Scorers to override the default scorers (metrics). See the supported formats at https://docs.deepchecks.com/stable/user-guide/general/metrics_guide.html
- n_samples: Optional[int], default: 10000
Number of samples to use for the check. If None, all samples will be used.
- __init__(scorers: Optional[Union[Dict[str, Union[Callable, str]], List[Any]]] = None, n_samples: Optional[int] = 10000, **kwargs)
- __new__(*args, **kwargs)
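Before the method reference, here is a minimal usage sketch. It assumes the deepchecks.vision variant of this check (the ignite-based compute/update methods below suggest so); `train_ds` is a hypothetical, pre-built VisionData object, and the exact `run` signature can vary between deepchecks versions.

```python
# Minimal sketch, assuming the deepchecks.vision API.
# `train_ds` is a hypothetical VisionData object prepared elsewhere.
from deepchecks.vision.checks import SingleDatasetPerformance

# Override the default metrics; scorers accept strings or callables
# (see the metrics guide linked above).
check = SingleDatasetPerformance(scorers={'accuracy': 'accuracy'})

result = check.run(train_ds)  # run the check on a single dataset
result.show()                 # render the metric values
```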
Methods

| Method | Description |
| --- | --- |
| add_condition | Add new condition function to the check. |
| add_condition_greater_than | Add condition - the result is greater than the threshold. |
| add_condition_less_than | Add condition - the result is less than the threshold. |
| clean_conditions | Remove all conditions from this check instance. |
| compute | Compute the metric result using the ignite metrics compute method and reduce to a scalar. |
| conditions_decision | Run conditions on given result. |
| config | Return check configuration. |
| from_config | Return check object from a CheckConfig object. |
| from_json | Deserialize check instance from JSON string. |
| greater_is_better | Return True if the check reduce_output is better when it is greater. |
| initialize_run | Initialize the metric for the check, and validate task type is relevant. |
| metadata | Return check metadata. |
| name | Name of class in split camel case. |
| params | Return parameters to show when printing the check. |
| reduce_output | Return the values of the metrics for the dataset provided in a {metric: value} format. |
| remove_condition | Remove given condition by index. |
| run | Run check. |
| to_json | Serialize check instance to JSON string. |
| update | Update the metrics by passing the batch to ignite metric update method. |
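A short, hedged sketch of the condition and serialization workflow from the table above. The 0.7 threshold and the `train_ds` object are illustrative assumptions, not values taken from this page.

```python
# Sketch of the condition and serialization workflow; `train_ds` is a
# hypothetical VisionData object, and 0.7 is an arbitrary threshold.
from deepchecks.vision.checks import SingleDatasetPerformance

# add_condition_* methods return the check itself, so calls can be chained.
check = SingleDatasetPerformance().add_condition_greater_than(0.7)

result = check.run(train_ds)
print(result.passed_conditions())  # True only if all metric results exceed 0.7

# Round-trip the configured check through JSON (to_json / from_json above).
blob = check.to_json()
restored = SingleDatasetPerformance.from_json(blob)
```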