data_integrity#

data_integrity(image_properties: Optional[List[Dict[str, Any]]] = None, n_show_top: int = 5, label_properties: Optional[List[Dict[str, Any]]] = None, **kwargs) → Suite#

Create a suite that includes integrity checks.

List of Checks#

Check Example              API Reference
Image Property Outliers    ImagePropertyOutliers
Label Property Outliers    LabelPropertyOutliers

Parameters
image_properties: List[Dict[str, Any]], default: None

List of properties. Replaces the default deepchecks properties. Each property is a dictionary with the keys ‘name’ (str), ‘method’ (Callable) and ‘output_type’ (str), describing attributes of that method. ‘output_type’ must be one of:

- ‘numeric’ - for continuous ordinal outputs.
- ‘categorical’ - for discrete, non-ordinal outputs. These can still be numbers, but those numbers carry no inherent order or value.

For more on image / label properties, see the vision_properties_guide.
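As a sketch of the expected format (the property name and method below are hypothetical examples, not part of deepchecks), a custom numeric image property could look like:

```python
import numpy as np

# Hypothetical custom property: mean pixel brightness per image.
# A property method receives a batch of images and must return
# one value per image.
def mean_brightness(images):
    return [float(np.mean(img)) for img in images]

# The list passed as `image_properties`, replacing the defaults.
custom_image_properties = [
    {
        "name": "Mean Brightness",   # display name of the property
        "method": mean_brightness,   # callable computed per batch
        "output_type": "numeric",    # continuous ordinal output
    },
]
```

Such a list would then be passed as `data_integrity(image_properties=custom_image_properties)`.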

n_show_top: int, default: 5

Number of samples to show from each direction (upper and lower limit).

label_properties: List[Dict[str, Any]], default: None

List of properties. Replaces the default deepchecks properties. Each property is a dictionary with the keys ‘name’ (str), ‘method’ (Callable) and ‘output_type’ (str), describing attributes of that method. ‘output_type’ must be one of:

- ‘numeric’ - for continuous ordinal outputs.
- ‘categorical’ - for discrete, non-ordinal outputs. These can still be numbers, but those numbers carry no inherent order or value.

For more on image / label properties, see the vision_properties_guide.
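Label properties follow the same shape. The name and method here are illustrative, assuming detection-style labels where each sample holds a sequence of bounding boxes:

```python
# Hypothetical custom label property: number of bounding boxes
# per sample. The method receives a batch of labels and returns
# one value per label.
def bbox_count(labels):
    return [len(label) for label in labels]

custom_label_properties = [
    {
        "name": "Bounding Box Count",
        "method": bbox_count,
        "output_type": "numeric",
    },
]
```

This list would then be passed as `data_integrity(label_properties=custom_label_properties)`.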

**kwargs: dict

Additional arguments to pass to the checks.

Returns
Suite

A suite that includes integrity checks.

Examples

>>> from deepchecks.vision.suites import data_integrity
>>> suite = data_integrity()
>>> result = suite.run(train_dataset)  # train_dataset: a VisionData instance
>>> result.show()
run(self, train_dataset: Optional[VisionData] = None, test_dataset: Optional[VisionData] = None, model: Optional[Module] = None, scorers: Optional[Mapping[str, Metric]] = None, scorers_per_class: Optional[Mapping[str, Metric]] = None, device: Optional[Union[str, device]] = None, random_state: int = 42, with_display: bool = True, n_samples: Optional[int] = None, train_predictions: Optional[Dict[int, Union[Sequence[Tensor], Tensor]]] = None, test_predictions: Optional[Dict[int, Union[Sequence[Tensor], Tensor]]] = None, model_name: str = '') SuiteResult#

Run all checks.

Parameters
train_dataset: Optional[VisionData], default: None

Object representing the data the estimator was fitted on.

test_dataset: Optional[VisionData], default: None

Object representing the data the estimator predicts on.

model: Optional[nn.Module], default: None

A fitted PyTorch model (torch.nn.Module) instance.

model_name: str, default: ‘’

The name of the model.

scorers: Optional[Mapping[str, Metric]], default: None

Dict of scorer names to a Metric.

scorers_per_class: Optional[Mapping[str, Metric]], default: None

Dict of scorers for classification without averaging over the classes. See the scikit-learn docs: https://scikit-learn.org/stable/modules/model_evaluation.html#from-binary-to-multiclass-and-multilabel

device: Optional[Union[str, torch.device]], default: None

Processing unit to use.

random_state: int, default: 42

A seed to set for pseudo-random functions.

n_samples: Optional[int], default: None

Number of samples to run the checks on; if None, all samples are used.

with_display: bool, default: True

Flag that determines whether checks compute their display output (irrelevant for some checks).

train_predictions: Optional[Dict[int, Union[Sequence[torch.Tensor], torch.Tensor]]], default: None

Dictionary of the model’s predictions over the train dataset (keys are sample indices).

test_predictions: Optional[Dict[int, Union[Sequence[torch.Tensor], torch.Tensor]]], default: None

Dictionary of the model’s predictions over the test dataset (keys are sample indices).

Returns
SuiteResult

All results produced by the initialized checks.