LabelDrift#

class LabelDrift[source]#

Calculate label drift between the train and test datasets, using statistical measures.

The check calculates a drift score for the label in the test dataset by comparing its distribution to that of the train dataset. As the label may be complex, we calculate different properties of the label and check their distributions.

A label property is any function that takes labels and returns a list of values. Each value represents a property of the label, such as the number of objects in an image or the tilt of each bounding box in an image.
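As an illustration, a custom label property for object detection could count the bounding boxes in each label. This is a minimal sketch: the per-label format used here (a list of bounding boxes per image) is an assumption for illustration, and the property-dictionary keys are those described under the label_properties parameter below.

```python
# A label property is a function that takes labels and returns one value per label.
# Here each label is assumed to be a list of bounding boxes (illustrative format).
def number_of_bboxes(labels):
    """Return the number of bounding boxes in each label."""
    return [len(label) for label in labels]

# Property dictionary in the format expected by the label_properties parameter:
bbox_count_property = {
    'name': 'Number of Bounding Boxes',
    'method': number_of_bboxes,
    'output_type': 'numerical',
}
```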

There are default properties per task:

For classification:

  • distribution of classes

For object detection:

  • distribution of classes

  • distribution of bounding box areas

  • distribution of number of bounding boxes per image

For numerical distributions, we use the Kolmogorov-Smirnov statistic. See https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test We also support Earth Mover’s Distance (EMD). See https://en.wikipedia.org/wiki/Wasserstein_metric
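The KS statistic is the maximum absolute difference between the two empirical cumulative distribution functions. The following is a simplified, self-contained sketch of the measure itself, not deepchecks' internal implementation:

```python
def ks_statistic(train_values, test_values):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs (illustrative sketch)."""
    points = sorted(set(train_values) | set(test_values))
    n_train, n_test = len(train_values), len(test_values)
    max_diff = 0.0
    for x in points:
        # Empirical CDF value of each sample at point x
        cdf_train = sum(1 for v in train_values if v <= x) / n_train
        cdf_test = sum(1 for v in test_values if v <= x) / n_test
        max_diff = max(max_diff, abs(cdf_train - cdf_test))
    return max_diff
```

Identical samples score 0, while fully separated samples score 1.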

For categorical distributions, we use Cramer’s V. See https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_V We also support the Population Stability Index (PSI). See https://www.lexjansen.com/wuss/2017/47_Final_Paper_PDF.pdf.
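To illustrate the categorical measure, Cramer’s V can be computed from the 2×k contingency table of train/test category counts. This is a simplified sketch that omits deepchecks' rare-category binning and class balancing:

```python
import math
from collections import Counter

def cramers_v(train_labels, test_labels):
    """Cramer's V for a 2 x k train/test contingency table (simplified sketch)."""
    categories = sorted(set(train_labels) | set(test_labels))
    if len(categories) < 2:
        return 0.0  # a single shared category carries no drift signal
    train_counts, test_counts = Counter(train_labels), Counter(test_labels)
    n = len(train_labels) + len(test_labels)
    chi2 = 0.0
    # Chi-squared over the two rows (train, test) of the contingency table
    for row_counts, row_total in ((train_counts, len(train_labels)),
                                  (test_counts, len(test_labels))):
        for cat in categories:
            expected = (train_counts[cat] + test_counts[cat]) * row_total / n
            chi2 += (row_counts[cat] - expected) ** 2 / expected
    # With 2 rows, min(rows - 1, columns - 1) == 1 whenever there are >= 2 categories
    return math.sqrt(chi2 / n)
```

Identical class distributions give 0; strongly diverging ones approach 1.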

Note: In the case of highly imbalanced classes, it is recommended to use Cramer’s V and to set the balance_classes parameter to True.

Parameters
label_properties: List[Dict[str, Any]], default: None

List of properties. Replaces the default deepchecks properties. Each property is a dictionary with keys 'name' (str), 'method' (Callable) and 'output_type' (str), representing attributes of said method. 'output_type' must be one of:

  • 'numerical' - for continuous ordinal outputs.

  • 'categorical' - for discrete, non-ordinal outputs. These can still be numbers, but these numbers do not have inherent value.

  • 'class_id' - for properties that return the class_id. This is used because these properties are later matched with the VisionData.label_map, if one was given.

For more on image / label properties, see the guide about Vision Properties.

margin_quantile_filter: float, default: 0.025

Float in range [0, 0.5), representing which margins (high and low quantiles) of the distribution will be filtered out of the EMD calculation. This is done so that extreme values do not affect the calculation disproportionately. This filter is applied to both distributions, in both margins.
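The filtering idea can be sketched for a single distribution as follows, using a simplified nearest-rank quantile; deepchecks' exact quantile computation may differ:

```python
def filter_margins(values, margin_quantile=0.025):
    """Drop values below the lower-margin quantile or above the
    upper-margin quantile (simplified nearest-rank quantiles)."""
    ordered = sorted(values)
    n = len(ordered)
    low = ordered[int(margin_quantile * (n - 1))]
    high = ordered[int((1 - margin_quantile) * (n - 1))]
    return [v for v in values if low <= v <= high]
```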

min_category_size_ratio: float, default: 0.01

Minimum size ratio for categories. Categories with a size ratio lower than this number are binned into an “Other” category. Ignored if balance_classes=True.

max_num_categories_for_drift: int, default: None

Only for discrete properties. Max number of allowed categories. If there are more, they are binned into an “Other” category.

max_num_categories_for_display: int, default: 10

Max number of categories to show in plot.

show_categories_by: str, default: ‘largest_difference’

Specify which categories to show in the categorical features’ graphs, as the number of shown categories is limited by max_num_categories_for_display. Possible values:

  • ‘train_largest’: Show the largest train categories.

  • ‘test_largest’: Show the largest test categories.

  • ‘largest_difference’: Show the categories with the largest difference between train and test.

numerical_drift_method: str, default: “KS”

Decides which method to use on numerical variables. Possible values are: “EMD” for Earth Mover’s Distance, “KS” for the Kolmogorov-Smirnov statistic.

categorical_drift_method: str, default: “cramers_v”

Decides which method to use on categorical variables. Possible values are: “cramers_v” for Cramer’s V, “PSI” for the Population Stability Index.

balance_classes: bool, default: False

If True, all categories will have equal weight in the Cramer’s V score. This is useful when the categorical variable is highly imbalanced and we want to be alerted to changes in proportion to each category’s size, not only relative to the entire dataset. Requires categorical_drift_method = “cramers_v”. If True, the variable frequency plot will be created with a log scale on the y-axis.

aggregation_method: str, default: None

Argument for the reduce_output functionality; decides how to aggregate the individual property drift scores into a collective score between 0 and 1. Possible values:

  • ‘mean’: Mean of all property scores.

  • ‘max’: Maximum of all property drift scores.

  • ‘none’: No averaging. Return a dict with a drift score for each property.
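The aggregation modes can be sketched as follows; the property names in the example dict are hypothetical, and this stands in for the check's internal reduce logic only as an illustration:

```python
def aggregate_scores(scores, aggregation_method=None):
    """Aggregate per-property drift scores into a single value,
    mirroring the 'mean' / 'max' / None modes (illustrative sketch)."""
    if aggregation_method == 'mean':
        return sum(scores.values()) / len(scores)
    if aggregation_method == 'max':
        return max(scores.values())
    # None (or 'none'): no averaging, keep a score per property
    return scores

# Hypothetical per-property drift scores:
property_scores = {'Samples Per Class': 0.1, 'Bounding Box Area': 0.4}
```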

min_samples: int, default: 10

Minimum number of samples required to calculate the drift score. If there are not enough samples for either train or test, the check will return None for that property. If there are not enough samples for all properties, the check will raise a NotEnoughSamplesError exception.

n_samples: Optional[int], default: 10000

Number of samples to use for the check. If None, all samples will be used.

__init__(label_properties: Optional[List[Dict[str, Any]]] = None, margin_quantile_filter: float = 0.025, max_num_categories_for_drift: Optional[int] = None, min_category_size_ratio: float = 0.01, max_num_categories_for_display: int = 10, show_categories_by: str = 'largest_difference', numerical_drift_method: str = 'KS', categorical_drift_method: str = 'cramers_v', balance_classes: bool = False, aggregation_method: Optional[str] = None, min_samples: Optional[int] = 10, n_samples: Optional[int] = 10000, **kwargs)[source]#
__new__(*args, **kwargs)#

Methods

LabelDrift.add_condition(name, ...)

Add new condition function to the check.

LabelDrift.add_condition_drift_score_less_than([...])

Add condition - require label properties drift score to be less than a certain threshold.

LabelDrift.clean_conditions()

Remove all conditions from this check instance.

LabelDrift.compute(context)

Calculate drift on label properties samples that were collected during update() calls.

LabelDrift.conditions_decision(result)

Run conditions on given result.

LabelDrift.config([include_version, ...])

Return check configuration.

LabelDrift.from_config(conf[, version_unmatch])

Return check object from a CheckConfig object.

LabelDrift.from_json(conf[, version_unmatch])

Deserialize check instance from JSON string.

LabelDrift.greater_is_better()

Return True if the check reduce_output is better when it is greater.

LabelDrift.initialize_run(context)

Initialize run.

LabelDrift.metadata([with_doc_link])

Return check metadata.

LabelDrift.name()

Name of class in split camel case.

LabelDrift.params([show_defaults])

Return parameters to show when printing the check.

LabelDrift.property_reduce(...)

Return an aggregated drift score based on aggregation method defined.

LabelDrift.reduce_output(check_result)

Return label drift score per label property.

LabelDrift.remove_condition(index)

Remove given condition by index.

LabelDrift.run(train_dataset, test_dataset)

Run check.

LabelDrift.to_json([indent, ...])

Serialize check instance to JSON string.

LabelDrift.update(context, batch, dataset_kind)

Perform update on batch for train or test properties.

Examples#