DataDuplicates

class DataDuplicates

Checks for duplicate samples in the dataset.

Parameters
columns: Union[Hashable, List[Hashable]], default: None

List of columns to check; if none is given, checks all columns except the ignored ones.

ignore_columns: Union[Hashable, List[Hashable]], default: None

List of columns to ignore; if none is given, checks are based on the columns parameter.

n_to_show: int, default: 5

Number of the most common duplicated samples to show.

n_samples: int, default: 10_000_000

Number of samples to use for this check.

random_state: int, default: 42

Random seed for all check internals.

__init__(columns: Optional[Union[Hashable, List[Hashable]]] = None, ignore_columns: Optional[Union[Hashable, List[Hashable]]] = None, n_to_show: int = 5, n_samples: int = 10000000, random_state: int = 42, **kwargs)
__new__(*args, **kwargs)
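The columns, ignore_columns, and n_to_show parameters together describe a subset-then-count operation: restrict the data to the checked columns, count identical rows, and report the most common duplicates. A minimal plain-pandas sketch of that idea (not deepchecks' internal implementation; the column names and data below are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "a": [1, 1, 1, 2, 3],
    "b": ["x", "x", "x", "y", "z"],
    "c": [10, 20, 30, 40, 50],  # pretend this column is ignored
})

# Mimic the columns / ignore_columns parameters.
columns = list(df.columns)
ignore_columns = ["c"]
checked = [col for col in columns if col not in ignore_columns]
n_to_show = 5

# Count how often each row (over the checked columns) appears,
# then keep the n_to_show most common duplicated rows.
counts = df.groupby(checked, dropna=False).size()
duplicates = counts[counts > 1].sort_values(ascending=False).head(n_to_show)
print(duplicates)
```

Here rows 0-2 agree on the checked columns `a` and `b` (the differing `c` values are ignored), so a single duplicated sample with count 3 is reported.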

Methods

DataDuplicates.add_condition(name, ...)

Add new condition function to the check.

DataDuplicates.add_condition_ratio_less_or_equal([...])

Add condition - require the duplicate ratio to be less than or equal to max_ratio.

DataDuplicates.clean_conditions()

Remove all conditions from this check instance.

DataDuplicates.conditions_decision(result)

Run conditions on given result.

DataDuplicates.config([include_version])

Return check configuration (conditions' configuration not yet supported).

DataDuplicates.from_config(conf[, ...])

Return check object from a CheckConfig object.

DataDuplicates.from_json(conf[, version_unmatch])

Deserialize check instance from JSON string.

DataDuplicates.metadata([with_doc_link])

Return check metadata.

DataDuplicates.name()

Name of class in split camel case.

DataDuplicates.params([show_defaults])

Return parameters to show when printing the check.

DataDuplicates.remove_condition(index)

Remove given condition by index.

DataDuplicates.run(dataset[, model, ...])

Run check.

DataDuplicates.run_logic(context, dataset_kind)

Run check.

DataDuplicates.to_json([indent])

Serialize check instance to JSON string.
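The ratio condition above compares the fraction of duplicated samples against a threshold. A hedged sketch of one plausible reading of that ratio (duplicates beyond the first occurrence, divided by total samples), using plain pandas rather than deepchecks' own computation:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2, 3], "b": ["x", "x", "y", "z"]})

# Rows marked True are repeats of an earlier, identical row.
n_duplicates = int(df.duplicated().sum())
duplicate_ratio = n_duplicates / len(df)

max_ratio = 0.1  # hypothetical threshold, as passed to add_condition_ratio_less_or_equal
passed = duplicate_ratio <= max_ratio
print(duplicate_ratio, passed)
```

With one repeated row out of four, the ratio is 0.25 and a 0.1 threshold fails; deepchecks may define the ratio slightly differently, so treat this only as an illustration of the condition's shape.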

Examples