ConflictingLabels

class ConflictingLabels

Find samples that have identical feature values but different labels.

Parameters
columns: Union[Hashable, List[Hashable]], default: None

List of columns to check; if none is given, checks all columns except the ignored ones.

ignore_columns: Union[Hashable, List[Hashable]], default: None

List of columns to ignore; if none is given, checking is based on the columns variable.

n_to_show: int, default: 5

Number of the most common ambiguous samples to show.

__init__(columns: Optional[Union[Hashable, List[Hashable]]] = None, ignore_columns: Optional[Union[Hashable, List[Hashable]]] = None, n_to_show: int = 5, **kwargs)
__new__(*args, **kwargs)
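
A minimal construction sketch, assuming the import path used by recent deepchecks releases (deepchecks.tabular.checks); the column names are hypothetical placeholders:

from deepchecks.tabular.checks import ConflictingLabels

# Restrict the check to specific columns (hypothetical names).
check = ConflictingLabels(columns=['age', 'income'])

# Or check every column except some, showing up to 10 ambiguous samples.
check = ConflictingLabels(ignore_columns=['sample_id'], n_to_show=10)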

Methods

ConflictingLabels.add_condition(name, ...)

Add a new condition function to the check.

ConflictingLabels.add_condition_ratio_of_conflicting_labels_less_or_equal([...])

Add condition requiring the ratio of samples with conflicting labels to be less than or equal to max_ratio.

ConflictingLabels.clean_conditions()

Remove all conditions from this check instance.

ConflictingLabels.conditions_decision(result)

Run conditions on the given result.

ConflictingLabels.config()

Return check configuration (conditions' configuration not yet supported).

ConflictingLabels.from_config(conf)

Return check object from a CheckConfig object.

ConflictingLabels.metadata([with_doc_link])

Return check metadata.

ConflictingLabels.name()

Name of class in split camel case.

ConflictingLabels.params([show_defaults])

Return parameters to show when printing the check.

ConflictingLabels.remove_condition(index)

Remove given condition by index.

ConflictingLabels.run(dataset[, model, ...])

Run check.

ConflictingLabels.run_logic(context, ...)

Run check.

Examples
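
A minimal end-to-end sketch, assuming the deepchecks tabular API (Dataset from deepchecks.tabular); the toy DataFrame and its column names are hypothetical:

import pandas as pd

from deepchecks.tabular import Dataset
from deepchecks.tabular.checks import ConflictingLabels

# Rows 0 and 1 share identical feature values ('a', 1) but carry
# different labels (0 vs 1), so the check should flag them.
df = pd.DataFrame({
    'col1': ['a', 'a', 'b', 'c'],
    'col2': [1, 1, 2, 3],
    'label': [0, 1, 0, 1],
})
dataset = Dataset(df, label='label', cat_features=['col1'])

check = ConflictingLabels(n_to_show=5)
# Fail the condition if any samples at all have conflicting labels.
check.add_condition_ratio_of_conflicting_labels_less_or_equal(0)
result = check.run(dataset)
print(result.value)  # ratio of conflicting samples and the sample groups

With max_ratio=0 the condition fails whenever identical feature vectors disagree on the label; a small positive ratio can be passed instead to tolerate known label noise.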