StringMismatch#
- class StringMismatch[source]#
Detect different variants of string categories (e.g. “mislabeled” vs “mis-labeled”) in a categorical column.
This check tests all the categorical columns within a dataset and searches for variants of similar strings. Specifically, two strings are considered variants of the same category if they are equal after ignoring case and non-letter characters. Example: a column contains the similar strings ‘OK’ and ‘ok.’, which are variants of the same category. Knowing they both exist, we can fix our data so it has only one category.
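The similarity rule above can be sketched in plain Python. This is an illustrative re-implementation of the documented definition, not the library's internal code; `base_form` and `find_variants` are hypothetical helper names:

```python
import re
from collections import defaultdict

def base_form(s: str) -> str:
    """Lowercase and drop non-letter characters (the documented similarity key)."""
    return re.sub(r"[^a-zA-Z]", "", s).lower()

def find_variants(values):
    """Group values by base form; keep only groups with more than one spelling."""
    groups = defaultdict(set)
    for v in values:
        groups[base_form(v)].add(v)
    return {base: sorted(vs) for base, vs in groups.items() if len(vs) > 1}

print(find_variants(["OK", "ok.", "mislabeled", "mis-labeled", "fine"]))
# {'ok': ['OK', 'ok.'], 'mislabeled': ['mis-labeled', 'mislabeled']}
```

Here ‘OK’ and ‘ok.’ share the base form ‘ok’, so they are flagged as variants of one category, while ‘fine’ has no variants and is ignored.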
- Parameters
- columns: Union[Hashable, List[Hashable]] , default: None
Columns to check; if none are given, checks all columns except ignored ones.
- ignore_columns: Union[Hashable, List[Hashable]] , default: None
Columns to ignore; if none are given, checks based on the columns variable.
- n_top_columns: int , optional
Number of columns to show, ordered by feature importance (date, index, and label columns are first).
- aggregation_method: t.Optional[str], default: ‘max’
Argument for the reduce_output functionality; decides how to aggregate the vector of per-feature scores into a single aggregated score. The aggregated score value is between 0 and 1 for all methods. Possible values are:
- ‘l3_weighted’: L3 norm over the per-feature-scores vector, weighted by the feature importance; specifically, sum(FI * PER_FEATURE_SCORES^3)^(1/3). This method takes the feature importance into account yet puts more weight on the per-feature scores, and is recommended for most cases.
- ‘l5_weighted’: Similar to ‘l3_weighted’, but with the L5 norm. Puts even more emphasis on the per-feature scores, and specifically on the largest ones, returning a score closer to the maximum among the per-feature scores.
- ‘weighted’: Weighted mean of per-feature scores based on feature importance.
- ‘max’: Default. Maximum of all the per-feature scores.
- None: No averaging. Returns a dict with a per-feature score for each feature.
- n_samples: int , default: 1_000_000
Number of samples to use for this check.
- random_state: int , default: 42
Random seed for all check internals.
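The aggregation options can be illustrated with a small stand-alone sketch. This is not the library's internal code; `aggregate` is a hypothetical name, and feature importances are assumed normalized to sum to 1:

```python
def aggregate(scores, importance, method="max"):
    """Collapse per-feature scores (each in [0, 1]) into one aggregated score."""
    if method == "max":
        return max(scores)
    if method == "weighted":
        # weighted mean of per-feature scores by feature importance
        return sum(fi * s for fi, s in zip(importance, scores))
    if method in ("l3_weighted", "l5_weighted"):
        # weighted Lp norm: sum(FI * score^p)^(1/p)
        p = 3 if method == "l3_weighted" else 5
        return sum(fi * s ** p for fi, s in zip(importance, scores)) ** (1 / p)
    if method is None:
        return dict(enumerate(scores))  # no averaging: per-feature scores
    raise ValueError(f"unknown aggregation method: {method!r}")

scores, fi = [0.2, 0.8], [0.5, 0.5]
print(aggregate(scores, fi, "max"))       # 0.8
print(aggregate(scores, fi, "weighted"))  # 0.5
```

The higher the norm order p, the closer the result is to the maximum per-feature score, which is why ‘l5_weighted’ emphasizes the largest scores more than ‘l3_weighted’.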
- __init__(columns: Optional[Union[Hashable, List[Hashable]]] = None, ignore_columns: Optional[Union[Hashable, List[Hashable]]] = None, n_top_columns: int = 10, aggregation_method: Optional[str] = 'max', n_samples: int = 1000000, random_state: int = 42, **kwargs)[source]#
- __new__(*args, **kwargs)#
Methods
- add_condition: Add new condition function to the check.
- add_condition_no_variants: Add condition - no variants are allowed.
- add_condition_number_variants_less_or_equal: Add condition - number of variants (per string baseform) is less or equal to threshold.
- add_condition_ratio_variants_less_or_equal: Add condition - percentage of variants in data is less or equal to threshold.
- clean_conditions: Remove all conditions from this check instance.
- conditions_decision: Run conditions on given result.
- config: Return check configuration (conditions' configuration not yet supported).
- feature_reduce: Return an aggregated drift score based on aggregation method defined.
- from_config: Return check object from a CheckConfig object.
- from_json: Deserialize check instance from JSON string.
- greater_is_better: Return True if the check reduce_output is better when it is greater.
- metadata: Return check metadata.
- name: Name of class in split camel case.
- params: Return parameters to show when printing the check.
- reduce_output: Return an aggregated drift score based on aggregation method defined.
- remove_condition: Remove given condition by index.
- run: Run check.
- run_logic: Run check.
- to_json: Serialize check instance to JSON string.
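The ratio-based condition's pass/fail rule can be approximated as follows. This is a hedged sketch of the documented behavior, not the deepchecks implementation; `variant_sample_ratio` and `ratio_condition_passes` are hypothetical names, and the threshold shown is illustrative:

```python
import re
from collections import defaultdict

def variant_sample_ratio(values):
    """Fraction of samples whose value has at least one other spelling variant."""
    groups = defaultdict(list)
    for v in values:
        # group by base form: lowercase, letters only
        groups[re.sub(r"[^a-zA-Z]", "", v).lower()].append(v)
    n_variant = sum(len(vs) for vs in groups.values() if len(set(vs)) > 1)
    return n_variant / len(values) if values else 0.0

def ratio_condition_passes(values, max_ratio=0.01):
    """Pass when the share of variant samples is at or below max_ratio."""
    return variant_sample_ratio(values) <= max_ratio

data = ["OK", "ok.", "fine", "fine"]
print(variant_sample_ratio(data))  # 0.5  (2 of 4 samples are variants)
```

Note that repeated identical values (‘fine’, ‘fine’) do not count as variants; only differing spellings that share a base form do.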