PercentOfNulls

class PercentOfNulls

Percent of ‘Null’ values in each column.

Parameters
columns: Union[Hashable, List[Hashable]], default: None

List of columns to check. If none is given, checks all columns except the ignored ones.

ignore_columns: Union[Hashable, List[Hashable]], default: None

List of columns to ignore. If none is given, checks are based on the columns variable.

max_features_to_show: int, default: 5

Maximum number of features to show, selecting the top features by percent of nulls.

aggregation_method: Optional[str], default: 'max'

Argument for the reduce_output functionality; decides how to aggregate the vector of per-feature scores into a single aggregated score. The aggregated score is between 0 and 1 for all methods. Possible values are:

'l3_weighted': L3 norm over the per-feature scores vector, weighted by the feature importance; specifically, sum(FI * PER_FEATURE_SCORES^3)^(1/3). This method takes feature importance into account yet puts more weight on the per-feature scores, and is recommended for most cases.

'l5_weighted': Similar to 'l3_weighted', but with the L5 norm. Puts even more emphasis on the per-feature scores, and specifically on the largest ones, returning a score closer to the maximum among the per-feature scores.

'weighted': Weighted mean of per-feature scores based on feature importance.

'max': Maximum of all the per-feature scores.

None: No averaging. Returns a dict with a per-feature score for each feature.
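As an illustration, the aggregation formulas above can be sketched in plain Python; the feature-importance and per-feature-score values below are made up for the example:

```python
# Hedged sketch of the aggregation methods described above.
feature_importance = [0.5, 0.3, 0.2]   # assumed normalized to sum to 1
per_feature_scores = [0.1, 0.4, 0.05]  # e.g. per-feature percent of nulls

# 'l3_weighted': sum(FI * PER_FEATURE_SCORES^3)^(1/3)
l3_weighted = sum(fi * s ** 3 for fi, s in zip(feature_importance, per_feature_scores)) ** (1 / 3)

# 'weighted': importance-weighted mean of the per-feature scores
weighted = sum(fi * s for fi, s in zip(feature_importance, per_feature_scores))

# 'max': maximum of the per-feature scores (the default for this check)
maximum = max(per_feature_scores)

print(l3_weighted, weighted, maximum)
```

Note how 'l3_weighted' sits closer to the largest per-feature score than the plain weighted mean does, while still discounting scores from unimportant features.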

n_samples: int, default: 100_000

Number of samples to use for this check.

random_state: int, default: 42

Random seed for all check internals.

__init__(columns: Optional[Union[Hashable, List[Hashable]]] = None, ignore_columns: Optional[Union[Hashable, List[Hashable]]] = None, max_features_to_show: int = 5, aggregation_method: Optional[str] = 'max', n_samples: int = 100000, random_state: int = 42, **kwargs)
__new__(*args, **kwargs)

Methods

PercentOfNulls.add_condition(name, ...)

Add new condition function to the check.

PercentOfNulls.add_condition_percent_of_nulls_not_greater_than([...])

Add condition - percent of null values in each column is not greater than the threshold.

PercentOfNulls.clean_conditions()

Remove all conditions from this check instance.

PercentOfNulls.conditions_decision(result)

Run conditions on given result.

PercentOfNulls.config([include_version, ...])

Return check configuration (conditions' configuration not yet supported).

PercentOfNulls.feature_reduce(...)

Return an aggregated drift score based on the defined aggregation method.

PercentOfNulls.from_config(conf[, ...])

Return check object from a CheckConfig object.

PercentOfNulls.from_json(conf[, version_unmatch])

Deserialize check instance from JSON string.

PercentOfNulls.greater_is_better()

Return True if the check reduce_output is better when it is greater.

PercentOfNulls.metadata([with_doc_link])

Return check metadata.

PercentOfNulls.name()

Name of class in split camel case.

PercentOfNulls.params([show_defaults])

Return parameters to show when printing the check.

PercentOfNulls.reduce_output(check_result)

Return an aggregated drift score based on the defined aggregation method.

PercentOfNulls.remove_condition(index)

Remove given condition by index.

PercentOfNulls.run(dataset[, model, ...])

Run check.

PercentOfNulls.run_logic(context, dataset_kind)

Run check logic.

PercentOfNulls.to_json([indent, ...])

Serialize check instance to JSON string.

Examples