NewLabelTrainTest

class NewLabelTrainTest

Find new labels in the test set, i.e. label values that appear in the test data but not in the train data.

Parameters
n_samples : int, default: 10_000_000

Number of samples to use for this check.

random_state : int, default: 42

Random seed for all check internals.

__init__(n_samples: int = 10000000, random_state: int = 42, **kwargs)
__new__(*args, **kwargs)
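
The defaults can be overridden at construction time. A minimal sketch (the import path assumes the tabular subpackage of deepchecks; the parameter values are illustrative, not recommendations):

    from deepchecks.tabular.checks import NewLabelTrainTest

    # Cap the check at 100,000 sampled rows and pin the seed so the
    # subsampling is reproducible across runs.
    check = NewLabelTrainTest(n_samples=100_000, random_state=0)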

Methods

NewLabelTrainTest.add_condition(name, ...)

Add new condition function to the check.

NewLabelTrainTest.add_condition_new_label_ratio_less_or_equal([...])

Add condition - require the label column's ratio of new-label samples to be less than or equal to the threshold (a usage sketch appears under Examples below).

NewLabelTrainTest.add_condition_new_labels_number_less_or_equal([...])

Add condition - require the label column's number of different new labels to be less than or equal to the threshold.

NewLabelTrainTest.clean_conditions()

Remove all conditions from this check instance.

NewLabelTrainTest.conditions_decision(result)

Run conditions on given result.

NewLabelTrainTest.config([include_version, ...])

Return check configuration (conditions' configuration not yet supported).

NewLabelTrainTest.from_config(conf[, ...])

Return check object from a CheckConfig object.

NewLabelTrainTest.from_json(conf[, ...])

Deserialize check instance from JSON string.

NewLabelTrainTest.greater_is_better()

Return True if a greater reduce_output value indicates a better check result.

NewLabelTrainTest.metadata([with_doc_link])

Return check metadata.

NewLabelTrainTest.name()

Return the class name, split from camel case into words.

NewLabelTrainTest.params([show_defaults])

Return parameters to show when printing the check.

NewLabelTrainTest.reduce_output(check_result)

Reduce check result value.

NewLabelTrainTest.remove_condition(index)

Remove given condition by index.

NewLabelTrainTest.run(train_dataset, ...[, ...])

Run check.

NewLabelTrainTest.run_logic(context)

Run check.

NewLabelTrainTest.to_json([indent, ...])

Serialize check instance to JSON string.

Examples
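
A minimal end-to-end sketch, assuming the tabular subpackage of deepchecks (import paths, condition defaults, and the exact keys of result.value may vary between versions); the toy data below is invented for illustration:

    import pandas as pd

    from deepchecks.tabular import Dataset
    from deepchecks.tabular.checks import NewLabelTrainTest

    # Toy data: the label "cat" appears only in the test set.
    train_df = pd.DataFrame({"x": [1, 2, 3, 4],
                             "label": ["dog", "dog", "fish", "fish"]})
    test_df = pd.DataFrame({"x": [5, 6, 7, 8],
                            "label": ["dog", "cat", "fish", "cat"]})

    train_ds = Dataset(train_df, label="label", cat_features=[])
    test_ds = Dataset(test_df, label="label", cat_features=[])

    # add_condition_* methods return the check instance, so conditions
    # can be chained. The thresholds here are illustrative.
    check = (
        NewLabelTrainTest()
        .add_condition_new_labels_number_less_or_equal(0)
        .add_condition_new_label_ratio_less_or_equal(0.05)
    )

    result = check.run(train_ds, test_ds)
    print(result.value)                # details of the new labels found
    print(result.passed_conditions())  # False here: "cat" is new in test

    # The configured check can also be round-tripped through JSON.
    restored = NewLabelTrainTest.from_json(check.to_json())

Because the conditions are attached before run, the returned CheckResult carries their pass/fail decisions alongside the raw check value.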