DatasetsSizeComparison#

class DatasetsSizeComparison[source]#

Verify the test dataset size by comparing it to the train dataset size.

__init__(**kwargs)[source]#
__new__(*args, **kwargs)#

Methods

DatasetsSizeComparison.add_condition(name, ...)

Add a new condition function to the check.

DatasetsSizeComparison.add_condition_test_size_greater_or_equal([value])

Add a condition verifying that the size of the test dataset is greater than or equal to the given threshold.

DatasetsSizeComparison.add_condition_test_train_size_ratio_greater_than([ratio])

Add a condition verifying that the test-train size ratio is greater than the given threshold.

DatasetsSizeComparison.add_condition_train_dataset_greater_or_equal_test()

Add a condition verifying that the train dataset size is greater than or equal to the test dataset size.

DatasetsSizeComparison.clean_conditions()

Remove all conditions from this check instance.

DatasetsSizeComparison.conditions_decision(result)

Run conditions on given result.

DatasetsSizeComparison.config([...])

Return check configuration (conditions' configuration not yet supported).

DatasetsSizeComparison.from_config(conf[, ...])

Return check object from a CheckConfig object.

DatasetsSizeComparison.from_json(conf[, ...])

Deserialize check instance from JSON string.

DatasetsSizeComparison.metadata([with_doc_link])

Return check metadata.

DatasetsSizeComparison.name()

Name of class in split camel case.

DatasetsSizeComparison.params([show_defaults])

Return parameters to show when printing the check.

DatasetsSizeComparison.remove_condition(index)

Remove given condition by index.

DatasetsSizeComparison.run(train_dataset, ...)

Run check.

DatasetsSizeComparison.run_logic(context)

Run check.

DatasetsSizeComparison.to_json([indent, ...])

Serialize check instance to JSON string.

Examples#