PerformanceBias#

class PerformanceBias[source]#

Check for performance differences between subgroups of a feature, optionally accounting for a control variable.

The check identifies ‘performance biases’: a large performance difference for a subgroup compared to the baseline performance of the full population. It is intended for use in fairness analyses.

Subgroups are defined by the categories of a ‘protected’ feature. Numerical features are first binned into quantiles, while categorical features are preserved as-is. The baseline score is the overall score when all subgroups are combined. You can add conditions to flag performance differences outside of given bounds.

Additionally, the analysis may be separated across the categories of a ‘control’ feature. Numerical features are binned and categorical features are re-binned into at most max_bins categories. To account for the control feature, baseline scores and subgroup scores are computed within each of its categories.
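For illustration, a minimal sketch of configuring the check, assuming it is exposed under deepchecks.tabular.checks; the feature names and the condition's keyword argument below are hypothetical:

from deepchecks.tabular.checks import PerformanceBias

# Evaluate accuracy differences across 'gender' subgroups, controlling for 'age'
# (numerical, so it is binned into at most max_bins quantiles).
check = PerformanceBias(
    protected_feature='gender',
    control_feature='age',
    scorer='accuracy',
    max_bins=10,
)

# Flag subgroups whose score falls more than 0.1 below the baseline
# (the keyword name is an assumption about the condition's signature).
check.add_condition_bounded_performance_difference(lower_bound=-0.1)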

Parameters
protected_feature : Hashable

Feature evaluated for differences in performance. Numerical features are binned into max_bins quantiles. Categorical features are not transformed.

control_feature : Hashable, default: None

Feature used to group data prior to evaluating performance differences (differences are only assessed within the groups defined by this feature). Numerical features are binned into max_bins quantiles. Categorical features are re-grouped into at most max_bins groups if necessary.

scorer : str or Tuple[str, Union[str, Callable]], default: None

Name of the performance score function to use.

max_bins : int, default: 10

Maximum number of categories into which control_feature is binned.

min_subgroup_size : int, default: 5

Minimum size of a subgroup for which to compute a performance score.

max_subgroups_per_control_cat_to_display : int, default: 3

Maximum number of subgroups to display per control_feature category.

max_control_cat_to_display : int, default: 3

Maximum number of control_feature categories to display.

n_samples : int, default: 1_000_000

Number of samples from the dataset to use.

random_state : int, default: 42

Random state to use for probability sampling.

__init__(protected_feature: Hashable, control_feature: Optional[Hashable] = None, scorer: Optional[Union[str, Tuple[str, Union[str, Callable]]]] = None, max_bins: int = 10, min_subgroup_size: int = 5, max_subgroups_per_control_cat_to_display: int = 3, max_control_cat_to_display: int = 3, n_samples: int = 1000000, random_state: int = 42, **kwargs)[source]#
__new__(*args, **kwargs)#

Methods

PerformanceBias.add_condition(name, ...)

Add new condition function to the check.

PerformanceBias.add_condition_bounded_performance_difference(...)

Add condition - require performance difference to be between the given bounds.

PerformanceBias.add_condition_bounded_relative_performance_difference(...)

Add condition - require relative performance difference to be between the given bounds.

PerformanceBias.clean_conditions()

Remove all conditions from this check instance.

PerformanceBias.conditions_decision(result)

Run conditions on given result.

PerformanceBias.config([include_version, ...])

Return check configuration (conditions' configuration not yet supported).

PerformanceBias.from_config(conf[, ...])

Return check object from a CheckConfig object.

PerformanceBias.from_json(conf[, ...])

Deserialize check instance from JSON string.

PerformanceBias.metadata([with_doc_link])

Return check metadata.

PerformanceBias.name()

Name of class in split camel case.

PerformanceBias.params([show_defaults])

Return parameters to show when printing the check.

PerformanceBias.remove_condition(index)

Remove given condition by index.

PerformanceBias.run(dataset[, model, ...])

Run check.

PerformanceBias.run_logic(context, dataset_kind)

Run the check logic.

PerformanceBias.to_json([indent, ...])

Serialize check instance to JSON string.

PerformanceBias.validate_attributes()

Validate attributes passed to the check.

Examples#
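A minimal end-to-end sketch, assuming a deepchecks tabular Dataset and a fitted scikit-learn-style classifier named model; the data and feature names below are hypothetical:

import pandas as pd
from deepchecks.tabular import Dataset
from deepchecks.tabular.checks import PerformanceBias

# Hypothetical tabular data; `model` is assumed to be a classifier already
# fitted on the same feature columns.
df = pd.DataFrame({
    'gender': ['F', 'M', 'F', 'M', 'F', 'M'] * 100,
    'age':    [25, 40, 33, 58, 47, 21] * 100,
    'income': [30, 55, 42, 61, 50, 28] * 100,
    'target': [0, 1, 0, 1, 1, 0] * 100,
})
dataset = Dataset(df, label='target', cat_features=['gender'])

# Compare per-gender performance against the overall baseline,
# within each quantile bin of 'age'.
check = PerformanceBias(protected_feature='gender', control_feature='age')
result = check.run(dataset, model)
result.show()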