{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting and preparing the data\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To demonstrate the post processing algorithm we use the \"COMPAS\" dataset from [ProPublica](https://raw.githubusercontent.com/propublica/compas-analysis/master/compas-scores-two-years.csv). The labels represent the two-year recidivism ID, i.e. whether a person got rearrested within two years (label 1) or not (label 0). The features include sex, age, as well as information on prior incidents.\n",
"\n",
"To start, let's download the dataset using `tempeh`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"from tempeh.configurations import datasets\n",
"\n",
"compas_dataset = datasets['compas']()\n",
"X_train, X_test = compas_dataset.get_X(format=pd.DataFrame)\n",
"y_train, y_test = compas_dataset.get_y(format=pd.Series)\n",
"sensitive_features_train, sensitive_features_test = compas_dataset.get_sensitive_features('race', format=pd.Series)\n",
"X_train.loc[0], y_train[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create a fairness-unaware model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we set up a helper function for analyzing the dataset as well as the predictions from our models. Feel free to skip to the next cell for the actual logic."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.cm as cm\n",
"\n",
"# show_proportions is only a helper function for plotting\n",
"def show_proportions(X, sensitive_features, y_pred, y=None, description=None, plot_row_index=1):\n",
" print(\"\\n\" + description)\n",
" plt.figure(plot_row_index)\n",
" plt.title(description)\n",
" plt.ylabel(\"P[recidivism predicted | conditions]\")\n",
" \n",
" indices = {}\n",
" positive_indices = {}\n",
" negative_indices = {}\n",
" recidivism_count = {}\n",
" recidivism_pct = {}\n",
" groups = np.unique(sensitive_features.values)\n",
" n_groups = len(groups)\n",
" max_group_length = max([len(group) for group in groups])\n",
" color = cm.rainbow(np.linspace(0,1,n_groups))\n",
" x_tick_labels_basic = []\n",
" x_tick_labels_by_label = []\n",
" for index, group in enumerate(groups):\n",
" indices[group] = sensitive_features.index[sensitive_features == group]\n",
" recidivism_count[group] = sum(y_pred[indices[group]])\n",
" recidivism_pct[group] = recidivism_count[group]/len(indices[group])\n",
" print(\"P[recidivism predicted | {}] {}= {}\".format(group, \" \"*(max_group_length-len(group)), recidivism_pct[group]))\n",
" \n",
" plt.bar(index + 1, recidivism_pct[group], color=color[index])\n",
" x_tick_labels_basic.append(group)\n",
" \n",
" if y is not None:\n",
" positive_indices[group] = sensitive_features.index[(sensitive_features == group) & (y == 1)]\n",
" negative_indices[group] = sensitive_features.index[(sensitive_features == group) & (y == 0)]\n",
" prob_1 = sum(y_pred[positive_indices[group]])/len(positive_indices[group])\n",
" prob_0 = sum(y_pred[negative_indices[group]])/len(negative_indices[group])\n",
" print(\"P[recidivism predicted | {}, recidivism] {}= {}\".format(group, \" \"*(max_group_length-len(group)) , prob_1))\n",
" print(\"P[recidivism predicted | {}, no recidivism] {}= {}\".format(group, \" \"*(max_group_length-len(group)), prob_0))\n",
"\n",
" plt.bar(n_groups + 1 + 2 * index, prob_1, color=color[index])\n",
" plt.bar(n_groups + 2 + 2 * index, prob_0, color=color[index])\n",
" x_tick_labels_by_label.extend([\"{} recidivism\".format(group), \"{} no recidivism\".format(group)])\n",
" \n",
" x_tick_labels = x_tick_labels_basic + x_tick_labels_by_label\n",
" plt.xticks(range(1, len(x_tick_labels)+1), x_tick_labels, rotation=45, horizontalalignment=\"right\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get started we look at a very basic Logistic Regression model. We fit it to the training data and plot some characteristics of training and test data as well as the predictions of the model on those datasets.\n",
"\n",
"We notice a stark contrast in the predictions, with African-Americans being far more likely to be predicted to reoffend, mirroring the original training data. The disparity persists even when conditioning on the true label: among the samples labeled \"recidivism\", and likewise among those labeled \"no recidivism\", African-Americans are much more likely to be predicted to reoffend than Caucasians. The test data shows a similar disparity."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"from sklearn.linear_model import LogisticRegression\n",
"\n",
"unconstrained_predictor = LogisticRegression(solver='liblinear')\n",
"unconstrained_predictor.fit(X_train, y_train)\n",
" \n",
"# print and plot data from training and test set as well as predictions with fairness-unaware classifier on both sets \n",
"# show only test data related plots by default - uncomment the next two lines to see training data plots as well\n",
"# show_proportions(X_train, sensitive_features_train, y_train, description=\"original training data:\", plot_row_index=1)\n",
"# show_proportions(X_train, sensitive_features_train, unconstrained_predictor.predict(X_train), y_train, description=\"fairness-unaware prediction on training data:\", plot_row_index=2)\n",
"show_proportions(X_test, sensitive_features_test, y_test, description=\"original test data:\", plot_row_index=3)\n",
"show_proportions(X_test, sensitive_features_test, unconstrained_predictor.predict(X_test), y_test, description=\"fairness-unaware prediction on test data:\", plot_row_index=4)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Postprocessing the model to get a fair model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The idea behind postprocessing is to alter the output of the fairness-unaware model to achieve fairness. The postprocessing algorithm requires three input arguments:\n",
"- the matrix of samples X\n",
"- the vector of true labels y\n",
"- the vector of group attribute values A (in the code we refer to it as `sensitive_features`)\n",
"\n",
"The goal is to make the output fair with respect to constraints. The postprocessing algorithm uses one of\n",
"- Demographic Parity (DP): $P\\ [\\ h(X)=\\hat{y}\\ |\\ A=a] = P\\ [\\ h(X)=\\hat{y}\\ ] \\qquad \\forall a, \\hat{y}$\n",
"- Equalized Odds (EO): $P\\ [\\ h(X)=\\hat{y}\\ |\\ A=a, Y=y] = P\\ [\\ h(X)=\\hat{y}\\ |\\ Y=y\\ ] \\qquad \\forall a, \\hat{y}$\n",
"\n",
"where $h(X)$ is the prediction based on the input $X$, $\\hat{y}$ and $y$ are labels, and $a$ is a sensitive feature value. A postprocessed model satisfying DP would have balanced prediction rates across races overall. In this particular scenario, however, it makes more sense to aim for fairness conditioned on the actual outcome, which is what EO provides. EO does not guarantee balanced overall rates. Instead, it ensures parity between the subgroups of each race with label 1 in the training set, and parity between the subgroups of each race with label 0 in the training set. Applied to this scenario, this means that the subgroups of each race who reoffended in the past are equally likely to be predicted to reoffend (and therefore also equally likely not to). Similarly, there is parity between the subgroups of each race without recidivism, but there need not be parity between groups with different training labels. In mathematical terms, using African-Americans and Caucasians as an example:\n",
"\n",
"$$\n",
"P\\ [\\ \\text{recidivism predicted}\\ |\\ \\text{African-American, recidivism}\\ ] = P\\ [\\ \\text{recidivism predicted}\\ |\\ \\text{Caucasian, recidivism}\\ ], \\text{e.g. } 0.95\\\\\n",
"P\\ [\\ \\text{recidivism predicted}\\ |\\ \\text{African-American, no recidivism}\\ ] = P\\ [\\ \\text{recidivism predicted}\\ |\\ \\text{Caucasian, no recidivism}\\ ], \\text{e.g. } 0.15\n",
"$$\n",
"\n",
"but this also means that, within a single race, the subgroups defined by different training labels don't necessarily have parity:\n",
"\n",
"$$\n",
"P[\\text{recidivism predicted} | \\text{African-American, recidivism}] = 0.95 \\neq 0.15 = P[\\text{recidivism predicted} | \\text{African-American, no recidivism}]\n",
"$$\n",
"\n",
"Which disparity metric is the appropriate notion of fairness varies by application scenario. In this case the evaluation focuses on Equalized Odds, because the recidivism prediction should be accurate for each race, and for each subgroup within. The plot for the training data shows the intended outcome, while the plot for the test data exhibits slight variation, likely due to the randomized predictions as well as a slightly different data distribution."
]
},
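{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the two constraints concrete, the following cell computes the demographic parity and equalized odds gaps empirically on a tiny synthetic example. The arrays are made up purely for illustration and are not part of the COMPAS data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# toy labels, predictions, and sensitive feature values (made up for illustration)\n",
"y_true_toy = np.array([1, 0, 1, 0, 1, 0, 1, 0])\n",
"y_pred_toy = np.array([1, 0, 1, 1, 1, 0, 0, 0])\n",
"group_toy = np.array([\"a\", \"a\", \"a\", \"a\", \"b\", \"b\", \"b\", \"b\"])\n",
"\n",
"# demographic parity gap: spread of P[h(X)=1 | A=a] across groups\n",
"selection_rates = [y_pred_toy[group_toy == g].mean() for g in np.unique(group_toy)]\n",
"print(\"DP gap:\", max(selection_rates) - min(selection_rates))\n",
"\n",
"# equalized odds gap: spread of P[h(X)=1 | A=a, Y=y] across groups, for each label y\n",
"for y in [0, 1]:\n",
"    rates = [y_pred_toy[(group_toy == g) & (y_true_toy == y)].mean() for g in np.unique(group_toy)]\n",
"    print(\"EO gap for Y={}:\".format(y), max(rates) - min(rates))"
]
},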
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This wrapper around the unconstrained estimator serves the purpose of mapping the predict\n",
"# method to predict_proba so that we can use real values to get more accurate estimates.\n",
"class LogisticRegressionAsRegression:\n",
" def __init__(self, logistic_regression_estimator):\n",
" self.logistic_regression_estimator = logistic_regression_estimator\n",
" \n",
" def fit(self, X, y):\n",
" self.logistic_regression_estimator.fit(X, y)\n",
" \n",
" def predict(self, X):\n",
" # use predict_proba to get real values instead of 0/1, select only prob for 1\n",
" scores = self.logistic_regression_estimator.predict_proba(X)[:,1]\n",
" return scores"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fairlearn.postprocessing import ThresholdOptimizer\n",
"from copy import deepcopy\n",
"\n",
"unconstrained_predictor_wrapper = LogisticRegressionAsRegression(unconstrained_predictor)\n",
"postprocessed_predictor_EO = ThresholdOptimizer(\n",
" unconstrained_predictor=unconstrained_predictor_wrapper,\n",
" constraints=\"equalized_odds\")\n",
"\n",
"postprocessed_predictor_EO.fit(X_train, y_train, sensitive_features=sensitive_features_train)\n",
"\n",
"fairness_aware_predictions_EO_train = postprocessed_predictor_EO.predict(X_train, sensitive_features=sensitive_features_train)\n",
"fairness_aware_predictions_EO_test = postprocessed_predictor_EO.predict(X_test, sensitive_features=sensitive_features_test)\n",
"\n",
"# show only test data related plot by default - uncomment the next line to see training data plot as well\n",
"# show_proportions(X_train, sensitive_features_train, fairness_aware_predictions_EO_train, y_train, description=\"equalized odds with postprocessed model on training data:\", plot_row_index=1)\n",
"show_proportions(X_test, sensitive_features_test, fairness_aware_predictions_EO_test, y_test, description=\"equalized odds with postprocessed model on test data:\", plot_row_index=2)\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Post Processing in Detail\n",
"\n",
"While the numbers show that this worked, it's not entirely obvious how the algorithm found its solution. The following section provides a deep dive on post processing for Equalized Odds (EO).\n",
"\n",
"The post processing method (based on work by [Hardt, Price, Srebro](https://arxiv.org/pdf/1610.02413.pdf)) takes a fairness-unaware model and disparity constraints (such as EO) in the constructor and features (X), labels (y), and a sensitive feature (sensitive_features) in the fit method. It subsequently uses the model to make predictions for all samples in X. Note that these predictions could be 0/1 (as in this example), or more categories, or even real valued scores.\n",
"In this case, this looks as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"scores = unconstrained_predictor_wrapper.predict(X_train)\n",
"scores"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Finding threshold rules\n",
"\n",
"The algorithm then tries to find all thresholding rules with which it could divide the data. Any score for which the thresholding rule evaluates to true is predicted to be 1. It does that for each group, so in this case for each race. Depending on the unconstrained predictor's scores there can be many thresholding rules, one between each pair of consecutive score values. For each rule we empirically evaluate the following two probabilities:\n",
"\n",
"$$\n",
"P[\\hat{Y} = 1 | Y = 0] \\text{ which is labeled x below to indicate that it'll be plotted on the x-axis}\\\\\n",
"P[\\hat{Y} = 1 | Y = 1] \\text{ which is labeled y below to indicate that it'll be plotted on the y-axis}\n",
"$$\n",
"\n",
"The former is the false positive rate (of that group), while the latter is the true positive rate (of that group). In this example, the threshold rules would be the ones shown below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fairlearn.postprocessing._threshold_optimizer import _reformat_and_group_data\n",
"data_grouped_by_sensitive_feature = _reformat_and_group_data(sensitive_features_train, y_train.astype(int), scores)\n",
"data_grouped_by_sensitive_feature.describe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fairlearn.postprocessing._roc_curve_utilities import _calculate_roc_points\n",
"\n",
"roc_points = {}\n",
"for group_name, group in data_grouped_by_sensitive_feature:\n",
" roc_points[group_name] = _calculate_roc_points(data_grouped_by_sensitive_feature.get_group(group_name), 0)\n",
"print(\"Thresholding rules:\")\n",
"roc_points"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The base points with (x,y) as (0,0) and (1,1) always exist, because that essentially just means that we're predicting everything as 0 or everything as 1 regardless of the scores from the fairness-unaware model. Let's look at both cases:\n",
"- x=0, y=0, threshold rule \">inf\": more than infinity is impossible, which means every sample is predicted as 0. That means $P[\\hat{Y} = 1 | Y = 0] = 0$ (represented as x) because the predictions $\\hat{Y}$ are never 1, and similarly $P[\\hat{Y} = 1 | Y = 1] = 0$ (represented as y).\n",
"- x=1, y=1, threshold rule \">-inf\": more than negative infinity is always true, which means every sample is predicted as 1. That means $P[\\hat{Y} = 1 | Y = 0] = 1$ (represented as x) because the predictions $\\hat{Y}$ are always 1, and similarly $P[\\hat{Y} = 1 | Y = 1] = 1$ (represented as y).\n",
"\n",
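"We can check the two base points empirically. On some hypothetical scores and labels, thresholds of $\\infty$ and $-\\infty$ give exactly (0,0) and (1,1):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# hypothetical scores and labels, just to sanity-check the base points\n",
"scores_toy = np.array([0.2, 0.6, 0.4, 0.9])\n",
"labels_toy = np.array([0, 1, 0, 1])\n",
"for threshold in [np.inf, -np.inf]:\n",
"    y_hat = (scores_toy > threshold).astype(int)\n",
"    x = y_hat[labels_toy == 0].mean()  # P[Y_hat = 1 | Y = 0]\n",
"    y = y_hat[labels_toy == 1].mean()  # P[Y_hat = 1 | Y = 1]\n",
"    print(\"> {}: (x={}, y={})\".format(threshold, x, y))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [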
"The more interesting logic happens in between. The x and y values were calculated as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"n_group_0 = {}\n",
"n_group_1 = {}\n",
"for group_name, group in data_grouped_by_sensitive_feature:\n",
" print(\"{}:\".format(group_name))\n",
" n_group_1[group_name] = sum(group[\"label\"])\n",
" n_group_0[group_name] = len(group) - n_group_1[group_name]\n",
" \n",
" print(\" number of samples with label 1: {}\".format(n_group_1[group_name]))\n",
" print(\" number of samples with label 0: {}\".format(n_group_0[group_name]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"threshold = 0.5\n",
"for group_name, group in data_grouped_by_sensitive_feature:\n",
" x_group_0_5 = sum((group[\"score\"] > threshold) & (group[\"label\"] == 0)) / n_group_0[group_name]\n",
" y_group_0_5 = sum((group[\"score\"] > threshold) & (group[\"label\"] == 1)) / n_group_1[group_name]\n",
" print(\"{}:\".format(group_name))\n",
" print(\" P[Ŷ = 1 | Y = 0] = {}\".format(x_group_0_5))\n",
" print(\" P[Ŷ = 1 | Y = 1] = {}\".format(y_group_0_5))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that it never makes sense to have $x>y$ because in that case you're better off flipping labels, i.e. completely turning around the meaning of the scores. The method automatically does that unless specified otherwise.\n",
"\n",
"## Interpolated Predictions and Probabilistic Classifiers\n",
"\n",
"This way you end up with a set of points above the diagonal line connecting (0,0) and (1,1). We calculate the convex hull based on that, because we can reach any point in between two known thresholding points by interpolation. An interpolated point can be written as $p_0 (x_0, y_0) + p_1 (x_1, y_1)$ with $p_0 + p_1 = 1$. For the post processing algorithm this means that we use the rule defined by $(x_0, y_0, \\text{operation}_0)$ with probability $p_0$, and the rule defined by $(x_1, y_1, \\text{operation}_1)$ with probability $p_1$, thus resulting in a probabilistic classifier. Depending on the data, certain fairness objectives can only be accomplished with probabilistic classifiers. However, not every use case lends itself to probabilistic classifiers, since they can classify two people with identical features differently.\n",
"\n",
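"A minimal sketch of such a probabilistic interpolated classifier, independent of fairlearn's internal implementation (the thresholds and weights below are made up for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(42)\n",
"\n",
"def interpolated_predict(scores, t0, t1, p0):\n",
"    # per sample: apply rule \"score > t0\" with probability p0, else \"score > t1\"\n",
"    use_rule_0 = rng.random(len(scores)) < p0\n",
"    return np.where(use_rule_0, scores > t0, scores > t1).astype(int)\n",
"\n",
"# hypothetical scores: 0.30 lies below and 0.80 above both thresholds, so those\n",
"# predictions are deterministic, while scores in between are predicted randomly\n",
"scores_toy = np.array([0.30, 0.45, 0.52, 0.80])\n",
"interpolated_predict(scores_toy, t0=0.542, t1=0.508, p0=0.19)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [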
"## Finding the Equalized Odds solution\n",
"\n",
"From all the ROC points (see below) we need to extract the convex hull for both groups/curves."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"for group_name, group in data_grouped_by_sensitive_feature:\n",
" plt.plot(roc_points[group_name].x, roc_points[group_name].y)\n",
"plt.xlabel(\"$P\\ [\\ \\\\hat{Y}=1\\ |\\ Y=0\\ ]$\")\n",
"plt.ylabel(\"$P\\ [\\ \\\\hat{Y}=1\\ |\\ Y=1\\ ]$\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the Equalized Odds case, we need to find a point where the presented probabilities (false positive rate, true positive rate, and thereby also true negative rate and false negative rate) for the corresponding groups match while minimizing the error, which is equivalent to finding the minimum error overlap of the convex hulls of the ROC curves. The error in the chart is smallest in the top left corner. This is done as part of the `fit` step above, and we'll repeat it here for completeness. The yellow area is the overlap between the areas under the curve that are reachable with interpolation for both groups. Of course, this works for more than two groups as well. The result is that we have interpolated solutions for each group, i.e. every prediction is calculated as the weighted result of two threshold rules.\n",
"\n",
"In this particular case the Caucasian curve is always below or matching the African-American curve. That means that the area under the Caucasian curve is identical to the overlap. That does not always happen, though, and overlaps can be fairly complex, with multiple intersecting curves defining their outline.\n",
"\n",
"We can actually even look up the specific interpolations and interpret the results. Keep in mind that these interpolations come up with a floating point number between 0 and 1, and represent the probability of getting 0 or 1 in the predicted outcome.\n",
"\n",
"The result for African-Americans is a combination of two thresholding rules. The first rule checks whether the score is above 0.542, the other whether it is above 0.508. The first rule has a weight of 0.19, which means it determines 19% of the resulting probability. The second rule determines the remaining 81%. In the chart the Caucasian curve is below the African-American curve at the EO solution. This means that there is a slight adjustment according to the formula presented below.\n",
"\n",
"The Caucasian rules have somewhat lower thresholds: The first rule's threshold is >0.421 and it is basically the deciding factor with a weight of 99.3%, while the second rule's threshold is >0.404.\n",
"\n",
"Overall, this means that the postprocessing algorithm learned to get probabilities consistent with Equalized Odds and minimal error by setting lower thresholds for Caucasians than for African-Americans. The resulting probability from the formula below is then the probability to get label 1 from the probabilistic classifier.\n",
"\n",
"Note that this does not necessarily mean it's fair. It simply enforced the constraints we asked it to enforce, as described by Equalized Odds. The societal context plays a crucial role in determining whether this is fair.\n",
"\n",
"The parameters `p_ignore` and `prediction_constant` are irrelevant for cases where the curves intersect in the minimum error point. When that doesn't happen, and the minimum error point is only part of one curve, then the interpolation is adjusted as follows\n",
"```\n",
"p_ignore * prediction_constant + (1 - p_ignore) * (p0 * operation0(score) + p1 * operation1(score))\n",
"```\n",
"The adjustment should happen to the higher one of the curves and essentially brings it closer to the diagonal as represented by `prediction_constant`. In this case this is not required since the curves intersect, but we are actually slightly inaccurate because we only determine the minimum error point on a grid of x values, instead of calculating the intersection point analytically. By choosing a large `grid_size` this can be alleviated."
]
},
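{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the adjustment formula concrete, the following cell evaluates it for made-up values; `p_ignore`, `prediction_constant`, the thresholds, and the weights below are purely illustrative, not the fitted values:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# all values below are hypothetical, chosen only to illustrate the formula\n",
"p_ignore = 0.1\n",
"prediction_constant = 0.5\n",
"p0, p1 = 0.19, 0.81  # interpolation weights\n",
"t0, t1 = 0.542, 0.508  # thresholds of the two rules\n",
"\n",
"def adjusted_probability(score):\n",
"    # operation0/operation1 are the threshold rules \"score > t\"\n",
"    interpolated = p0 * (score > t0) + p1 * (score > t1)\n",
"    return p_ignore * prediction_constant + (1 - p_ignore) * interpolated\n",
"\n",
"for score in [0.4, 0.52, 0.6]:\n",
"    print(score, adjusted_probability(score))"
]
},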
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"postprocessed_predictor_EO._plot = True\n",
"postprocessed_predictor_EO.fit(X_train, y_train, sensitive_features=sensitive_features_train)\n",
"\n",
"for group, interpolation in postprocessed_predictor_EO._post_processed_predictor_by_sensitive_feature.items():\n",
" print(\"{}:\".format(group))\n",
" print(\"\\n \".join(interpolation.__repr__().split(',')))\n",
" print(\"-----------------------------------\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
|