Update README.md

README.md

See also [Criteo pledge for Fairness in Advertising](https://fr.linkedin.com/posts/diarmuid-gill_advertisingfairness-activity-6945003669964660736-_7Mu).

The dataset is intended to learn click-prediction models and to evaluate by how much their predictions are biased between different gender groups.
The associated paper is available at [Vladimirova et al. 2024](https://arxiv.org/pdf/2407.03059).
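
As a toy illustration of what a gap between gender groups can look like, the snippet below computes a demographic-parity-style difference from model predictions. The column names `gender_proxy` and `click_pred` are placeholders rather than the dataset's actual schema; see the Metrics section below for the quantities actually used.

```python
import numpy as np
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "gender_proxy",
                           pred_col: str = "click_pred") -> float:
    """Absolute difference in average predicted click rate between the two
    proxy groups. Column names are placeholders, not the real schema."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(abs(rates.iloc[0] - rates.iloc[1]))

# Toy usage with random predictions (not real FairJob data):
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "gender_proxy": rng.integers(0, 2, size=1_000),
    "click_pred": rng.random(1_000),
})
print(f"demographic parity gap: {demographic_parity_gap(toy):.4f}")
```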

## License

### Limitations and interpretations

We remark that the proposed gender proxy does not give a definition of gender.
Since we do not have access to the sensitive information, this is the best solution we have identified at this stage to identify bias on pseudonymised data, and we encourage any discussion on better approximations.
This proxy is reported as binary for simplicity, yet we acknowledge that gender is not necessarily binary. Although our research focuses on gender, this should not diminish the importance of investigating other types of algorithmic discrimination.
While this dataset provides an important application of fairness-aware algorithms in a high-risk domain, there are several fundamental limitations that cannot be addressed easily through data collection or curation processes.
These limitations include historical bias that affects a positive outcome for a given user, as well as the impossibility of verifying how close the gender proxy is to the real gender value.
Additionally, there might be bias due to market unfairness.
Such limitations and possible ethical concerns about the task should be taken into account when drawing conclusions from research using this dataset.
Readers should not interpret summary statistics of this dataset as ground truth but rather as characteristics of the dataset only.

## Challenges

The first challenge comes from handling the different types of data that are common in tables, the mixed-type columns: there are both numerical and categorical features that have to be embedded [Gorishniy et al., 2021, 2022; Grinsztajn et al., 2022; Shwartz-Ziv and Armon, 2022; Matteucci et al., 2023].
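
As a rough sketch of one way to handle such mixed-type columns (this is not the dataset's reference code; column cardinalities, dimensions and batch shapes below are invented for illustration):

```python
import torch
import torch.nn as nn

class TabularClickModel(nn.Module):
    """Embeds integer-coded categorical columns and concatenates them with
    numerical columns before a linear (logistic-regression-style) head.
    Cardinalities and sizes here are placeholders, not dataset values."""

    def __init__(self, cat_cardinalities: list[int], num_numerical: int, emb_dim: int = 16):
        super().__init__()
        self.embeddings = nn.ModuleList(
            nn.Embedding(card, emb_dim) for card in cat_cardinalities
        )
        self.head = nn.Linear(emb_dim * len(cat_cardinalities) + num_numerical, 1)

    def forward(self, x_cat: torch.Tensor, x_num: torch.Tensor) -> torch.Tensor:
        # x_cat: (batch, n_cat) integer codes; x_num: (batch, n_num) floats
        embedded = [emb(x_cat[:, i]) for i, emb in enumerate(self.embeddings)]
        return self.head(torch.cat(embedded + [x_num], dim=1)).squeeze(-1)

# Toy forward pass with made-up cardinalities and a small batch:
model = TabularClickModel(cat_cardinalities=[1000, 50, 12], num_numerical=5)
logits = model(torch.randint(0, 12, (8, 3)), torch.randn(8, 5))
```

In practice the embedding dimension and the head would be chosen against the real column cardinalities.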
In addition, some of the features exhibit a long-tail phenomenon and the products have a popularity bias. Our dataset contains more than 1,000,000 lines, while current high-performing models are under-explored at this scale: e.g. the largest datasets in Grinsztajn et al. [2022] are only 50,000 lines, and in Gorishniy et al. [2021, 2022] only one dataset surpasses 1,000,000 lines.
An additional challenge comes from strongly imbalanced data: the positive class proportion in our data is less than 0.007, which leads to challenges in training robust and fair machine learning models [Jesus et al., 2022; Yang et al., 2024]. In our dataset there are no significant imbalances between demographic groups with respect to the protected attribute (both genders are sub-sampled with 0.5 proportion; female-profile users were shown fewer job ads, with 0.4 proportion, and slightly fewer senior-position jobs, with 0.48 proportion); however, there could be a hidden effect of selection bias.
This poses a problem in accurately assessing model performance [van Breugel et al., 2024].
More detailed statistics and exploratory analysis can be found in the supplemental material of the associated paper linked below.
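
Given the roughly 0.7% positive rate mentioned above, one common mitigation (by no means the only one) is to upweight the positive class in the loss. A minimal sketch with PyTorch's `BCEWithLogitsLoss`, assuming 0/1 click labels:

```python
import torch
import torch.nn as nn

# Toy 0/1 click labels with ~0.7% positives (a stand-in for the training split).
labels = torch.bernoulli(torch.full((100_000,), 0.007))

# Upweight positives by the negative/positive ratio so both classes contribute
# comparably to the loss; one simple option among many for heavy imbalance.
n_pos = labels.sum().clamp(min=1.0)
n_neg = labels.numel() - n_pos
criterion = nn.BCEWithLogitsLoss(pos_weight=n_neg / n_pos)

# In a training loop: loss = criterion(logits_batch, labels_batch)
```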

## Metrics

This implements:

- a logistic regression with embeddings for categorical features (largely unfair and useful)
- a "fair" logistic regression (relatively fair and useful)

The "fair" logistic regression is based on the method proposed by [Bechavod et al. 2017](https://arxiv.org/abs/1707.00044).
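
As a hedged sketch of the penalized-logistic-regression idea (not the actual baseline code), the example below adds a penalty on differences in group-conditional mean scores among negatives and among positives, a smooth relaxation of FPR/FNR gaps in the spirit of Bechavod et al. 2017, using toy tensors `X`, `y` and a binary proxy `g`:

```python
import torch
import torch.nn as nn

def fairness_penalty(scores: torch.Tensor, y: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
    """Squared gaps between group-wise mean scores, computed separately on
    negatives and positives: a smooth relaxation of FPR/FNR differences in
    the spirit of Bechavod et al. 2017 (details differ from the paper)."""
    penalty = scores.new_zeros(())
    for label in (0.0, 1.0):
        in_a = (y == label) & (g == 0)
        in_b = (y == label) & (g == 1)
        if in_a.any() and in_b.any():
            penalty = penalty + (scores[in_a].mean() - scores[in_b].mean()) ** 2
    return penalty

# Toy data: features X, click labels y, binary gender proxy g.
X = torch.randn(256, 20)
y = torch.randint(0, 2, (256,)).float()
g = torch.randint(0, 2, (256,))

model = nn.Linear(20, 1)                      # plain logistic regression
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                     # fairness/utility trade-off

for _ in range(200):
    optimizer.zero_grad()
    logits = model(X).squeeze(-1)
    loss = bce(logits, y) + lam * fairness_penalty(torch.sigmoid(logits), y, g)
    loss.backward()
    optimizer.step()
```

Increasing `lam` trades prediction quality for smaller group gaps; the actual baseline may use a different penalty or solver.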

## Citation

If you use the dataset in your research please cite it using the following BibTeX excerpt:

```
@misc{vladimirova2024fairjob,
      title={FairJob: A Real-World Dataset for Fairness in Online Systems},
      author={Mariia Vladimirova and Federico Pavone and Eustache Diemert},
      year={2024},
      eprint={2407.03059},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2407.03059},
}
```