---
license: cc-by-sa-4.0
size_categories:
- 10M<n<100M
---
# Dataset Documentation
## Private Bidding Optimisation
The advertising industry lacks a common benchmark to assess the privacy
/ utility trade-off in private advertising systems. To fill this gap, we
are open-sourcing CriteoPrivateAd, the largest real-world anonymised
bidding dataset in terms of number of features. This dataset enables
engineers and researchers to:
- assess the impact of removing cross-domain user signals,
highlighting the effects of third-party cookie deprecation;
- design and test private bidding optimisation approaches using
contextual signals and user features;
- evaluate the relevance of answers provided by aggregation APIs for
bidding model learning.
## Summary
This dataset is released by Criteo to foster research and industrial
innovation on privacy-preserving machine learning applied to a major
advertising use-case, namely bid optimisation under user signal loss /
obfuscation.
This use-case is inspired by challenges both browser vendors and AdTech
companies are facing due to third-party cookie deprecation, such as
ensuring a viable cookie-less advertising business via a pragmatic
performance / privacy trade-off. In particular, we expect offline
benchmarks based on this dataset to drive improvements to the Google Chrome
Privacy Sandbox and Microsoft Ad Selection APIs.
The dataset contains an anonymised log aiming to mimic production
performance of AdTech bidding engines, so that offline results based on
this dataset could be taken as ground truth to improve online
advertising performance under privacy constraints. Features are grouped
into several buckets depending on their nature, envisioned privacy
constraints and availability at inference time.
Based on this dataset, the intended objective is to implement privacy
constraints (e.g. by aggregating labels or by adding differential
privacy to features and/or labels) and then learn click and conversion
(e.g. sales) prediction models.
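For illustration, here is a minimal sketch of one such constraint: aggregating click labels over a (campaign, publisher) key and releasing the counts with Laplace noise. The column names `campaign_id`, `publisher_id` and `is_click` are placeholders for this sketch, not necessarily the exact names used in the dataset.

```python
import numpy as np
import pandas as pd

def dp_click_counts(df: pd.DataFrame, epsilon: float = 1.0, seed: int = 0) -> pd.DataFrame:
    """Release per-(campaign, publisher) click counts with epsilon-DP Laplace noise.

    Adding or removing one display changes a single count by at most 1, so
    Laplace noise with scale 1/epsilon yields a display-level epsilon-DP release.
    Releasing display counts as well would consume additional privacy budget.
    """
    counts = (
        df.groupby(["campaign_id", "publisher_id"], as_index=False)
          .agg(clicks=("is_click", "sum"))
    )
    rng = np.random.default_rng(seed)
    counts["noisy_clicks"] = counts["clicks"] + rng.laplace(scale=1.0 / epsilon, size=len(counts))
    return counts.drop(columns="clicks")
```

A click model can then be fitted on such noisy aggregates rather than on raw per-display labels.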
The associated paper is available [here](https://arxiv.org/abs/2502.12103).
As a leading AdTech company that drives commerce outcomes for media
owners and marketers, Criteo is committed to evaluating proposals that
might affect the way we will perform attribution, reporting and campaign
optimisation in the future. Criteo has already participated in testing
and providing feedback on browser proposals such as the Privacy Sandbox
one; see all our [Medium articles](https://techblog.criteo.com). Back in 2021, we also
organised a public challenge aiming to assess bidding performance when
learning on aggregated data: our learnings are available [here](https://arxiv.org/abs/2201.13123).
## Dataset Description
This dataset represents a 100M anonymised sample of 30 days of Criteo
live data retrieved from third-party cookie traffic on Chrome. Each line corresponds to one impression (a banner)
that was displayed to a user. For each impression, we are providing:
- campaign x publisher x (user x day) granularity with the respective ids, to match Chrome Privacy Sandbox scenarios and both
display-level and user-level privacy.
- 4 labels:
    - click;
    - click leading to a landing on an advertiser website;
    - click leading to a visit on an advertiser website, i.e. a landing
      followed by one advertiser event;
    - number of sales attributed to the clicked display.
- more than 100 features grouped into 5 buckets with respect to their
logging and inference constraints in the Protected Audience API from
Chrome Privacy Sandbox (note that these buckets are generic enough
to cover other private advertising frameworks, as we mainly
provide a split between ad campaign features, single-domain &
cross-domain user features, and contextual features):
    - Features available in the key-value server with a 12-bit logging
      constraint (i.e. derived from the current version of modelingSignals,
      standing for single-domain user features).
    - Features available in the key-value server with no logging
      constraint (i.e. derived from the Interest Group name / renderURL).
    - Features available in the browser with a 12-bit constraint
      (i.e. cross-domain features available in generateBid).
    - Features from the contextual call with no logging constraint
      (i.e. contextual features).
    - Features not available (i.e. cross-device and cross-domain
      ones).
- `day_int` enabling (1) splitting the log into training, validation
and testing sets; (2) performing relevant model seeding.
- Information about the conversion delay, to simulate the way Privacy Sandbox APIs work.
- `time_between_request_timestamp_and_post_display_event` (actual column
name): time delta (in minutes) between the request timestamp and the
click or sale event. All displays are considered to start at 00:00 on the
day of the event, so that complete timelines are not disclosed (a usage
sketch is given right after this list).
- We include a display order from 1 to K for displays shown on the same day
to the same user.
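As announced in the conversion-delay item above, here is a minimal sketch of how the delay column could be used to discard conversions that would arrive after a fixed reporting window; the 7-day window and the `nb_sales` column name are assumptions made only for this example.

```python
import pandas as pd

def censor_late_sales(df: pd.DataFrame, max_delay_minutes: int = 7 * 24 * 60) -> pd.Series:
    """Keep only sales whose post-display event falls within the chosen reporting window."""
    delay = df["time_between_request_timestamp_and_post_display_event"]
    on_time = delay.notna() & (delay <= max_delay_minutes)
    # `nb_sales` is a placeholder for the sales-count label; see the companion
    # paper for the exact column names.
    return df["nb_sales"].where(on_time, other=0)
```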
CriteoPrivateAd is split into 30 Parquet partitions (one per day, from 1 to 30), stored in `day_int={i}` directories.
The displays-per-user histogram can be derived from `event_per_user_contribution.csv`; it is useful for building importance sampling ratios and for user-level differential privacy.
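A minimal loading sketch, assuming the dataset has been downloaded locally under a `CriteoPrivateAd/` directory that preserves the `day_int={i}` layout described above; the train/validation split mirrors the baseline setup below.

```python
import pandas as pd

def load_days(root: str, days: range) -> pd.DataFrame:
    """Concatenate the daily Parquet partitions for the requested days."""
    frames = [pd.read_parquet(f"{root}/day_int={d}") for d in days]
    return pd.concat(frames, ignore_index=True)

# Days 1-25 for training and days 26-30 for validation, as in the baselines below.
train = load_days("CriteoPrivateAd", range(1, 26))
valid = load_days("CriteoPrivateAd", range(26, 31))
```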
A precise description of the dataset and of each column is available in [the
companion paper](https://arxiv.org/abs/2502.12103).
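The 12-bit logging constraint mentioned in the feature buckets above can also be emulated offline when designing new feature encodings. A minimal sketch, assuming a simple hashing scheme (not the one used to build the dataset):

```python
import hashlib

def to_12_bit_signal(feature_values: list[str]) -> int:
    """Hash a set of single-domain user features into a 12-bit modelingSignals-like value.

    This is only an illustrative scheme: any deterministic mapping onto
    4096 buckets would satisfy the same logging constraint.
    """
    key = "|".join(feature_values).encode("utf-8")
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:2], "big") & 0xFFF  # keep the 12 least significant bits
```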
## Metrics
The metrics best suited to the click and conversion estimation problems
are:
- the log-likelihood (LLH), and preferably a rescaled version named LLH-CompVN,
defined as the relative log-likelihood uplift compared to the naive model
always predicting the average label of the training dataset;
- calibration, defined as the ratio between the sum of the predictions
and the sum of the validation labels; it must be close to 1 for a
bidding application.
We would like to point out that conventional classification measures
such as area under the curve (AUC) are less relevant for comparing
auction models.
The click-through rate in this dataset is higher than the one encountered in
real-world advertising systems on the open internet. To design realistic
bidding applications, one must therefore use a weighted loss for validation. We
refer interested readers to the [associated companion paper](https://arxiv.org/abs/2502.12103) for more details.
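As an illustration, here is a minimal sketch of these two metrics, assuming binary labels `y`, predictions `p` and optional validation weights `w`; the exact definitions and the recommended weighting are given in the companion paper.

```python
import numpy as np

def llh_compvn(y, p, y_train_mean, w=None, eps=1e-12):
    """Relative log-likelihood uplift versus the naive model predicting the training average."""
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    llh_model = np.average(y * np.log(p) + (1 - y) * np.log(1 - p), weights=w)
    llh_naive = np.average(
        y * np.log(y_train_mean) + (1 - y) * np.log(1 - y_train_mean), weights=w
    )
    return (llh_naive - llh_model) / llh_naive  # positive when the model beats the naive one

def calibration(y, p, w=None):
    """Ratio of the (weighted) sum of predictions to the (weighted) sum of labels."""
    y = np.asarray(y, dtype=float)
    p = np.asarray(p, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    return float(np.sum(w * p) / np.sum(w * y))
```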
## Baselines
The training period has been fixed to days 1-25 and the validation period to days 26-30. The chosen loss is the LLH-CompVN with the weighting defined above. The Sales | Display prediction is the product of the Landed Click | Display and Sales | Landed Click models.
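In equation form, this chaining reads:

$$
\hat{p}(\text{sale} \mid \text{display}) = \hat{p}(\text{landed click} \mid \text{display}) \times \hat{p}(\text{sale} \mid \text{landed click})
$$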
| Task/CTR | 0.1% | 0.5% | 1% |
|-------------------------|-------|-------|-------|
| Landed Click \| Display | 0.170 | 0.186 | 0.234 |
| Sales \| Landed Click | 0.218 | 0.218 | 0.218 |
| Sales \| Display | 0.171 | 0.187 | 0.237 |
Note that our baseline results might be difficult to achieve because of the anonymisation of the dataset.
## License
The data is released under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license. You are free to
Share and Adapt this data provided that you respect the Attribution and
ShareAlike conditions. Please read the full license carefully before
using the data.
## Citation
If you use this dataset in your research, please cite it with the
following BibTeX entry:
@misc{sebbar2025criteoprivateadrealworldbiddingdataset,
title={CriteoPrivateAd: A Real-World Bidding Dataset to Design Private Advertising Systems},
author={Mehdi Sebbar and Corentin Odic and Mathieu Léchine and Aloïs Bissuel and Nicolas Chrysanthos and Anthony D'Amato and Alexandre Gilotte and Fabian Höring and Sarah Nogueira and Maxime Vono},
year={2025},
eprint={2502.12103},
archivePrefix={arXiv},
primaryClass={cs.CR},
url={https://arxiv.org/abs/2502.12103},
}
## Acknowledgment
We would like to thank:
- Google Chrome Privacy Sandbox team, especially Charlie Harrison,
for their feedback on the usefulness of this dataset.
- W3C PATCG group, notably for their public data requests to foster
work on the future of attribution and reporting.
- Criteo stakeholders who took part in this dataset release: Anthony
D'Amato, Mathieu Léchine, Mehdi Sebbar, Corentin Odic, Maxime Vono,
Camille Jandot, Fatma Moalla, Nicolas Chrysanthos, Romain Lerallut,
Alexandre Gilotte, Aloïs Bissuel, Lionel Basdevant, Henry Jantet.