---
license: mit
language:
- en
pretty_name: Shopping Queries Image Dataset (SQID 🦑)
size_categories:
- 100K<n<1M
---

# Shopping Queries Image Dataset (SQID 🦑): An Image-Enriched ESCI Dataset for Exploring Multimodal Learning in Product Search

## Introduction

The Shopping Queries Image Dataset (SQID) provides image information for over 190,000 products. It is an augmented version of the [Amazon Shopping Queries Dataset](https://github.com/amazon-science/esci-data), which contains a large number of product search queries from real Amazon users, each paired with a list of up to 40 potentially relevant products and judgments of how relevant each product is to the query.

The image-enriched SQID dataset supports research on improving product search by leveraging image information. Researchers can use it to train multimodal machine learning models that account for both textual and visual information when ranking products for a given search query.

## Dataset

This dataset extends the Shopping Queries Dataset (SQD) with image information and visual embeddings for each product, as well as text embeddings for the associated queries, which can be used to benchmark baseline product-ranking methods.

### Product Sampling

We limited this dataset to the subset of the SQD where `small_version` is 1 (the reduced version of the dataset for Task 1) and `product_locale` is 'us'.

Products were further sampled to meet **either** of the following criteria:

| Criterion | Description | Unique `product_id`s |
|---|---|---|
| `split` = 'test' | Products from the SQD Task 1 test set | 164,900 |
| `product_id` appears in at least 2 query judgments | Products referenced multiple times in the Task 1 set, not counting those in the 'test' split | 27,139 |

Hence, this dataset includes 192,039 unique `product_id`s in total.
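
As an illustration, the two sampling criteria above can be sketched with pandas against the ESCI examples table. The column names (`small_version`, `product_locale`, `split`, `product_id`) come from the Shopping Queries Dataset; the toy rows below are invented for demonstration only:

```python
import pandas as pd

# Toy stand-in for the ESCI examples table (real column names from the
# amazon-science/esci-data repo; the rows here are illustrative only).
examples = pd.DataFrame({
    "product_id": ["A", "B", "B", "C", "C", "D"],
    "product_locale": ["us"] * 6,
    "small_version": [1] * 6,
    "split": ["test", "train", "train", "train", "train", "train"],
})

# Restrict to the reduced Task 1 set (`small_version` == 1), US locale.
task1_us = examples[(examples["small_version"] == 1)
                    & (examples["product_locale"] == "us")]

# Criterion 1: every product in the Task 1 test split.
test_ids = set(task1_us.loc[task1_us["split"] == "test", "product_id"])

# Criterion 2: products outside the test split appearing in >= 2 judgments.
rest = task1_us[task1_us["split"] != "test"]
counts = rest["product_id"].value_counts()
multi_ids = set(counts[counts >= 2].index)

# Union of the two criteria gives the sampled product set.
sampled_ids = test_ids | multi_ids
```

On the toy rows, `A` qualifies via the test split, `B` and `C` via repeated judgments, and `D` (a single non-test judgment) is excluded.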

## Image URL Scraping

We scraped 182,774 `image_url`s (95% of the 192,039 `product_id`s) from the Amazon website. Products lacking an `image_url` either failed to return a valid product page (usually because Amazon no longer sells the product) or displayed a default "No image available" image.

Note: 446 product `image_url`s point to a default digital-video image, `'https://m.media-amazon.com/images/G/01/digital/video/web/Default_Background_Art_LTR._SX1080_FMjpg_.jpg'`, implying that no product-specific image exists.
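
When working with the scraped URLs, rows pointing at this default image can be filtered out. A minimal sketch, assuming `product_image_urls.csv` has `product_id` and `image_url` columns (the toy rows below are invented):

```python
import pandas as pd

# URL of the default digital-video image noted above.
DEFAULT_IMG = ("https://m.media-amazon.com/images/G/01/digital/video/web/"
               "Default_Background_Art_LTR._SX1080_FMjpg_.jpg")

# Toy stand-in for product_image_urls.csv (columns assumed: product_id, image_url).
urls = pd.DataFrame({
    "product_id": ["p1", "p2", "p3"],
    "image_url": [
        "https://m.media-amazon.com/images/I/example1.jpg",
        DEFAULT_IMG,
        "https://m.media-amazon.com/images/I/example2.jpg",
    ],
})

# Keep only rows with a product-specific image.
with_images = urls[urls["image_url"] != DEFAULT_IMG]
```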

### Image Embeddings

We extracted image embeddings for each image using the [OpenAI CLIP model from HuggingFace](https://huggingface.co/openai/clip-vit-large-patch14), specifically `clip-vit-large-patch14`, with all default settings.
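
For reference, a minimal sketch of such an extraction with the HuggingFace `transformers` library; a blank dummy image stands in for a downloaded product image, and this is not the exact script used to build the dataset:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_name = "openai/clip-vit-large-patch14"
model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(model_name)

# Dummy image standing in for an image downloaded from an image_url.
image = Image.new("RGB", (224, 224))

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    # Projected CLIP image features; shape (1, 768) for ViT-L/14.
    embedding = model.get_image_features(**inputs)
```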

### Query Embeddings

For each query in the SQD test set, we extracted text embeddings using the same CLIP model. These are useful for benchmarking a baseline product search method in which text and images share the same embedding space.
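
Such a baseline can be sketched as follows: embed the query text with the same CLIP checkpoint and rank products by cosine similarity to their image embeddings. The random tensor below is a placeholder for real embeddings from `products_features.parquet`, and the query string is invented:

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

model_name = "openai/clip-vit-large-patch14"
model = CLIPModel.from_pretrained(model_name)
tokenizer = CLIPTokenizer.from_pretrained(model_name)

inputs = tokenizer(["wireless headphones"], padding=True, return_tensors="pt")
with torch.no_grad():
    # Projected CLIP text features live in the same space as image features.
    query_emb = model.get_text_features(**inputs)

# Placeholder for real product image embeddings loaded from the dataset.
image_embs = torch.randn(5, query_emb.shape[1])

# Rank the 5 products by cosine similarity to the query.
sims = torch.nn.functional.cosine_similarity(query_emb, image_embs)
ranking = sims.argsort(descending=True)
```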

## Files

The `sqid` directory contains 3 files:
- `product_image_urls.csv`
  - This file contains the image URLs for all `product_id`s in the dataset
- `products_features.parquet`
  - This file contains the CLIP image embedding features for all `product_id`s in the dataset
- `queries_features.parquet`
  - This file contains the CLIP text embedding features for all queries in the dataset

## Citation

To use this dataset, please cite the following paper:
<pre>
Shopping Queries Image Dataset (SQID): An Image-Enriched ESCI Dataset for Exploring Multimodal Learning in Product Search, M. Al Ghossein, C.W. Chen, J. Tang
</pre>

## License

This dataset is released under the MIT License.

## Acknowledgments

SQID was developed at [Crossing Minds](https://www.crossingminds.com) by:
- [Marie Al Ghossein](https://www.linkedin.com/in/mariealghossein/)
- [Ching-Wei Chen](https://www.linkedin.com/in/cweichen)
- [Jason Tang](https://www.linkedin.com/in/jasonjytang/)

This dataset would not have been possible without the amazing [Shopping Queries Dataset by Amazon](https://github.com/amazon-science/esci-data).