Datasets · Formats: parquet · Languages: English · Libraries: Datasets, pandas
cwchen-cm committed · verified · Commit 9444560 · 1 parent: 221f0d6

Update README.md

Files changed (1): README.md (+45 -17)
README.md CHANGED
@@ -81,39 +81,67 @@ The image-enriched SQID dataset can be used to support research on improving pro
  This dataset extends the Shopping Queries Dataset (SQD) by including image information and visual embeddings for each product, as well as text embeddings for the associated queries which can be used for baseline product ranking benchmarking.

  ### Product Sampling
- We limited this dataset to the subset of the SQD where `small_version` is 1 (the reduced version of the dataset for Task 1) and `product_locale` is 'us'.
-
- Products were further sampled to meet **either** of the following criteria:
-
- | Criteria | Description | Unique `product_id`'s |
- |---|---|---|
- | `split` = 'test' | Products from the SQD Task 1 Test set | 164,900 |
- | `product_id` appeared in at least 2 query judgments | Products referenced multiple times in the Task 1 set, not counting those in the 'test' split | 27,139 |
-
- Hence, this dataset includes 192,039 unique `product_id`s in total.

  ## Image URL Scraping:

- We scraped 182,774 (95% of the 192,039 `product_id`'s) `image_url`s from the Amazon website. Products lacking `image_url`s either failed to fetch a valid product page (usually if Amazon no longer sells the product) or displayed a default "No image available" image.

  Note: 446 product `image_url`s are a default digital video image, `'https://m.media-amazon.com/images/G/01/digital/video/web/Default_Background_Art_LTR._SX1080_FMjpg_.jpg'`, implying no product-specific image exists.

  ### Image Embeddings:

  We extracted image embeddings for each of the images using the [OpenAI CLIP model from HuggingFace](https://huggingface.co/openai/clip-vit-large-patch14), specifically clip-vit-large-patch14, with all default settings.

  ### Query Embeddings:

- For each query in the SQD Test Set, we extracted text embeddings using the same CLIP model. These can be useful to benchmark a baseline product search method where both text and images share the same embedding space.

  ## Files
- The `sqid` directory contains 3 files:
- - `product_image_urls.csv`
-   - This file contains the image URLs for all product_id's in the dataset
  - `products_features.parquet`
-   - This file contains the CLIP embedding features for all product_id's in the dataset
  - `queries_features.parquet`
-   - This file contains the CLIP text embedding features for all queries in the dataset

  ## Citation
  To use this dataset, please cite the following paper:
 
  This dataset extends the Shopping Queries Dataset (SQD) by including image information and visual embeddings for each product, as well as text embeddings for the associated queries which can be used for baseline product ranking benchmarking.

  ### Product Sampling
+ We limited this dataset to the subset of the SQD where `small_version` is 1 (the reduced version of the dataset for Task 1), `split` is 'test' (the test set of the dataset), and `product_locale` is 'us'. Hence, this dataset includes 164,900 `product_id`'s.
+
+ As supplementary data, we also provide the other products that appear in at least 2 query judgments in the Task 1 data with `product_locale` 'us' (27,139 products), to further increase the coverage of the data for applications beyond the ESCI benchmark.
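For reference, this subset can be reproduced from the public SQD examples file. The following is a minimal sketch with pandas, assuming the file name and column names of the public `amazon-science/esci-data` release:

```python
import pandas as pd

# SQD examples file from the public ESCI data release (path is an assumption).
examples = pd.read_parquet("shopping_queries_dataset_examples.parquet")

# Same filters as described above: reduced Task 1 version, test split, US locale.
subset = examples[
    (examples["small_version"] == 1)
    & (examples["split"] == "test")
    & (examples["product_locale"] == "us")
]

print(subset["product_id"].nunique())  # should match the 164,900 product_id's above
```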
  ## Image URL Scraping:

+ We scraped `image_url`s for 156,545 products (95% of the 164,900 `product_id`'s) from the Amazon website. Products lacking an `image_url` either failed to return a valid product page (usually because Amazon no longer sells the product) or displayed a default "No image available" image.

  Note: 446 product `image_url`s are a default digital video image, `'https://m.media-amazon.com/images/G/01/digital/video/web/Default_Background_Art_LTR._SX1080_FMjpg_.jpg'`, implying no product-specific image exists.

+ The dataset also includes a supplementary file covering 27,139 more `product_id`'s and `image_url`'s.
+
  ### Image Embeddings:

  We extracted image embeddings for each of the images using the [OpenAI CLIP model from HuggingFace](https://huggingface.co/openai/clip-vit-large-patch14), specifically clip-vit-large-patch14, with all default settings.

  ### Query Embeddings:

+ For each query and each product in the SQD Test Set, we extracted text embeddings using the same CLIP model, based on the query text and the product title. These can be used to benchmark a baseline product search method in which text and images share the same embedding space.
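As an illustration of such a baseline, here is a minimal sketch of ranking products for one query by cosine similarity in the shared embedding space; the arrays below are random placeholders rather than embeddings read from the feature files:

```python
import numpy as np

# Toy ranking sketch: random arrays stand in for real CLIP embeddings.
# clip-vit-large-patch14 projects both text and images to 768 dimensions.
query_emb = np.random.rand(768)
product_embs = np.random.rand(1000, 768)

# L2-normalize so the dot product equals cosine similarity.
query_emb = query_emb / np.linalg.norm(query_emb)
product_embs = product_embs / np.linalg.norm(product_embs, axis=1, keepdims=True)

scores = product_embs @ query_emb
ranking = np.argsort(-scores)  # product indices, most to least similar
```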

  ## Files
+ The `data` directory contains 4 files:
+ - `product_image_urls.parquet`
+   - This file contains the image URLs for all `product_id`'s in the dataset
  - `products_features.parquet`
+   - This file contains the CLIP embedding features for all `product_id`'s in the dataset
  - `queries_features.parquet`
+   - This file contains the CLIP text embedding features for all `query_id`'s in the dataset
+ - `supp_product_image_urls.parquet`
+   - This file contains image URLs for an additional set of products not included in the test set, increasing the coverage of the data
+
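All of the above are parquet files, so they can be loaded directly with pandas. A minimal loading sketch, using the file names from the list above and making no assumption about the column layout beyond what is stated there:

```python
import pandas as pd

# Paths follow the file list above (the `data` directory of this dataset).
product_feats = pd.read_parquet("data/products_features.parquet")
query_feats = pd.read_parquet("data/queries_features.parquet")
image_urls = pd.read_parquet("data/product_image_urls.parquet")

# The column layout is not documented here, so simply inspect the files.
print(product_feats.shape, query_feats.shape, image_urls.shape)
print(product_feats.columns.tolist())
```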
+ ## Code snippets to get CLIP features
+
+ SQID includes embeddings extracted using the [OpenAI CLIP model from HuggingFace](https://huggingface.co/openai/clip-vit-large-patch14) (clip-vit-large-patch14). Below are Python code snippets to extract such embeddings, using either the model from HuggingFace or [Replicate](https://replicate.com/).
+
+ ### Using CLIP model from HuggingFace
+
+ ```python
+ from PIL import Image
+ import requests
+ from transformers import CLIPModel, CLIPProcessor
+
+ model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
+ processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
+
+ # Download a product image and extract its CLIP image features.
+ image = Image.open(requests.get('https://m.media-amazon.com/images/I/71fv4Dv5RaL._AC_SY879_.jpg', stream=True).raw)
+ inputs = processor(images=[image], return_tensors="pt", padding=True)
+ image_embds = model.get_image_features(pixel_values=inputs["pixel_values"])
+ ```
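The query and product-title text embeddings described under Query Embeddings can be obtained with the same model. A minimal sketch, where the example string is an arbitrary placeholder rather than a query from the dataset:

```python
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Any query string or product title can be embedded the same way.
texts = ["wireless noise cancelling headphones"]
inputs = processor(text=texts, return_tensors="pt", padding=True, truncation=True)
text_embds = model.get_text_features(
    input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
)
```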
+
+ ### Using Replicate
+
+ ```python
+ import replicate
+
+ # REPLICATE_API_KEY is a placeholder for your own Replicate API token.
+ client = replicate.Client(api_token=REPLICATE_API_KEY)
+ output = client.run(
+     "andreasjansson/clip-features:71addf5a5e7c400e091f33ef8ae1c40d72a25966897d05ebe36a7edb06a86a2c",
+     input={
+         "inputs": 'https://m.media-amazon.com/images/I/71fv4Dv5RaL._AC_SY879_.jpg'
+     }
+ )
+ ```

  ## Citation
  To use this dataset, please cite the following paper: