convert-to-parquet
Summary
• Switched GAIA to Parquet-based splits because code-based loaders are deprecated in datasets 4.x.
• Kept attachments (PDF/PNG/CSV/…) as files on disk.
• Kept a single file_path (or file_name) column; mixed file types can’t be auto-decoded from one column, so users still open files manually.
Layout
2023/
  validation/
    metadata.parquet
    <attachments...>
  test/
    metadata.parquet
    <attachments...>
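To sanity-check that a local copy of the repository matches this layout, a small stdlib helper can walk the two split folders. This is a sketch: the `check_layout` function and its return convention are illustrative, not part of the dataset tooling.

```python
import os

def check_layout(root, year="2023", splits=("validation", "test")):
    """Verify each split folder exists and holds a metadata.parquet.

    Returns a list of problem descriptions; an empty list means the
    local tree matches the layout described above. (Hypothetical helper.)
    """
    problems = []
    for split in splits:
        split_dir = os.path.join(root, year, split)
        if not os.path.isdir(split_dir):
            problems.append(f"missing directory: {split_dir}")
        elif not os.path.isfile(os.path.join(split_dir, "metadata.parquet")):
            problems.append(f"missing metadata.parquet in {split_dir}")
    return problems
```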
Loading
from datasets import load_dataset

ds = load_dataset("gaia-benchmark/GAIA", "2023_level1", split="test")
for ex in ds:
    path = ex.get("file_path") or ex.get("file_name")
    # open the file at `path` as needed (local, or fetched via hf_hub_download)
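Since a single path column can reference many formats, callers typically dispatch on the file extension once they have a local path. A minimal stdlib-only sketch (the `open_attachment` helper and its extension mapping are illustrative; real GAIA attachments also include PDF, PNG, XLSX, etc., which would need third-party readers):

```python
import csv
import os

def open_attachment(path):
    """Open an attachment based on its extension.

    Hypothetical helper: parses CSV into rows, reads plain text as str,
    and falls back to raw bytes for binary formats (pdf, png, xlsx, ...).
    """
    ext = os.path.splitext(path)[1].lower()
    if ext == ".csv":
        with open(path, newline="") as f:
            return list(csv.reader(f))
    if ext in (".txt", ".py", ".md"):
        with open(path) as f:
            return f.read()
    with open(path, "rb") as f:
        return f.read()
```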
Notes
• Existing configs retained: 2023_all, 2023_level1, 2023_level2, 2023_level3.
• If we ever want auto-decoding, we can add optional typed columns (image/audio/pdf) alongside file_path.
Hi there! I'm one of the maintainers of inspect_evals and our implementation of GAIA depends on this dataset. Would it be possible to get this PR merged? Thanks!
Hi @celia-waggoner-aset, I would love to merge this PR, but I'm waiting for the owner of this dataset to confirm. @clefourrier, is it ok to merge this PR?
Hi! Have not had the time to check it yet - will try to take a look today else Monday
Hi! Thanks again for the PR. I'm facing several issues.
I tried loading your version of the dataset and
- it does not seem to download the attachments (I checked locally where I was working and in the cache)
- the hardcoded paths do not seem to point to anything either
- you removed a lot of information from the README
Can you please provide me with the precise command you used to download both the Parquet files and the attachments, using datasets? Did you check where the attachments were downloaded?
We'll also likely need to add some info to the readme with said command to make sure people can easily access everything.
@clefourrier
Thanks for reviewing this PR!
To be honest, the dataset is not a great fit for Parquet: GAIA spans many different file formats, HF Datasets doesn't support a union of different file types in one column, and some formats (.xlsx, .pptx, etc.) aren't supported at all. That's why I kept the attachments in the repository and specified a hardcoded file path. If the attachments are not downloaded, that must be a bug on my end (I'll look into it today), but users will still have to load the attachments manually. I went through the HF Datasets docs for a few hours, and this was the cleanest solution I could come up with. If someone knows a better solution, please let me know!
So, I'll fix the issue where attachments are not downloaded and update README.md with example code showing how to load the data.
You can make it work in two steps:
- download the repository using huggingface_hub
- load the Parquet data containing links to the documents
Here is how you could use it:
import os

from datasets import load_dataset
from huggingface_hub import snapshot_download

data_dir = snapshot_download(repo_id="gaia-benchmark/GAIA", repo_type="dataset")
dataset = load_dataset(data_dir, "2023_level1", split="test")
for example in dataset:
    question = example["question"]
    file_path = os.path.join(data_dir, example["file_path"])
Therefore, all we have to do in this PR is make sure the "file_path" column in the Parquet files contains each file's path relative to the repository root.
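If the current column still carries an absolute or otherwise prefixed path, it can be trimmed down to a repo-relative one by cutting at the known split folders from the layout. A sketch under that assumption; `to_repo_relative` and the prefix-stripping strategy are illustrative, not the actual PR change:

```python
import posixpath

def to_repo_relative(path, split_dirs=("2023/validation", "2023/test")):
    """Trim leading directories so the path starts at the split folder.

    Hypothetical helper: the split folder names come from the repository
    layout; anything before them (e.g. a hardcoded local prefix) is dropped.
    Unknown layouts are returned unchanged.
    """
    norm = posixpath.normpath(path.replace("\\", "/"))
    for prefix in split_dirs:
        idx = norm.find(prefix)
        if idx != -1:
            return norm[idx:]
    return norm
```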
So there is no way to have it in one step with datasets?
(thanks a lot for the answer :) )
Not at the moment, since it doesn't have any logic to handle repos with unknown file types