url
stringlengths 61
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 75
75
| comments_url
stringlengths 70
70
| events_url
stringlengths 68
68
| html_url
stringlengths 51
51
| id
int64 1.92B
2.7B
| node_id
stringlengths 18
18
| number
int64 6.27k
7.3k
| title
stringlengths 2
150
| user
dict | labels
listlengths 0
2
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
1
| milestone
null | comments
sequencelengths 0
23
| created_at
timestamp[ns] | updated_at
int64 1.7k
1.73k
| closed_at
timestamp[ns] | author_association
stringclasses 4
values | active_lock_reason
null | body
stringlengths 3
47.9k
⌀ | closed_by
dict | reactions
dict | timeline_url
stringlengths 70
70
| performed_via_github_app
null | state_reason
stringclasses 3
values | draft
null | pull_request
null | is_pull_request
bool 1
class | time_to_close
float64 0
0
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6530/comments | https://api.github.com/repos/huggingface/datasets/issues/6530/events | https://github.com/huggingface/datasets/issues/6530 | 2,054,817,609 | I_kwDODunzps56egdJ | 6,530 | Impossible to save a mapped dataset to disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I solved it with `train_dataset.with_format(None)`\r\nBut then faced some more issues (which i later solved too).\r\n\r\nHuggingface does not seem to care, so I do. Here is an updated training script which saves a pre-processed (mapped) dataset to your local directory if you specify `--save_precomputed_data_dir=DIR_NAME`. Then use `--train_precomputed_data_dir` with the same dir to load it instead of `--dataset_name`.\r\n\r\n[Proper SDXL trainer code](https://github.com/kopyl/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py)\r\n[Notebook for pre-computing a dataset and saving locally](https://colab.research.google.com/drive/17Yo08hePx-NlHs99RecdeiNc8CQg4O7l?usp=sharing)\r\n\r\nExample:\r\n\r\n1st run (nothing is pre-computed yet):\r\n```\r\naccelerate launch train_text_to_image_sdxl.py \\\r\n --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \\\r\n --pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \\\r\n --dataset_name=lambdalabs/pokemon-blip-captions \\\r\n --save_precomputed_data_dir=\"test-5\"\r\n```\r\n\r\n2nd run (the pre-computed dataset is saved to `test-5` directory):\r\n```\r\naccelerate launch train_text_to_image_sdxl.py \\\r\n --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \\\r\n --pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \\\r\n --train_precomputed_data_dir test-5\r\n```\r\n\r\nThis way when you're gonna be using your pre-computed dataset you don't need to worry about re-mapping your dataset when you change an argument for your trainer script"
] | 1970-01-01T00:00:00.000001 | 1,703 | null | NONE | null | ### Describe the bug
I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).
After I do the mapping like this:
```
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True)
train_dataset = train_dataset.map(
compute_vae_encodings_fn,
batched=True,
batch_size=16,
)
```
and try to save it like this:
`train_dataset.save_to_disk("test")`
i get this error ([full traceback](https://pastebin.com/kq3vt739)):
```
TypeError: Object of type function is not JSON serializable
The format kwargs must be JSON serializable, but key 'transform' isn't.
```
But what is interesting is that pushing to hub works like that:
`train_dataset.push_to_hub("kopyl/mapped-833-icons-sdxl-1024-dataset", token=True)`
Here is the link of the pushed dataset: https://huggingface.co/datasets/kopyl/mapped-833-icons-sdxl-1024-dataset
### Steps to reproduce the bug
Here is the self-contained notebook:
https://colab.research.google.com/drive/1RtCsEMVcwWcMwlWURk_cj_9xUBHz065M?usp=sharing
### Expected behavior
It should be easily saved to disk
### Environment info
NVIDIA A100, Linux (NC24ads A100 v4 from Azure), CUDA 12.2.
[pip freeze](https://pastebin.com/QTNb6iru) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6530/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6530/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6529/comments | https://api.github.com/repos/huggingface/datasets/issues/6529/events | https://github.com/huggingface/datasets/issues/6529 | 2,054,209,449 | I_kwDODunzps56cL-p | 6,529 | Impossible to only download a test split | {
"avatar_url": "https://avatars.githubusercontent.com/u/28439529?v=4",
"events_url": "https://api.github.com/users/ysig/events{/privacy}",
"followers_url": "https://api.github.com/users/ysig/followers",
"following_url": "https://api.github.com/users/ysig/following{/other_user}",
"gists_url": "https://api.github.com/users/ysig/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ysig",
"id": 28439529,
"login": "ysig",
"node_id": "MDQ6VXNlcjI4NDM5NTI5",
"organizations_url": "https://api.github.com/users/ysig/orgs",
"received_events_url": "https://api.github.com/users/ysig/received_events",
"repos_url": "https://api.github.com/users/ysig/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ysig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysig/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ysig",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The only way right now is to load with streaming=True",
"This feature has been proposed for a long time. I'm looking forward to the implementation. On clusters `streaming=True` is not an option since we do not have Internet on compute nodes. See: https://github.com/huggingface/datasets/discussions/1896#discussioncomment-2359593"
] | 1970-01-01T00:00:00.000001 | 1,706 | null | NONE | null | I've spent a significant amount of time trying to locate the split object inside my _split_generators() custom function.
Then after diving [in the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558) I realized that `download_and_prepare` is executed before! split is passed to the dataset builder in `as_dataset`.
If I'm not missing something, this seems like bad design, for the following use case:
> Imagine there is a huge dataset that has an evaluation test set and you want to just download and run just to compare your method.
Is there a current workaround that can help me achieve the same result?
Thank you, | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6529/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6529/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6524 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6524/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6524/comments | https://api.github.com/repos/huggingface/datasets/issues/6524/events | https://github.com/huggingface/datasets/issues/6524 | 2,053,076,311 | I_kwDODunzps56X3VX | 6,524 | Streaming the Pile: Missing Files | {
"avatar_url": "https://avatars.githubusercontent.com/u/23347756?v=4",
"events_url": "https://api.github.com/users/FelixLabelle/events{/privacy}",
"followers_url": "https://api.github.com/users/FelixLabelle/followers",
"following_url": "https://api.github.com/users/FelixLabelle/following{/other_user}",
"gists_url": "https://api.github.com/users/FelixLabelle/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FelixLabelle",
"id": 23347756,
"login": "FelixLabelle",
"node_id": "MDQ6VXNlcjIzMzQ3NzU2",
"organizations_url": "https://api.github.com/users/FelixLabelle/orgs",
"received_events_url": "https://api.github.com/users/FelixLabelle/received_events",
"repos_url": "https://api.github.com/users/FelixLabelle/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FelixLabelle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FelixLabelle/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FelixLabelle",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Hello @FelixLabelle,\r\n\r\nAs you can see in the Community tab of the corresponding dataset, it is a known issue: https://huggingface.co/datasets/EleutherAI/pile/discussions/15\r\n\r\nThe data has been taken down due to reported copyright infringement.\r\n\r\nFeel free to continue the discussion there."
] | 1970-01-01T00:00:00.000001 | 1,703 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The pile does not stream, a "File not Found error" is returned. It looks like the Pile's files have been moved.
### Steps to reproduce the bug
To reproduce run the following code:
```
from datasets import load_dataset
dataset = load_dataset('EleutherAI/pile', 'en', split='train', streaming=True)
next(iter(dataset))
```
I get the following error:
`FileNotFoundError: https://the-eye.eu/public/AI/pile/train/00.jsonl.zst`
### Expected behavior
Return the data in a stream.
### Environment info
- `datasets` version: 2.12.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6524/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6524/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6522/comments | https://api.github.com/repos/huggingface/datasets/issues/6522/events | https://github.com/huggingface/datasets/issues/6522 | 2,052,332,528 | I_kwDODunzps56VBvw | 6,522 | Loading HF Hub Dataset (private org repo) fails to load all features | {
"avatar_url": "https://avatars.githubusercontent.com/u/6579034?v=4",
"events_url": "https://api.github.com/users/versipellis/events{/privacy}",
"followers_url": "https://api.github.com/users/versipellis/followers",
"following_url": "https://api.github.com/users/versipellis/following{/other_user}",
"gists_url": "https://api.github.com/users/versipellis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/versipellis",
"id": 6579034,
"login": "versipellis",
"node_id": "MDQ6VXNlcjY1NzkwMzQ=",
"organizations_url": "https://api.github.com/users/versipellis/orgs",
"received_events_url": "https://api.github.com/users/versipellis/received_events",
"repos_url": "https://api.github.com/users/versipellis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/versipellis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/versipellis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/versipellis",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,703 | null | NONE | null | ### Describe the bug
When pushing a `Dataset` with multiple `Features` (`input`, `output`, `tags`) to Huggingface Hub (private org repo), and later downloading the `Dataset`, only `input` and `output` load - I believe the expected behavior is for all `Features` to be loaded by default?
### Steps to reproduce the bug
Pushing the data. `data_concat` is a `list` of `dict`s.
```python
for datum in data_concat:
datum_tags = {d["key"]: d["value"] for d in datum["tags"]}
split_fraction = # some logic that generates a train/test split number
if split_faction < test_fraction:
data_test.append(datum)
else:
data_train.append(datum)
dataset = DatasetDict(
{
"train": Dataset.from_list(data_train),
"test": Dataset.from_list(data_test),
"full": Dataset.from_list(data_concat),
},
)
dataset_shuffled = dataset.shuffle(seed=shuffle_seed)
dataset_shuffled.push_to_hub(
repo_id=hf_repo_id,
private=True,
config_name=m,
revision=revision,
token=hf_token,
)
```
Loading it later:
```python
dataset = datasets.load_dataset(
path=hf_repo_id,
name=name,
token=hf_token,
)
```
Produces:
```
DatasetDict({
train: Dataset({
features: ['input', 'output'],
num_rows: <obfuscated>
})
test: Dataset({
features: ['input', 'output'],
num_rows: <obfuscated>
})
full: Dataset({
features: ['input', 'output'],
num_rows: <obfuscated>
})
})
```
### Expected behavior
The expected result is below:
```
DatasetDict({
train: Dataset({
features: ['input', 'output', 'tags'],
num_rows: <obfuscated>
})
test: Dataset({
features: ['input', 'output', 'tags'],
num_rows: <obfuscated>
})
full: Dataset({
features: ['input', 'output', 'tags'],
num_rows: <obfuscated>
})
})
```
My workaround is as follows:
```python
dsinfo = datasets.get_dataset_config_info(
path=data_files,
config_name=data_config,
token=hf_token,
)
allfeatures = dsinfo.features.copy()
if "tags" not in allfeatures:
allfeatures["tags"] = [{"key": Value(dtype="string", id=None), "value": Value(dtype="string", id=None)}]
dataset = datasets.load_dataset(
path=data_files,
name=data_config,
features=allfeatures,
token=hf_token,
)
```
Interestingly enough (and perhaps a related bug?), if I don't add the `tags` to `allfeatures` above (i.e. only loading `input` and `output`), it throws an error when executing `load_dataset`:
```
ValueError: Couldn't cast
tags: list<element: struct<key: string, value: string>>
child 0, element: struct<key: string, value: string>
child 0, key: string
child 1, value: string
input: <obfuscated>
output: <obfuscated>
-- schema metadata --
huggingface: '{"info": {"features": {"tags": [{"key": {"dtype": "string",' + 532
to
{'input': <obfuscated>, 'output': <obfuscated>
because column names don't match
```
Traceback for this:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/load.py", line 2152, in load_dataset
builder_instance.download_and_prepare(
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 948, in download_and_prepare
self._download_and_prepare(
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1043, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1805, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1950, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
- `datasets` version: 2.15.0
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6522/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6522/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6521/comments | https://api.github.com/repos/huggingface/datasets/issues/6521/events | https://github.com/huggingface/datasets/issues/6521 | 2,052,229,538 | I_kwDODunzps56Uomi | 6,521 | The order of the splits is not preserved | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"After investigation, I think the issue was introduced by the use of the Parquet export:\r\n- #6448\r\n\r\nI am proposing a fix.\r\n\r\nCC: @lhoestq "
] | 1970-01-01T00:00:00.000001 | 1,703 | 1970-01-01T00:00:00.000001 | MEMBER | null | We had a regression and the order of the splits is not preserved. They are alphabetically sorted, instead of preserving original "train", "validation", "test" order.
Check: In branch "main"
```python
In [9]: dataset = load_dataset("adversarial_qa", '"adversarialQA")
In [10]: dataset
Out[10]:
DatasetDict({
test: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 3000
})
train: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 30000
})
validation: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 3000
})
})
```
Before (2.15.0) it was:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 30000
})
validation: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 3000
})
test: Dataset({
features: ['id', 'title', 'context', 'question', 'answers', 'metadata'],
num_rows: 3000
})
})
```
See issues:
- https://huggingface.co/datasets/adversarial_qa/discussions/3
- https://huggingface.co/datasets/beans/discussions/4
This is a regression because it was previously fixed. See:
- #6196
- #5728 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6521/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6521/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6517/comments | https://api.github.com/repos/huggingface/datasets/issues/6517/events | https://github.com/huggingface/datasets/issues/6517 | 2,050,121,588 | I_kwDODunzps56Ml90 | 6,517 | Bug get_metadata_patterns arg error | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,703 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | https://github.com/huggingface/datasets/blob/3f149204a2a5948287adcade5e90707aa5207a92/src/datasets/load.py#L1240C1-L1240C69
metadata_patterns = get_metadata_patterns(base_path, download_config=self.download_config) | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6517/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6517/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6515/comments | https://api.github.com/repos/huggingface/datasets/issues/6515/events | https://github.com/huggingface/datasets/issues/6515 | 2,049,724,251 | I_kwDODunzps56LE9b | 6,515 | Why call http_head() when fsspec_head() succeeds | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,703 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | https://github.com/huggingface/datasets/blob/a91582de288d98e94bcb5ab634ca1cfeeff544c5/src/datasets/utils/file_utils.py#L510C1-L523C14 | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6515/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6515/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6513/comments | https://api.github.com/repos/huggingface/datasets/issues/6513/events | https://github.com/huggingface/datasets/issues/6513 | 2,048,869,151 | I_kwDODunzps56H0Mf | 6,513 | Support huggingface-hub 0.20.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,703 | 1970-01-01T00:00:00.000001 | MEMBER | null | CI to test the support of `huggingface-hub` 0.20.0: https://github.com/huggingface/datasets/compare/main...ci-test-huggingface-hub-v0.20.0.rc1
We need to merge:
- #6510
- #6512
- #6516 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6513/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6513/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6507/comments | https://api.github.com/repos/huggingface/datasets/issues/6507/events | https://github.com/huggingface/datasets/issues/6507 | 2,045,152,928 | I_kwDODunzps555o6g | 6,507 | where is glue_metric.py> @Frankie123421 what was the resolution to this? | {
"avatar_url": "https://avatars.githubusercontent.com/u/119146162?v=4",
"events_url": "https://api.github.com/users/Mcccccc1024/events{/privacy}",
"followers_url": "https://api.github.com/users/Mcccccc1024/followers",
"following_url": "https://api.github.com/users/Mcccccc1024/following{/other_user}",
"gists_url": "https://api.github.com/users/Mcccccc1024/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mcccccc1024",
"id": 119146162,
"login": "Mcccccc1024",
"node_id": "U_kgDOBxoGsg",
"organizations_url": "https://api.github.com/users/Mcccccc1024/orgs",
"received_events_url": "https://api.github.com/users/Mcccccc1024/received_events",
"repos_url": "https://api.github.com/users/Mcccccc1024/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mcccccc1024/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mcccccc1024/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mcccccc1024",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,702 | 1970-01-01T00:00:00.000001 | NONE | null | > @Frankie123421 what was the resolution to this?
use glue_metric.py instead of glue.py in load_metric
_Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6507/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6507/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6506/comments | https://api.github.com/repos/huggingface/datasets/issues/6506/events | https://github.com/huggingface/datasets/issues/6506 | 2,044,975,038 | I_kwDODunzps5549e- | 6,506 | Incorrect test set labels for RTE and CoLA datasets via load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/73316684?v=4",
"events_url": "https://api.github.com/users/emreonal11/events{/privacy}",
"followers_url": "https://api.github.com/users/emreonal11/followers",
"following_url": "https://api.github.com/users/emreonal11/following{/other_user}",
"gists_url": "https://api.github.com/users/emreonal11/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emreonal11",
"id": 73316684,
"login": "emreonal11",
"node_id": "MDQ6VXNlcjczMzE2Njg0",
"organizations_url": "https://api.github.com/users/emreonal11/orgs",
"received_events_url": "https://api.github.com/users/emreonal11/received_events",
"repos_url": "https://api.github.com/users/emreonal11/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emreonal11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emreonal11/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emreonal11",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"As this is a specific issue of the \"glue\" dataset, I have transferred it to the dataset Discussion page: https://huggingface.co/datasets/glue/discussions/15\r\n\r\nLet's continue the discussion there!"
] | 1970-01-01T00:00:00.000001 | 1,703 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The test set labels for the RTE and CoLA datasets when loading via datasets load_dataset are all -1.
Edit: It appears this is also the case for every other dataset except for MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax). Is this intended behavior to safeguard the test set for evaluation purposes?
### Steps to reproduce the bug
!pip install datasets
from datasets import load_dataset
rte_data = load_dataset('glue', 'rte')
cola_data = load_dataset('glue', 'cola')
print(rte_data['test'][0:30]['label'])
print(cola_data['test'][0:30]['label'])
Output:
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
The non-label test data seems to be fine:
e.g. rte_data['test'][1] is:
{'sentence1': "Authorities in Brazil say that more than 200 people are being held hostage in a prison in the country's remote, Amazonian-jungle state of Rondonia.",
'sentence2': 'Authorities in Brazil hold 200 people as hostage.',
'label': -1,
'idx': 1}
Training and validation data are also fine:
e.g. rte_data['train][0] is:
{'sentence1': 'No Weapons of Mass Destruction Found in Iraq Yet.',
'sentence2': 'Weapons of Mass Destruction Found in Iraq.',
'label': 1,
'idx': 0}
### Expected behavior
Expected the labels to be binary 0/1 values; Got all -1s instead
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6506/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6506/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6505/comments | https://api.github.com/repos/huggingface/datasets/issues/6505/events | https://github.com/huggingface/datasets/issues/6505 | 2,044,721,288 | I_kwDODunzps553_iI | 6,505 | Got stuck when I trying to load a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/18232551?v=4",
"events_url": "https://api.github.com/users/yirenpingsheng/events{/privacy}",
"followers_url": "https://api.github.com/users/yirenpingsheng/followers",
"following_url": "https://api.github.com/users/yirenpingsheng/following{/other_user}",
"gists_url": "https://api.github.com/users/yirenpingsheng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yirenpingsheng",
"id": 18232551,
"login": "yirenpingsheng",
"node_id": "MDQ6VXNlcjE4MjMyNTUx",
"organizations_url": "https://api.github.com/users/yirenpingsheng/orgs",
"received_events_url": "https://api.github.com/users/yirenpingsheng/received_events",
"repos_url": "https://api.github.com/users/yirenpingsheng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yirenpingsheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yirenpingsheng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yirenpingsheng",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I ran into the same problem when I used a server cluster (Slurm system managed) that couldn't load any of the huggingface datasets or models, but it worked on my laptop. I suspected some system configuration-related problem, but I had no idea. \r\nMy problems are consistent with [issue #2618](https://github.com/huggingface/datasets/issues/2618). All the huggingface-related libraries I use are the latest versions.\r\n\r\n",
"> I ran into the same problem when I used a server cluster (Slurm system managed) that couldn't load any of the huggingface datasets or models, but it worked on my laptop. I suspected some system configuration-related problem, but I had no idea. My problems are consistent with [issue #2618](https://github.com/huggingface/datasets/issues/2618). All the huggingface-related libraries I use are the latest versions.\r\n\r\nhave you solved this issue yet? i met the same problem on server but everything works on laptop. I think maybe the filelock repo is contradictory with file system.",
"I am having the same issue on a computing cluster but this works on my laptop as well. I instead have this error:\r\n`/home/.conda/envs/py10/lib/python3.10/site-packages/filelock/_unix.py\", line 43, in _acquire\r\n fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\r\nOSError: [Errno 5] Input/output error`\r\n\r\nthe load_dataset command does not work on server for local or hosted hugging-face datasets, and I have tried for several files",
"Same here. Is there any solution?",
"In my case, `.cahce` was in a shared folder. Moving it into the user's home folder fixed the problem. #2618 for more details",
"> In my case, `.cahce` was in a shared folder. Moving it into the user's home folder fixed the problem. #2618 for more details在我的情况下, `.cahce` 在一个共享文件夹中。将其移动到用户的主文件夹中解决了问题。 #2618 获取更多详细信息。\r\n\r\nCan you be more specific? thank."
] | 1970-01-01T00:00:00.000001 | 1,715 | null | NONE | null | ### Describe the bug
Hello, everyone. I met a problem when I am trying to load a data file using load_dataset method on a Debian 10 system. The data file is not very large, only 1.63MB with 600 records.
Here is my code:
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json')
I waited it for 20 minutes. It still no response. I cannot using Ctrl+C to cancel the command. I have to use Ctrl+Z to kill it. I also try it with a txt file, it still no response in a long time.
I can load the same file successfully using my laptop (windows 10, python 3.8.5, datasets==2.14.5). I can also make it on another computer (Ubuntu 20.04.5 LTS, python 3.10.13, datasets 2.14.7). It only takes me 1-2 miniutes.
Could you give me some suggestions? Thank you.
### Steps to reproduce the bug
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json')
### Expected behavior
I hope it can load the file successfully.
### Environment info
OS: Debian GNU/Linux 10
Python: Python 3.10.13
Pip list:
Package Version
------------------------- ------------
accelerate 0.25.0
addict 2.4.0
aiofiles 23.2.1
aiohttp 3.9.1
aiosignal 1.3.1
aliyun-python-sdk-core 2.14.0
aliyun-python-sdk-kms 2.16.2
altair 5.2.0
annotated-types 0.6.0
anyio 3.7.1
async-timeout 4.0.3
attrs 23.1.0
certifi 2023.11.17
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
contourpy 1.2.0
crcmod 1.7
cryptography 41.0.7
cycler 0.12.1
datasets 2.14.7
dill 0.3.7
docstring-parser 0.15
einops 0.7.0
exceptiongroup 1.2.0
fastapi 0.105.0
ffmpy 0.3.1
filelock 3.13.1
fonttools 4.46.0
frozenlist 1.4.1
fsspec 2023.10.0
gast 0.5.4
gradio 3.50.2
gradio_client 0.6.1
h11 0.14.0
httpcore 1.0.2
httpx 0.25.2
huggingface-hub 0.19.4
idna 3.6
importlib-metadata 7.0.0
importlib-resources 6.1.1
jieba 0.42.1
Jinja2 3.1.2
jmespath 0.10.0
joblib 1.3.2
jsonschema 4.20.0
jsonschema-specifications 2023.11.2
kiwisolver 1.4.5
markdown-it-py 3.0.0
MarkupSafe 2.1.3
matplotlib 3.8.2
mdurl 0.1.2
modelscope 1.10.0
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.15
networkx 3.2.1
nltk 3.8.1
numpy 1.26.2
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.18.1
nvidia-nvjitlink-cu12 12.3.101
nvidia-nvtx-cu12 12.1.105
orjson 3.9.10
oss2 2.18.3
packaging 23.2
pandas 2.1.4
peft 0.7.1
Pillow 10.1.0
pip 23.3.1
platformdirs 4.1.0
protobuf 4.25.1
psutil 5.9.6
pyarrow 14.0.1
pyarrow-hotfix 0.6
pycparser 2.21
pycryptodome 3.19.0
pydantic 2.5.2
pydantic_core 2.14.5
pydub 0.25.1
Pygments 2.17.2
pyparsing 3.1.1
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3.post1
PyYAML 6.0.1
referencing 0.32.0
regex 2023.10.3
requests 2.31.0
rich 13.7.0
rouge-chinese 1.0.3
rpds-py 0.13.2
safetensors 0.4.1
scipy 1.11.4
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 68.2.2
shtab 1.6.5
simplejson 3.19.2
six 1.16.0
sniffio 1.3.0
sortedcontainers 2.4.0
sse-starlette 1.8.2
starlette 0.27.0
sympy 1.12
tiktoken 0.5.2
tokenizers 0.15.0
tomli 2.0.1
toolz 0.12.0
torch 2.1.2
tqdm 4.66.1
transformers 4.36.1
triton 2.1.0
trl 0.7.4
typing_extensions 4.9.0
tyro 0.6.0
tzdata 2023.3
urllib3 2.1.0
uvicorn 0.24.0.post1
websockets 11.0.3
wheel 0.41.2
xxhash 3.4.1
yapf 0.40.2
yarl 1.9.4
zipp 3.17.0
| null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6505/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6504/comments | https://api.github.com/repos/huggingface/datasets/issues/6504/events | https://github.com/huggingface/datasets/issues/6504 | 2,044,541,154 | I_kwDODunzps553Tji | 6,504 | Error Pushing to Hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/55055083?v=4",
"events_url": "https://api.github.com/users/Jiayi-Pan/events{/privacy}",
"followers_url": "https://api.github.com/users/Jiayi-Pan/followers",
"following_url": "https://api.github.com/users/Jiayi-Pan/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiayi-Pan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jiayi-Pan",
"id": 55055083,
"login": "Jiayi-Pan",
"node_id": "MDQ6VXNlcjU1MDU1MDgz",
"organizations_url": "https://api.github.com/users/Jiayi-Pan/orgs",
"received_events_url": "https://api.github.com/users/Jiayi-Pan/received_events",
"repos_url": "https://api.github.com/users/Jiayi-Pan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jiayi-Pan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiayi-Pan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jiayi-Pan",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,702 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Error when trying to push a dataset in a special format to hub
### Steps to reproduce the bug
```
import datasets
from datasets import Dataset
dataset_dict = {
"filename": ["apple", "banana"],
"token": [[[1,2],[3,4]],[[1,2],[3,4]]],
"label": [0, 1],
}
dataset = Dataset.from_dict(dataset_dict)
dataset = dataset.cast_column("token", datasets.features.features.Array2D(shape=(2, 2),dtype="int16"))
dataset.push_to_hub("SequenceModel/imagenet_val_256")
```
Error:
```
...
ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple'
in "<unicode string>", line 8, column 16:
shape: !!python/tuple
^
```
### Expected behavior
Dataset being pushed to hub
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/55055083?v=4",
"events_url": "https://api.github.com/users/Jiayi-Pan/events{/privacy}",
"followers_url": "https://api.github.com/users/Jiayi-Pan/followers",
"following_url": "https://api.github.com/users/Jiayi-Pan/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiayi-Pan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jiayi-Pan",
"id": 55055083,
"login": "Jiayi-Pan",
"node_id": "MDQ6VXNlcjU1MDU1MDgz",
"organizations_url": "https://api.github.com/users/Jiayi-Pan/orgs",
"received_events_url": "https://api.github.com/users/Jiayi-Pan/received_events",
"repos_url": "https://api.github.com/users/Jiayi-Pan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jiayi-Pan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiayi-Pan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jiayi-Pan",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6504/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6504/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6501/comments | https://api.github.com/repos/huggingface/datasets/issues/6501/events | https://github.com/huggingface/datasets/issues/6501 | 2,043,377,240 | I_kwDODunzps55y3ZY | 6,501 | OverflowError: value too large to convert to int32_t | {
"avatar_url": "https://avatars.githubusercontent.com/u/47747764?v=4",
"events_url": "https://api.github.com/users/zhangfan-algo/events{/privacy}",
"followers_url": "https://api.github.com/users/zhangfan-algo/followers",
"following_url": "https://api.github.com/users/zhangfan-algo/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangfan-algo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhangfan-algo",
"id": 47747764,
"login": "zhangfan-algo",
"node_id": "MDQ6VXNlcjQ3NzQ3NzY0",
"organizations_url": "https://api.github.com/users/zhangfan-algo/orgs",
"received_events_url": "https://api.github.com/users/zhangfan-algo/received_events",
"repos_url": "https://api.github.com/users/zhangfan-algo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhangfan-algo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangfan-algo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhangfan-algo",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,702 | null | NONE | null | ### Describe the bug

### Steps to reproduce the bug
just loading datasets
### Expected behavior
how can I fix it
### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl
pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl
done | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6501/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6501/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6497/comments | https://api.github.com/repos/huggingface/datasets/issues/6497/events | https://github.com/huggingface/datasets/issues/6497 | 2,041,994,274 | I_kwDODunzps55tlwi | 6,497 | Support setting a default config name in push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,702 | 1970-01-01T00:00:00.000001 | MEMBER | null | In order to convert script-datasets to no-script datasets, we need to support setting a default config name for those scripts that set one. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6497/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6497/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6496/comments | https://api.github.com/repos/huggingface/datasets/issues/6496/events | https://github.com/huggingface/datasets/issues/6496 | 2,041,589,386 | I_kwDODunzps55sC6K | 6,496 | Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again. | {
"avatar_url": "https://avatars.githubusercontent.com/u/35808396?v=4",
"events_url": "https://api.github.com/users/GeorgesLorre/events{/privacy}",
"followers_url": "https://api.github.com/users/GeorgesLorre/followers",
"following_url": "https://api.github.com/users/GeorgesLorre/following{/other_user}",
"gists_url": "https://api.github.com/users/GeorgesLorre/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GeorgesLorre",
"id": 35808396,
"login": "GeorgesLorre",
"node_id": "MDQ6VXNlcjM1ODA4Mzk2",
"organizations_url": "https://api.github.com/users/GeorgesLorre/orgs",
"received_events_url": "https://api.github.com/users/GeorgesLorre/received_events",
"repos_url": "https://api.github.com/users/GeorgesLorre/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GeorgesLorre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeorgesLorre/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GeorgesLorre",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I transferred from datasets-server, since the issue is more about `datasets` and the integration with `huggingface_hub`."
] | 1970-01-01T00:00:00.000001 | 1,702 | null | NONE | null | **Describe the bug**
Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub.
```
huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92bf861bb254b2cc0826c;50a09ab7-9347-406a-ba49-69f98abee9cc)
A commit has happened since. Please refresh and try again.
```
**Steps to reproduce the bug**
This is a minimal reproducer:
```
import dask.dataframe as dd
import pandas as pd
import random
import os
import huggingface_hub
import datasets
huggingface_hub.login(token=os.getenv("HF_TOKEN"))
data = {"number": [random.randint(0,10) for _ in range(1000)]}
df = pd.DataFrame.from_dict(data)
dataframe = dd.from_pandas(df, npartitions=1)
dataframe = dataframe.repartition(npartitions=3)
schema = datasets.Features({"number": datasets.Value("int64")}).arrow_schema
repo_id = "GLorr/test-dask"
repo_path = f"hf://datasets/{repo_id}"
huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True)
dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema)
```
**Expected behavior**
Would expect to write to the hub without any problem.
**Environment info**
```
datasets==2.15.0
huggingface-hub==0.19.4
```
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6496/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6496/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6494/comments | https://api.github.com/repos/huggingface/datasets/issues/6494/events | https://github.com/huggingface/datasets/issues/6494 | 2,039,684,839 | I_kwDODunzps55kx7n | 6,494 | Image Data loaded Twice | {
"avatar_url": "https://avatars.githubusercontent.com/u/28867010?v=4",
"events_url": "https://api.github.com/users/ArcaneLex/events{/privacy}",
"followers_url": "https://api.github.com/users/ArcaneLex/followers",
"following_url": "https://api.github.com/users/ArcaneLex/following{/other_user}",
"gists_url": "https://api.github.com/users/ArcaneLex/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArcaneLex",
"id": 28867010,
"login": "ArcaneLex",
"node_id": "MDQ6VXNlcjI4ODY3MDEw",
"organizations_url": "https://api.github.com/users/ArcaneLex/orgs",
"received_events_url": "https://api.github.com/users/ArcaneLex/received_events",
"repos_url": "https://api.github.com/users/ArcaneLex/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArcaneLex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArcaneLex/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArcaneLex",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,702 | null | NONE | null | ### Describe the bug

When I learn from https://huggingface.co/docs/datasets/image_load and try to load image data from a folder. I noticed that the image was read twice in the returned data. As you can see in the attached image, there are only four images in the train folder, but reading brings up eight images
### Steps to reproduce the bug
from datasets import Dataset, load_dataset
dataset = load_dataset("imagefolder", data_dir="data/", drop_labels=False)
# print(dataset["train"][0]["image"] == dataset["train"][1]["image"])
print(dataset)
print(dataset["train"]["image"])
print(len(dataset["train"]["image"]))
### Expected behavior
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 8
})
})
[<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D1CA8B0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D2452E0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245310>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2453A0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245460>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245430>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2454F0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245550>]
8
### Environment info
- `datasets` version: 2.14.5
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.17
- Huggingface_hub version: 0.19.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6494/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6494/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6495/comments | https://api.github.com/repos/huggingface/datasets/issues/6495/events | https://github.com/huggingface/datasets/issues/6495 | 2,039,708,529 | I_kwDODunzps55k3tx | 6,495 | Newline characters don't behave as expected when calling dataset.info | {
"avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4",
"events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}",
"followers_url": "https://api.github.com/users/gerald-wrona/followers",
"following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}",
"gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gerald-wrona",
"id": 32300890,
"login": "gerald-wrona",
"node_id": "MDQ6VXNlcjMyMzAwODkw",
"organizations_url": "https://api.github.com/users/gerald-wrona/orgs",
"received_events_url": "https://api.github.com/users/gerald-wrona/received_events",
"repos_url": "https://api.github.com/users/gerald-wrona/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gerald-wrona",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,702 | null | NONE | null | ### System Info
- `transformers` version: 4.32.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cpu (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@marios
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[Source](https://huggingface.co/docs/datasets/v2.2.1/en/access)
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(names=['not_equivalent', 'equivalent'], id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='glue', dataset_name=None, config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943843, num_examples=3668, shard_lengths=None, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105879, num_examples=408, shard_lengths=None, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442410, num_examples=1725, shard_lengths=None, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': None}}, download_size=1494541, post_processing_size=None, dataset_size=1492132, size_in_bytes=2986673)
### Expected behavior
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(
description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n',
citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398',
license='',
features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, builder_name='glue', config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943851, num_examples=3668, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105887, num_examples=408, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442418, num_examples=1725, dataset_name='glue')},
download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': '971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': '60a9b09084528f0673eedee2b69cb941920f0b8cd0eeccefc464a98768457f89'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': 'a04e271090879aaba6423d65b94950c089298587d9c084bf9cd7439bd785f784'}},
download_size=1494541,
post_processing_size=None,
dataset_size=1492156,
size_in_bytes=2986697
) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6495/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6490/comments | https://api.github.com/repos/huggingface/datasets/issues/6490/events | https://github.com/huggingface/datasets/issues/6490 | 2,037,204,892 | I_kwDODunzps55bUec | 6,490 | `load_dataset(...,save_infos=True)` not working without loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/114978051?v=4",
"events_url": "https://api.github.com/users/morganveyret/events{/privacy}",
"followers_url": "https://api.github.com/users/morganveyret/followers",
"following_url": "https://api.github.com/users/morganveyret/following{/other_user}",
"gists_url": "https://api.github.com/users/morganveyret/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/morganveyret",
"id": 114978051,
"login": "morganveyret",
"node_id": "U_kgDOBtptAw",
"organizations_url": "https://api.github.com/users/morganveyret/orgs",
"received_events_url": "https://api.github.com/users/morganveyret/received_events",
"repos_url": "https://api.github.com/users/morganveyret/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/morganveyret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morganveyret/subscriptions",
"type": "User",
"url": "https://api.github.com/users/morganveyret",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Also, once the README.md exists in the python environment it is used when loading another dataset in the same format (e.g. json) since it always resolves the path to the same directory.\r\nThe consequence here is any other dataset won't load because of infos mismatch.\r\nTo reproduce this aspect:\r\n1. Do a `load_datasets(...,save_infos=True)` with one dataset without a loading script\r\n2. Try to load another dataset without a loading script in the same format (e.g. json) but with a different schema "
] | 1970-01-01T00:00:00.000001 | 1,702 | null | NONE | null | ### Describe the bug
It seems that saving a dataset infos back into the card file is not working for datasets without a loading script.
After tracking the problem a bit it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory.
Internally this is a call to `inspect.getfile()` but since the actual builder class used is dynamically created (cf. `datasets.load.configure_builder_class`) this method actually return te path to the parent builder class (e.g. `datasets.packaged_modules.json.JSON`).
### Steps to reproduce the bug
1. Have a local dataset without any loading script
2. Make sure there are no dataset infos in the README.md
3. Load with `save_infos=True`
4. No change in the dataset README.md
5. A new README.md file is created in the directory of the parent builder class (e.g. for json in `.../site-packages/datasets/packaged_modules/json/README.md`)
### Expected behavior
The dataset README.md should be updated and no file should be created in the python environment.
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.6.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6490/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6490/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6489/comments | https://api.github.com/repos/huggingface/datasets/issues/6489/events | https://github.com/huggingface/datasets/issues/6489 | 2,036,743,777 | I_kwDODunzps55Zj5h | 6,489 | load_dataset imageflder for aws s3 path | {
"avatar_url": "https://avatars.githubusercontent.com/u/9353106?v=4",
"events_url": "https://api.github.com/users/segalinc/events{/privacy}",
"followers_url": "https://api.github.com/users/segalinc/followers",
"following_url": "https://api.github.com/users/segalinc/following{/other_user}",
"gists_url": "https://api.github.com/users/segalinc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/segalinc",
"id": 9353106,
"login": "segalinc",
"node_id": "MDQ6VXNlcjkzNTMxMDY=",
"organizations_url": "https://api.github.com/users/segalinc/orgs",
"received_events_url": "https://api.github.com/users/segalinc/received_events",
"repos_url": "https://api.github.com/users/segalinc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/segalinc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/segalinc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/segalinc",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,702 | null | NONE | null | ### Feature request
I would like to load a dataset from S3 using the imagefolder option
something like
`dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True) `
### Motivation
no need of data_files
### Your contribution
no experience with this | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6489/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6489/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6488/comments | https://api.github.com/repos/huggingface/datasets/issues/6488/events | https://github.com/huggingface/datasets/issues/6488 | 2,035,899,898 | I_kwDODunzps55WV36 | 6,488 | 429 Client Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/7882383?v=4",
"events_url": "https://api.github.com/users/sasaadi/events{/privacy}",
"followers_url": "https://api.github.com/users/sasaadi/followers",
"following_url": "https://api.github.com/users/sasaadi/following{/other_user}",
"gists_url": "https://api.github.com/users/sasaadi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sasaadi",
"id": 7882383,
"login": "sasaadi",
"node_id": "MDQ6VXNlcjc4ODIzODM=",
"organizations_url": "https://api.github.com/users/sasaadi/orgs",
"received_events_url": "https://api.github.com/users/sasaadi/received_events",
"repos_url": "https://api.github.com/users/sasaadi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sasaadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sasaadi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sasaadi",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Transferring repos as this is a datasets issue ",
"I'm getting a similar issue even though I've already downloaded the dataset 😅 \r\n\r\n```\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/HuggingFaceM4/WebSight\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,718 | null | NONE | null | Hello, I was downloading the following dataset and after 20% of data was downloaded, I started getting error 429. It is not resolved since a few days. How should I resolve it?
Thanks
Dataset:
https://huggingface.co/datasets/cerebras/SlimPajama-627B
Error:
`requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/datasets/cerebras/SlimPajama-627B/resolve/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543/train/chunk1/example_train_3300.jsonl.zst`
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6488/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6488/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6485/comments | https://api.github.com/repos/huggingface/datasets/issues/6485/events | https://github.com/huggingface/datasets/issues/6485 | 2,035,141,884 | I_kwDODunzps55Tcz8 | 6,485 | FileNotFoundError: [Errno 2] No such file or directory: 'nul' | {
"avatar_url": "https://avatars.githubusercontent.com/u/73683903?v=4",
"events_url": "https://api.github.com/users/amanyara/events{/privacy}",
"followers_url": "https://api.github.com/users/amanyara/followers",
"following_url": "https://api.github.com/users/amanyara/following{/other_user}",
"gists_url": "https://api.github.com/users/amanyara/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/amanyara",
"id": 73683903,
"login": "amanyara",
"node_id": "MDQ6VXNlcjczNjgzOTAz",
"organizations_url": "https://api.github.com/users/amanyara/orgs",
"received_events_url": "https://api.github.com/users/amanyara/received_events",
"repos_url": "https://api.github.com/users/amanyara/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/amanyara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanyara/subscriptions",
"type": "User",
"url": "https://api.github.com/users/amanyara",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! It seems like the problem is your environment. Maybe this issue can help: https://github.com/pytest-dev/pytest/issues/9519. "
] | 1970-01-01T00:00:00.000001 | 1,702 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
it seems that sth wrong with my terrible "bug body" life, When i run this code, "import datasets"
i meet this error FileNotFoundError: [Errno 2] No such file or directory: 'nul'


### Steps to reproduce the bug
1.import datasets
### Expected behavior
i just run a single line code and stuct in this bug
### Environment info
OS: Windows10
Datasets==2.15.0
python=3.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/73683903?v=4",
"events_url": "https://api.github.com/users/amanyara/events{/privacy}",
"followers_url": "https://api.github.com/users/amanyara/followers",
"following_url": "https://api.github.com/users/amanyara/following{/other_user}",
"gists_url": "https://api.github.com/users/amanyara/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/amanyara",
"id": 73683903,
"login": "amanyara",
"node_id": "MDQ6VXNlcjczNjgzOTAz",
"organizations_url": "https://api.github.com/users/amanyara/orgs",
"received_events_url": "https://api.github.com/users/amanyara/received_events",
"repos_url": "https://api.github.com/users/amanyara/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/amanyara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanyara/subscriptions",
"type": "User",
"url": "https://api.github.com/users/amanyara",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6485/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6485/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6483/comments | https://api.github.com/repos/huggingface/datasets/issues/6483/events | https://github.com/huggingface/datasets/issues/6483 | 2,032,946,981 | I_kwDODunzps55LE8l | 6,483 | Iterable Dataset: rename column clashes with remove column | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi",
"user_view_type": "public"
} | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | [] | null | [
"Column \"text\" doesn't exist anymore so you can't remove it",
"You can get the expected result by fixing typos in the snippet :)\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n# rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset.features) - COLUMNS_TO_KEEP)\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```",
"Fixed code:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n# rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\ndataset_features = dataset.features.keys()\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```",
"Whoops 😅 Thanks for the swift reply both! Works like a charm!"
] | 1970-01-01T00:00:00.000001 | 1,702 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
Suppose I have a two iterable datasets, one with the features:
* `{"audio", "text", "column_a"}`
And the other with the features:
* `{"audio", "sentence", "column_b"}`
I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typically do this by:
1. Renaming the common columns to the same name (e.g. `"text"` -> `"sentence"`)
2. Removing the unwanted columns (e.g. `"column_a"`, `"column_b"`)
However, the process of renaming and removing columns in an iterable dataset doesn't work, since we need to preserve the original text column, meaning we can't combine the datasets.
### Steps to reproduce the bug
```python
from datasets import load_dataset
# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
# check original features
dataset_features = dataset.features.keys()
print("Original features: ", dataset_features)
# rename "text" -> "sentence"
dataset = dataset.rename_column("text", "sentence")
# remove unwanted columns
COLUMNS_TO_KEEP = {"audio", "sentence"}
dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
# stream first sample, should return "audio" and "sentence" columns
print(next(iter(dataset)))
```
Traceback:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[5], line 17
14 COLUMNS_TO_KEEP = {"audio", "sentence"}
15 dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
---> 17 print(next(iter(dataset)))
File ~/datasets/src/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)
1350 yield formatter.format_row(pa_table)
1351 return
-> 1353 for key, example in ex_iterable:
1354 if self.features:
1355 # `IterableDataset` automatically fills missing columns with None.
1356 # This is done with `_apply_feature_types_on_example`.
1357 example = _apply_feature_types_on_example(
1358 example, self.features, token_per_repo_id=self._token_per_repo_id
1359 )
File ~/datasets/src/datasets/iterable_dataset.py:652, in MappedExamplesIterable.__iter__(self)
650 yield from ArrowExamplesIterable(self._iter_arrow, {})
651 else:
--> 652 yield from self._iter()
File ~/datasets/src/datasets/iterable_dataset.py:729, in MappedExamplesIterable._iter(self)
727 if self.remove_columns:
728 for c in self.remove_columns:
--> 729 del transformed_example[c]
730 yield key, transformed_example
731 current_idx += 1
KeyError: 'text'
```
=> we see that `datasets` is looking for the column "text", even though we've renamed this to "sentence" and then removed the un-wanted "text" column from our dataset.
### Expected behavior
Should be able to rename and remove columns from iterable dataset.
### Environment info
- `datasets` version: 2.15.1.dev0
- Platform: macOS-13.5.1-arm64-arm-64bit
- Python version: 3.11.6
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.9.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6483/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6483/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6484/comments | https://api.github.com/repos/huggingface/datasets/issues/6484/events | https://github.com/huggingface/datasets/issues/6484 | 2,033,333,294 | I_kwDODunzps55MjQu | 6,484 | [Feature Request] Dataset versioning | {
"avatar_url": "https://avatars.githubusercontent.com/u/47979198?v=4",
"events_url": "https://api.github.com/users/kenfus/events{/privacy}",
"followers_url": "https://api.github.com/users/kenfus/followers",
"following_url": "https://api.github.com/users/kenfus/following{/other_user}",
"gists_url": "https://api.github.com/users/kenfus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kenfus",
"id": 47979198,
"login": "kenfus",
"node_id": "MDQ6VXNlcjQ3OTc5MTk4",
"organizations_url": "https://api.github.com/users/kenfus/orgs",
"received_events_url": "https://api.github.com/users/kenfus/received_events",
"repos_url": "https://api.github.com/users/kenfus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kenfus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenfus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kenfus",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hello @kenfus, this is meant to be possible to do yes. Let me ping @lhoestq or @mariosasko from the `datasets` team (`huggingface_hub` is only the underlying library to download files from the Hub but here it looks more like a `datasets` problem). ",
"Hi! https://github.com/huggingface/datasets/pull/6459 will fix this."
] | 1970-01-01T00:00:00.000001 | 1,702 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I am working on a project, where I would like to test different preprocessing methods for my ML-data. Thus, I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the revision keyword because it was not redownloading the data, it was reading in some cached data, until I put `download_mode="force_redownload"`, even though the reversion was different.
Of course, I may have done something wrong or missed a setting somewhere!
**Describe the solution you'd like**
The solution would allow me to easily work with revisions:
- create a new dataset (by combining things, different preprocessing, ..) and give it a new revision (v.1.2.3), maybe like this:
`dataset_audio.push_to_hub('kenfus/xy', revision='v1.0.2')`
- then, get the current revision as follows:
```
dataset = load_dataset(
'kenfus/xy', revision='v1.0.2',
)
```
this downloads the new version and does not load in a different revision and all future map, filter, .. operations are done on this dataset and not loaded from cache produced from a different revision.
- if I rerun the run, the caching should be smart enough in every step to not reuse a mapping operation on a different revision.
**Describe alternatives you've considered**
I created my own caching, putting `download_mode="force_redownload"` and `load_from_cache_file=False,` everywhere.
**Additional context**
Thanks a lot for your great work! Creating NLP datasets and training a model with them is really easy and straightforward with huggingface.
This is the data loading in my script:
```
## CREATE PATHS
prepared_dataset_path = os.path.join(
DATA_FOLDER, str(DATA_VERSION), "prepared_dataset"
)
os.makedirs(os.path.join(DATA_FOLDER, str(DATA_VERSION)), exist_ok=True)
## LOAD DATASET
if os.path.exists(prepared_dataset_path):
print("Loading prepared dataset from disk...")
dataset_prepared = load_from_disk(prepared_dataset_path)
else:
print("Loading dataset from HuggingFace Datasets...")
dataset = load_dataset(
PATH_TO_DATASET, revision=DATA_VERSION, download_mode="force_redownload"
)
print("Preparing dataset...")
dataset_prepared = dataset.map(
prepare_dataset,
remove_columns=["audio", "transcription"],
num_proc=os.cpu_count(),
load_from_cache_file=False,
)
dataset_prepared.save_to_disk(prepared_dataset_path)
del dataset
if CHECK_DATASET:
## CHECK DATASET
dataset_prepared = dataset_prepared.map(
check_dimensions, num_proc=os.cpu_count(), load_from_cache_file=False
)
dataset_filtered = dataset_prepared.filter(
lambda example: not example["incorrect_dimension"],
load_from_cache_file=False,
)
for example in dataset_prepared.filter(
lambda example: example["incorrect_dimension"], load_from_cache_file=False
):
print(example["path"])
print(
f"Number of examples with incorrect dimension: {len(dataset_prepared) - len(dataset_filtered)}"
)
print("Number of examples train: ", len(dataset_filtered["train"]))
print("Number of examples test: ", len(dataset_filtered["test"]))
```
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6484/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6484/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6481/comments | https://api.github.com/repos/huggingface/datasets/issues/6481/events | https://github.com/huggingface/datasets/issues/6481 | 2,032,650,003 | I_kwDODunzps55J8cT | 6,481 | using torchrun, save_to_disk suddenly shows SIGTERM | {
"avatar_url": "https://avatars.githubusercontent.com/u/85916625?v=4",
"events_url": "https://api.github.com/users/Ariya12138/events{/privacy}",
"followers_url": "https://api.github.com/users/Ariya12138/followers",
"following_url": "https://api.github.com/users/Ariya12138/following{/other_user}",
"gists_url": "https://api.github.com/users/Ariya12138/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ariya12138",
"id": 85916625,
"login": "Ariya12138",
"node_id": "MDQ6VXNlcjg1OTE2NjI1",
"organizations_url": "https://api.github.com/users/Ariya12138/orgs",
"received_events_url": "https://api.github.com/users/Ariya12138/received_events",
"repos_url": "https://api.github.com/users/Ariya12138/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ariya12138/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ariya12138/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ariya12138",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,702 | null | NONE | null | ### Describe the bug
When I run my code using the "torchrun" command, when the code reaches the "save_to_disk" part, suddenly I get the following warning and error messages:
Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reaches the 14th shard.
WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM
ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967.
### Steps to reproduce the bug
ds_shard = ds_shard.map(map_fn, *args, **kwargs)
ds_shard.save_to_disk(ds_shard_filepaths[rank])
Saving the dataset (14/70 shards): 20%|██ | 875350/4376702 [00:19<01:53, 30863.15 examples/s]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python
Traceback (most recent call last):
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
==========================================================
run.py FAILED
----------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
----------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-12-08_20:09:04
rank : 0 (local_rank: 0)
exitcode : -7 (pid: 2224967)
error_file: <N/A>
traceback : Signal 7 (SIGBUS) received by PID 2224967
### Expected behavior
I hope it can save successfully without any issues, but it seems there is a problem.
### Environment info
`datasets` version: 2.14.6
- Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 14.0.0
- Pandas version: 2.1.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6481/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6481/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6478/comments | https://api.github.com/repos/huggingface/datasets/issues/6478/events | https://github.com/huggingface/datasets/issues/6478 | 2,028,071,596 | I_kwDODunzps544eqs | 6,478 | How to load data from lakefs | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"You can create a `pandas` DataFrame following [this](https://lakefs.io/data-version-control/dvc-using-python/) tutorial, and then convert this DataFrame to a `Dataset` with `datasets.Dataset.from_pandas`. For larger datasets (to memory map them), you can use `Dataset.from_generator` with a generator function that reads lakeFS files with `s3fs`.",
"@mariosasko hello,\r\nThis can achieve and https://huggingface.co/datasets Does the same effect apply to the dataset? For example, downloading while using",
"There is a blogspot from lakes on this topic: https://lakefs.io/blog/data-version-control-hugging-face-datasets/"
] | 1970-01-01T00:00:00.000001 | 1,720 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | My dataset is stored on the company's lakefs server. How can I write code to load the dataset? It would be great if I could provide code examples or provide some references
| {
"avatar_url": "https://avatars.githubusercontent.com/u/9143109?v=4",
"events_url": "https://api.github.com/users/andimarafioti/events{/privacy}",
"followers_url": "https://api.github.com/users/andimarafioti/followers",
"following_url": "https://api.github.com/users/andimarafioti/following{/other_user}",
"gists_url": "https://api.github.com/users/andimarafioti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andimarafioti",
"id": 9143109,
"login": "andimarafioti",
"node_id": "MDQ6VXNlcjkxNDMxMDk=",
"organizations_url": "https://api.github.com/users/andimarafioti/orgs",
"received_events_url": "https://api.github.com/users/andimarafioti/received_events",
"repos_url": "https://api.github.com/users/andimarafioti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andimarafioti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andimarafioti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andimarafioti",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6478/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6478/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6476/comments | https://api.github.com/repos/huggingface/datasets/issues/6476/events | https://github.com/huggingface/datasets/issues/6476 | 2,028,018,596 | I_kwDODunzps544Ruk | 6,476 | CI on windows is broken: PermissionError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/7104781624/job/19340572394
```
FAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpfcnps56i\\hf-internal-testing___dataset_with_script\\default\\0.0.0\\c240e2be3370bdbd\\dataset_with_script-train.arrow'
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6476/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6476/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6475/comments | https://api.github.com/repos/huggingface/datasets/issues/6475/events | https://github.com/huggingface/datasets/issues/6475 | 2,027,373,734 | I_kwDODunzps5410Sm | 6,475 | laion2B-en failed to load on Windows with PrefetchVirtualMemory failed | {
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/doctorpangloss",
"id": 2229300,
"login": "doctorpangloss",
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"type": "User",
"url": "https://api.github.com/users/doctorpangloss",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"~~You will see this error if the cache dir filepath contains relative `..` paths. Use `os.path.realpath(_CACHE_DIR)` before passing it to the `load_dataset` function.~~",
"This is a real issue and not related to paths.",
"Based on the StackOverflow answer, this causes the error to go away:\r\n```diff\r\ndiff --git a/table.py b/table.py\r\n--- a/table.py\t\r\n+++ b/table.py\t(date 1701824849806)\r\n@@ -47,7 +47,7 @@\r\n \r\n \r\n def _memory_mapped_record_batch_reader_from_file(filename: str) -> pa.RecordBatchStreamReader:\r\n- memory_mapped_stream = pa.memory_map(filename)\r\n+ memory_mapped_stream = pa.memory_map(filename, \"r+\")\r\n return pa.ipc.open_stream(memory_mapped_stream)\r\n```\r\nBut now loading the dataset goes very, very slowly, which is unexpected.",
"I don't really comprehend what it is that `datasets` gave me when it downloaded the laion2B-en dataset, because nothing can seemingly read these 1024 .arrow files it is retrieving. Not `polars`, not `pyarrow`, it's not an `ipc` file, it's not a `parquet` file...",
"Hi! \r\n\r\nInstead of generating one (potentially large) Arrow file, we shard the generated data into 500 MB shards because memory-mapping large Arrow files can be problematic on some systems. Maybe deleting the dataset's cache and increasing the shard size (controlled with the `datasets.config.MAX_SHARD_SIZE` variable; e.g. to \"4GB\") can fix the issue for you.\r\n\r\n> I don't really comprehend what it is that `datasets` gave me when it downloaded the laion2B-en dataset, because nothing can seemingly read these 1024 .arrow files it is retrieving. Not `polars`, not `pyarrow`, it's not an `ipc` file, it's not a `parquet` file...\r\n\r\nOur `.arrow` files are in the [Arrow streaming format](https://arrow.apache.org/docs/python/ipc.html#using-streams). To load them as a `polars` DataFrame, do the following:\r\n```python\r\ndf = pl.from_arrow(Dataset.from_from(path_to_arrow_file).data.table)\r\n```\r\n\r\nWe plan to switch to the IPC version eventually.\r\n",
"Hmm, I have a feeling this works fine on Linux, and is a real bug for however `datasets` is doing the sharding on Windows. I will follow up, but I think this is a real bug."
] | 1970-01-01T00:00:00.000001 | 1,701 | null | NONE | null | ### Describe the bug
I have downloaded laion2B-en, and I'm receiving the following error trying to load it:
```
Resolving data files: 100%|██████████| 128/128 [00:00<00:00, 1173.79it/s]
Traceback (most recent call last):
File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 31, in <module>
count = compute_frequencies()
^^^^^^^^^^^^^^^^^^^^^
File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 17, in compute_frequencies
laion2b_dataset = load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\load.py", line 2165, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1187, in as_dataset
datasets = map_nested(
^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\utils\py_utils.py", line 456, in map_nested
return function(data_struct)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1217, in _build_single_dataset
ds = self._as_dataset(
^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1291, in _as_dataset
dataset_kwargs = ArrowReader(cache_dir, self.info).read(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 244, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 265, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 200, in _read_files
pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 336, in _get_table_from_filename
table = ArrowReader.read_table(filename, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 357, in read_table
return table_cls.from_file(filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 1059, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 66, in _memory_mapped_arrow_table_from_file
pa_table = opened_stream.read_all()
^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow\ipc.pxi", line 757, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
OSError: [WinError 8] PrefetchVirtualMemory failed. Detail: [Windows error 8] Not enough memory resources are available to process this command.
```
This error is probably a red herring: https://stackoverflow.com/questions/50263929/numpy-memmap-returns-not-enough-memory-while-there-are-plenty-available In other words, the issue is related to asking for a memory mapping of length N > M the length of the file on Windows. This gracefully succeeds on Linux.
I have 1024 arrow files in my cache instead of 128 like in the repository for it. Probably related. I don't know why `datasets` reorganized/rewrote the dataset in my cache to be 1024 slices instead of the original 128.
### Steps to reproduce the bug
```
# as a huggingface developer, you may already have laion2B-en somewhere
_CACHE_DIR = "."
from datasets import load_dataset
load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False)
```
### Expected behavior
This should correctly load as a memory mapped Arrow dataset.
### Environment info
- `datasets` version: 2.15.0
- Platform: Windows-10-10.0.20348-SP0 (this is windows 2022)
- Python version: 3.11.4
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.10.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/doctorpangloss",
"id": 2229300,
"login": "doctorpangloss",
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"type": "User",
"url": "https://api.github.com/users/doctorpangloss",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6475/timeline | null | reopened | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6472/comments | https://api.github.com/repos/huggingface/datasets/issues/6472/events | https://github.com/huggingface/datasets/issues/6472 | 2,026,493,439 | I_kwDODunzps54ydX_ | 6,472 | CI quality is broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/7100835633/job/19327734359
```
Would reformat: src/datasets/features/image.py
1 file would be reformatted, 253 files left unchanged
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6472/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6472/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6470/comments | https://api.github.com/repos/huggingface/datasets/issues/6470/events | https://github.com/huggingface/datasets/issues/6470 | 2,024,724,319 | I_kwDODunzps54rtdf | 6,470 | If an image in a dataset is corrupted, we get unescapable error | {
"avatar_url": "https://avatars.githubusercontent.com/u/14337872?v=4",
"events_url": "https://api.github.com/users/chigozienri/events{/privacy}",
"followers_url": "https://api.github.com/users/chigozienri/followers",
"following_url": "https://api.github.com/users/chigozienri/following{/other_user}",
"gists_url": "https://api.github.com/users/chigozienri/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chigozienri",
"id": 14337872,
"login": "chigozienri",
"node_id": "MDQ6VXNlcjE0MzM3ODcy",
"organizations_url": "https://api.github.com/users/chigozienri/orgs",
"received_events_url": "https://api.github.com/users/chigozienri/received_events",
"repos_url": "https://api.github.com/users/chigozienri/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chigozienri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chigozienri/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chigozienri",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,701 | null | NONE | null | ### Describe the bug
Example discussed in detail here: https://huggingface.co/datasets/sasha/birdsnap/discussions/1
### Steps to reproduce the bug
```
from datasets import load_dataset, VerificationMode
dataset = load_dataset(
'sasha/birdsnap',
split="train",
verification_mode=VerificationMode.ALL_CHECKS,
streaming=True # I recommend using streaming=True when reproducing, as this dataset is large
)
for idx, row in enumerate(dataset):
# Iterating to 9287 took 7 minutes for me
# If you already have the data locally cached and set streaming=False, you see the same error just by with dataset[9287]
pass
# error at 9287 OSError: image file is truncated (45 bytes not processed)
# note that we can't avoid the error using a try/except + continue inside the loop
```
### Expected behavior
Able to escape errors in casting to Image() without killing the whole loop
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6470/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6467/comments | https://api.github.com/repos/huggingface/datasets/issues/6467/events | https://github.com/huggingface/datasets/issues/6467 | 2,023,174,233 | I_kwDODunzps54lzBZ | 6,467 | New version release request | {
"avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4",
"events_url": "https://api.github.com/users/LZHgrla/events{/privacy}",
"followers_url": "https://api.github.com/users/LZHgrla/followers",
"following_url": "https://api.github.com/users/LZHgrla/following{/other_user}",
"gists_url": "https://api.github.com/users/LZHgrla/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LZHgrla",
"id": 36994684,
"login": "LZHgrla",
"node_id": "MDQ6VXNlcjM2OTk0Njg0",
"organizations_url": "https://api.github.com/users/LZHgrla/orgs",
"received_events_url": "https://api.github.com/users/LZHgrla/received_events",
"repos_url": "https://api.github.com/users/LZHgrla/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LZHgrla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LZHgrla/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LZHgrla",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"We will publish it soon (we usually do it in intervals of 1-2 months, so probably next week)",
"Thanks!"
] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Feature request
Hi!
I am using `datasets` in library `xtuner` and am highly interested in the features introduced since v2.15.0.
To avoid installation from source in our pypi wheels, we are eagerly waiting for the new release. So, Does your team have a new release plan for v2.15.1 and could you please share it with us?
Thanks very much!
### Motivation
.
### Your contribution
. | {
"avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4",
"events_url": "https://api.github.com/users/LZHgrla/events{/privacy}",
"followers_url": "https://api.github.com/users/LZHgrla/followers",
"following_url": "https://api.github.com/users/LZHgrla/following{/other_user}",
"gists_url": "https://api.github.com/users/LZHgrla/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LZHgrla",
"id": 36994684,
"login": "LZHgrla",
"node_id": "MDQ6VXNlcjM2OTk0Njg0",
"organizations_url": "https://api.github.com/users/LZHgrla/orgs",
"received_events_url": "https://api.github.com/users/LZHgrla/received_events",
"repos_url": "https://api.github.com/users/LZHgrla/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LZHgrla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LZHgrla/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LZHgrla",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6467/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6467/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6466/comments | https://api.github.com/repos/huggingface/datasets/issues/6466/events | https://github.com/huggingface/datasets/issues/6466 | 2,022,601,176 | I_kwDODunzps54jnHY | 6,466 | Can't align optional features of struct | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Friendly bump, I would be happy to work on this issue once I get the go-ahead from the dev team. ",
"Thanks for the PR!\r\n\r\nI'm struggling with this as well and would love to see this PR merged. My case is slightly different, with keys completely missing rather than being `None`:\r\n\r\n```\r\nds = Dataset.from_dict({'speaker': [{'name': 'Ben'}]})\r\nds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]})\r\nprint(concatenate_datasets([ds, ds2]).features)\r\nprint(concatenate_datasets([ds, ds2]).to_dict())\r\n```\r\n\r\nI would expect this to work as well because other Dataset functions already handle this situation well. For example, this works just as expected:\r\n\r\n```\r\nds = Dataset.from_dict({'n': [1,2]})\r\nds_mapped = ds.map(lambda x: {\r\n 'speaker': {'name': 'Ben'} if x['n'] == 1 else {'name': 'Fred', 'email': '[email protected]'}\r\n})\r\nprint(ds_mapped)\r\n```",
"@vova-cyberhaven can you check with the new release if it fixes your issue? "
] | 1970-01-01T00:00:00.000001 | 1,708 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
Hello!
I'm currently experiencing an issue where I can't concatenate datasets if an inner field of a Feature is Optional.
I have a column named `speaker`, and this holds some information about a speaker.
```python
@dataclass
class Speaker:
name: str
email: Optional[str]
```
If I have two datasets, one happens to have `email` always None, then I get `The features can't be aligned because the key email of features`
### Steps to reproduce the bug
You can run the following script:
```python
ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]})
ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]})
concatenate_datasets([ds, ds2])
>>>The features can't be aligned because the key speaker of features {'speaker': {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)}} has unexpected type - {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)} (expected either {'email': Value(dtype='null', id=None), 'name': Value(dtype='string', id=None)} or Value("null").
```
### Expected behavior
I think this should work; if two top-level columns were in the same situation it would properly cast to `string`.
```python
ds = Dataset.from_dict({'email': [None, None]})
ds2 = Dataset.from_dict({'email': ['[email protected]', '[email protected]']})
concatenate_datasets([ds, ds2])
>>> # Works!
```
### Environment info
- `datasets` version: 2.15.1.dev0
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.9.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
- `fsspec` version: 2023.6.0
I would be happy to fix this issue. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6466/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6466/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6465/comments | https://api.github.com/repos/huggingface/datasets/issues/6465/events | https://github.com/huggingface/datasets/issues/6465 | 2,022,212,468 | I_kwDODunzps54iIN0 | 6,465 | `load_dataset` uses out-of-date cache instead of re-downloading a changed dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/3391297?v=4",
"events_url": "https://api.github.com/users/mnoukhov/events{/privacy}",
"followers_url": "https://api.github.com/users/mnoukhov/followers",
"following_url": "https://api.github.com/users/mnoukhov/following{/other_user}",
"gists_url": "https://api.github.com/users/mnoukhov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mnoukhov",
"id": 3391297,
"login": "mnoukhov",
"node_id": "MDQ6VXNlcjMzOTEyOTc=",
"organizations_url": "https://api.github.com/users/mnoukhov/orgs",
"received_events_url": "https://api.github.com/users/mnoukhov/received_events",
"repos_url": "https://api.github.com/users/mnoukhov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mnoukhov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnoukhov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mnoukhov",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi, thanks for reporting! https://github.com/huggingface/datasets/pull/6459 will fix this.",
"I meet a similar problem as using loading scripts. I have to set download_mode='force_redownload' to load the latest script."
] | 1970-01-01T00:00:00.000001 | 1,724 | null | NONE | null | ### Describe the bug
When a dataset is updated on the hub, using `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset
### Steps to reproduce the bug
Here is a minimal example script to
1. create an initial dataset and upload
2. download it so it is stored in cache
3. change the dataset and re-upload
4. redownload
```python
import time
from datasets import Dataset, DatasetDict, DownloadMode, load_dataset
username = "YOUR_USERNAME_HERE"
initial = Dataset.from_dict({"foo": [1, 2, 3]})
print(f"Intial {initial['foo']}")
initial_ds = DatasetDict({"train": initial})
initial_ds.push_to_hub("test")
time.sleep(1)
download = load_dataset(f"{username}/test", split="train")
changed = download.map(lambda x: {"foo": x["foo"] + 1})
print(f"Changed {changed['foo']}")
changed.push_to_hub("test")
time.sleep(1)
download_again = load_dataset(f"{username}/test", split="train")
print(f"Download Changed {download_again['foo']}")
# >>> gives the out-dated [1,2,3] when it should be changed [2,3,4]
```
The redownloaded dataset should be the changed dataset but it is actually the cached, initial dataset. Force-redownloading gives the correct dataset
```python
download_again_force = load_dataset(f"{username}/test", split="train", download_mode=DownloadMode.FORCE_REDOWNLOAD)
print(f"Force Download Changed {download_again_force['foo']}")
# >>> [2,3,4]
```
### Expected behavior
I assumed there should be some sort of hashing that should check for changes in the dataset and re-download if the hashes don't match
### Environment info
- `datasets` version: 2.15.0 │
- Platform: Linux-5.15.0-1028-nvidia-x86_64-with-glibc2.17 │
- Python version: 3.8.17 │
- `huggingface_hub` version: 0.19.4 │
- PyArrow version: 13.0.0 │
- Pandas version: 2.0.3 │
- `fsspec` version: 2023.6.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6465/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6465/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6460/comments | https://api.github.com/repos/huggingface/datasets/issues/6460/events | https://github.com/huggingface/datasets/issues/6460 | 2,017,433,899 | I_kwDODunzps54P5kr | 6,460 | jsonlines files don't load with `load_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/41377532?v=4",
"events_url": "https://api.github.com/users/serenalotreck/events{/privacy}",
"followers_url": "https://api.github.com/users/serenalotreck/followers",
"following_url": "https://api.github.com/users/serenalotreck/following{/other_user}",
"gists_url": "https://api.github.com/users/serenalotreck/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/serenalotreck",
"id": 41377532,
"login": "serenalotreck",
"node_id": "MDQ6VXNlcjQxMzc3NTMy",
"organizations_url": "https://api.github.com/users/serenalotreck/orgs",
"received_events_url": "https://api.github.com/users/serenalotreck/received_events",
"repos_url": "https://api.github.com/users/serenalotreck/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/serenalotreck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/serenalotreck/subscriptions",
"type": "User",
"url": "https://api.github.com/users/serenalotreck",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi @serenalotreck,\r\n\r\nWe use Apache Arrow `pyarrow` to read jsonlines and it throws an error when trying to load your data files:\r\n```python\r\nIn [1]: import pyarrow as pa\r\n\r\nIn [2]: data = pa.json.read_json(\"train.jsonl\")\r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-14-e9b104832528> in <module>\r\n----> 1 data = pa.json.read_json(\"train.jsonl\")\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json()\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0\r\n```\r\n\r\nI think it has to do with the data structure of the fields \"ner\" (and also \"relations\"):\r\n```json\r\n\"ner\": [\r\n [\r\n [0, 4, \"Biochemical_process\"], \r\n [15, 16, \"Protein\"]\r\n ], \r\n```\r\nArrow interprets this data structure as an array, an arrays contain just a single data type: \r\n- when reading sequentially, it finds first the `0` and infers that the data is of type `number`;\r\n- when it finds the string `\"Biochemical_process\"`, it cannot cast it to number and throws the `ArrowInvalid` error\r\n\r\nOne solution could be to change the data structure of your data files. Any other ideas, @huggingface/datasets ?",
"Hi @albertvillanova, \r\n\r\nThanks for the explanation! To the best of my knowledge, arrays in a json [can contain multiple data types](https://docs.actian.com/ingres/11.2/index.html#page/SQLRef/Data_Types.htm), and I'm able to read these files with the `jsonlines` package. Is the requirement for arrays to only have one data type specific to PyArrow?\r\n\r\nI'd prefer to keep the data structure as is, since it's a specific input requirement for the models this data was generated for. Any thoughts on how to enable the use of `load_dataset` with this dataset would be great!",
"Hi again @serenalotreck,\r\n\r\nYes, it is specific to PyArrow: as far as I know, Arrow does not support arrays with multiple data types.\r\n\r\nAs this is related specifically to your dataset structure (and not the `datasets` library), I have created a dedicated issue in your dataset page: https://huggingface.co/datasets/slotreck/pickle/discussions/1\r\n\r\nLet's continue the discussion there! :hugs: ",
"> Hi again @serenalotreck,\r\n> \r\n> Yes, it is specific to PyArrow: as far as I know, Arrow does not support arrays with multiple data types.\r\n> \r\n> As this is related specifically to your dataset structure (and not the `datasets` library), I have created a dedicated issue in your dataset page: https://huggingface.co/datasets/slotreck/pickle/discussions/1\r\n> \r\n> Let's continue the discussion there! 🤗\r\n\r\nThis is really terrible. My JSONL format data is very simple, but I still report this error\r\n\r\nThe error message is as follows:\r\n File \"pyarrow/_json.pyx\", line 290, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: JSON parse error: Column(/inputs) changed from string to number in row 208\r\n"
] | 1970-01-01T00:00:00.000001 | 1,703 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
While [the docs](https://huggingface.co/docs/datasets/upload_dataset#upload-dataset) seem to state that `.jsonl` is a supported extension for `datasets`, loading the dataset results in a `JSONDecodeError`.
### Steps to reproduce the bug
Code:
```
from datasets import load_dataset
dset = load_dataset('slotreck/pickle')
```
Traceback:
```
Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 925/925 [00:00<00:00, 3.11MB/s]
Downloading and preparing dataset json/slotreck--pickle to /mnt/home/lotrecks/.cache/huggingface/datasets/slotreck___json/slotreck--pickle-0c311f36ed032b04/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96...
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 589k/589k [00:00<00:00, 18.9MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 104k/104k [00:00<00:00, 4.61MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 170k/170k [00:00<00:00, 7.71MB/s]
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.77it/s]
Extracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 523.92it/s]
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file '/mnt/home/lotrecks/.cache/huggingface/datasets/downloads/6ec07bb2f279c9377036af6948532513fa8f48244c672d2644a2d7018ee5c9cb' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0
Traceback (most recent call last):
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 144, in _generate_tables
dataset = json.load(f)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 296, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 3086)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1879, in _prepare_split_single
for _, table in generator:
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 147, in _generate_tables
raise e
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
File "pyarrow/_json.pyx", line 259, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/load.py", line 1815, in load_dataset
storage_options=storage_options,
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 913, in download_and_prepare
**download_and_prepare_kwargs,
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1768, in _prepare_split
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1912, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
For the dataset to be loaded without error.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 8.0.0
- Pandas version: 1.3.5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6460/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6460/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6457/comments | https://api.github.com/repos/huggingface/datasets/issues/6457/events | https://github.com/huggingface/datasets/issues/6457 | 2,015,650,563 | I_kwDODunzps54JGMD | 6,457 | `TypeError`: huggingface_hub.hf_file_system.HfFileSystem.find() got multiple values for keyword argument 'maxdepth' | {
"avatar_url": "https://avatars.githubusercontent.com/u/79070834?v=4",
"events_url": "https://api.github.com/users/wasertech/events{/privacy}",
"followers_url": "https://api.github.com/users/wasertech/followers",
"following_url": "https://api.github.com/users/wasertech/following{/other_user}",
"gists_url": "https://api.github.com/users/wasertech/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wasertech",
"id": 79070834,
"login": "wasertech",
"node_id": "MDQ6VXNlcjc5MDcwODM0",
"organizations_url": "https://api.github.com/users/wasertech/orgs",
"received_events_url": "https://api.github.com/users/wasertech/received_events",
"repos_url": "https://api.github.com/users/wasertech/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wasertech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasertech/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wasertech",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Updating `fsspec>=2023.10.0` did solve the issue.",
"May be it should be pinned somewhere?",
"> Maybe this should go in datasets directly... anyways you can easily fix this error by updating datasets>=2.15.1.dev0.\r\n\r\n@lhoestq @mariosasko for what I understand this is a bug fixed in `datasets` already, right? No need to do anything in `huggingface_hub`?",
"I've opened a PR with a fix in `huggingface_hub`: https://github.com/huggingface/huggingface_hub/pull/1875",
"Thanks! PR is merged and will be shipped in next release of `huggingface_hub`."
] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Steps to reproduce the bug
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Expected behavior
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Environment info
Please see https://github.com/huggingface/huggingface_hub/issues/1872 | {
"avatar_url": "https://avatars.githubusercontent.com/u/79070834?v=4",
"events_url": "https://api.github.com/users/wasertech/events{/privacy}",
"followers_url": "https://api.github.com/users/wasertech/followers",
"following_url": "https://api.github.com/users/wasertech/following{/other_user}",
"gists_url": "https://api.github.com/users/wasertech/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wasertech",
"id": 79070834,
"login": "wasertech",
"node_id": "MDQ6VXNlcjc5MDcwODM0",
"organizations_url": "https://api.github.com/users/wasertech/orgs",
"received_events_url": "https://api.github.com/users/wasertech/received_events",
"repos_url": "https://api.github.com/users/wasertech/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wasertech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasertech/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wasertech",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6457/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6457/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6451/comments | https://api.github.com/repos/huggingface/datasets/issues/6451/events | https://github.com/huggingface/datasets/issues/6451 | 2,010,693,912 | I_kwDODunzps532MEY | 6,451 | Unable to read "marsyas/gtzan" data | {
"avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4",
"events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}",
"followers_url": "https://api.github.com/users/gerald-wrona/followers",
"following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}",
"gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gerald-wrona",
"id": 32300890,
"login": "gerald-wrona",
"node_id": "MDQ6VXNlcjMyMzAwODkw",
"organizations_url": "https://api.github.com/users/gerald-wrona/orgs",
"received_events_url": "https://api.github.com/users/gerald-wrona/received_events",
"repos_url": "https://api.github.com/users/gerald-wrona/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gerald-wrona",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! We've merged a [PR](https://huggingface.co/datasets/marsyas/gtzan/discussions/1) that fixes the script's path logic on Windows.",
"I have transferred the discussion to the corresponding dataset: https://huggingface.co/datasets/marsyas/gtzan/discussions/2\r\n\r\nLet's continue there.",
"@mariosasko @albertvillanova \r\n\r\nThank you both very much for the speedy resolution :)"
] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | NONE | null | Hi, this is my code and the error:
```
from datasets import load_dataset
gtzan = load_dataset("marsyas/gtzan", "all")
```
[error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt)
[audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt)
Python 3.11.5
Jupyter Notebook 6.5.4
Windows 10
I'm able to download and work with other datasets, but not this one. For example, both these below work fine:
```
from datasets import load_dataset
dataset = load_dataset("facebook/voxpopuli", "pl", split="train", streaming=True)
minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
Thanks for your help
https://huggingface.co/datasets/marsyas/gtzan/tree/main | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6451/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6451/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6450/comments | https://api.github.com/repos/huggingface/datasets/issues/6450/events | https://github.com/huggingface/datasets/issues/6450 | 2,009,491,386 | I_kwDODunzps53xme6 | 6,450 | Support multiple image/audio columns in ImageFolder/AudioFolder | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"A duplicate of https://github.com/huggingface/datasets/issues/5760"
] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | COLLABORATOR | null | ### Feature request
Have a metadata.csv file with multiple columns that point to relative image or audio files.
### Motivation
Currently, ImageFolder allows one column, called `file_name`, pointing to relative image files. On the same model, AudioFolder allows one column, called `file_name`, pointing to relative audio files.
But it's not possible to have two image columns, or to have two audio column, or to have one audio column and one image column.
### Your contribution
no specific contribution | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6450/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6450/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6447/comments | https://api.github.com/repos/huggingface/datasets/issues/6447/events | https://github.com/huggingface/datasets/issues/6447 | 2,008,195,298 | I_kwDODunzps53sqDi | 6,447 | Support one dataset loader per config when using YAML | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,700 | null | COLLABORATOR | null | ### Feature request
See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1
I would like to use CSV loader for the "csv" config, JSONL loader for the "jsonl" config, etc.
### Motivation
It would be more flexible for the users
### Your contribution
No specific contribution | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6447/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6447/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6446/comments | https://api.github.com/repos/huggingface/datasets/issues/6446/events | https://github.com/huggingface/datasets/issues/6446 | 2,007,092,708 | I_kwDODunzps53oc3k | 6,446 | Speech Commands v2 dataset doesn't match AST-v2 config | {
"avatar_url": "https://avatars.githubusercontent.com/u/18024303?v=4",
"events_url": "https://api.github.com/users/vymao/events{/privacy}",
"followers_url": "https://api.github.com/users/vymao/followers",
"following_url": "https://api.github.com/users/vymao/following{/other_user}",
"gists_url": "https://api.github.com/users/vymao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vymao",
"id": 18024303,
"login": "vymao",
"node_id": "MDQ6VXNlcjE4MDI0MzAz",
"organizations_url": "https://api.github.com/users/vymao/orgs",
"received_events_url": "https://api.github.com/users/vymao/received_events",
"repos_url": "https://api.github.com/users/vymao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vymao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vymao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vymao",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"You can use `.align_labels_with_mapping` on the dataset to align the labels with the model config.\r\n\r\nRegarding the number of labels, only the special `_silence_` label corresponding to noise is missing, which is consistent with the model paper (reports training on 35 labels). You can run a `.filter` to drop it.\r\n\r\nPS: You should create a discussion on a model/dataset repo (on the Hub) for these kinds of questions",
"Thanks, will keep that in mind. But I tried running `dataset_aligned = dataset.align_labels_with_mapping(model.config.id2label, 'label')`, and received this error: \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/victor/anaconda3/envs/transformers-v2/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 5928, in align_labels_with_mapping\r\n label2id = {k.lower(): v for k, v in label2id.items()}\r\n File \"/Users/victor/anaconda3/envs/transformers-v2/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 5928, in <dictcomp>\r\n label2id = {k.lower(): v for k, v in label2id.items()}\r\nAttributeError: 'int' object has no attribute 'lower'\r\n```\r\nMy guess is that the dataset `label` column is purely an int ID, and I'm not sure there's a way to identify which class label the ID belongs to in the dataset easily.",
"Replacing `model.config.id2label` with `model.config.label2id` should fix the issue.\r\n\r\nSo, the full code to align the labels with the model config is as follows:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoFeatureExtractor, AutoModelForAudioClassification\r\n\r\n# extractor = AutoFeatureExtractor.from_pretrained(\"MIT/ast-finetuned-speech-commands-v2\")\r\nmodel = AutoModelForAudioClassification.from_pretrained(\"MIT/ast-finetuned-speech-commands-v2\")\r\n\r\nds = load_dataset(\"speech_commands\", \"v0.02\")\r\nds = ds.filter(lambda label: label != ds[\"train\"].features[\"label\"].str2int(\"_silence_\"), input_columns=\"label\")\r\nds = ds.align_labels_with_mapping(model.config.label2id, \"label\")\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
[According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover, the class labels themselves don't match between the model config and the dataset. It is difficult to reproduce the data used to fine tune `MIT/ast-finetuned-speech-commands-v2`.
### Steps to reproduce the bug
```
>>> model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-speech-commands-v2")
>>> model.config.id2label
{0: 'backward', 1: 'follow', 2: 'five', 3: 'bed', 4: 'zero', 5: 'on', 6: 'learn', 7: 'two', 8: 'house', 9: 'tree', 10: 'dog', 11: 'stop', 12: 'seven', 13: 'eight', 14: 'down', 15: 'six', 16: 'forward', 17: 'cat', 18: 'right', 19: 'visual', 20: 'four', 21: 'wow', 22: 'no', 23: 'nine', 24: 'off', 25: 'three', 26: 'left', 27: 'marvin', 28: 'yes', 29: 'up', 30: 'sheila', 31: 'happy', 32: 'bird', 33: 'go', 34: 'one'}
>>> dataset = load_dataset("speech_commands", "v0.02", split="test")
>>> torch.unique(torch.Tensor(dataset['label']))
tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13.,
14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27.,
28., 29., 30., 31., 32., 33., 34., 35.])
```
If you try to explore the [dataset itself](https://huggingface.co/datasets/speech_commands/viewer/v0.02/test), you can see that the id to label does not match what is provided by `model.config.id2label`.
### Expected behavior
The labels should match completely and there should be the same number of label classes between the model config and the dataset itself.
### Environment info
datasets = 2.14.6, transformers = 4.33.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6446/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6446/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6443/comments | https://api.github.com/repos/huggingface/datasets/issues/6443/events | https://github.com/huggingface/datasets/issues/6443 | 2,006,568,368 | I_kwDODunzps53mc2w | 6,443 | Trouble loading files defined in YAML explicitly | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"There is a typo in one of the file names - `data/edf.csv` should be renamed to `data/def.csv` 🙂. ",
"wow, I reviewed it twice to avoid being ashamed like that, but... I didn't notice the typo.\r\n\r\n---\r\n\r\nBesides this: do you think we would be able to improve the error message to make this clearer?"
] | 1970-01-01T00:00:00.000001 | 1,700 | null | COLLABORATOR | null | Look at https://huggingface.co/datasets/severo/doc-yaml-2
It's a reproduction of the example given in the docs at https://huggingface.co/docs/hub/datasets-manual-configuration
```
You can select multiple files per split using a list of paths:
my_dataset_repository/
├── README.md
├── data/
│ ├── abc.csv
│ └── def.csv
└── holdout/
└── ghi.csv
---
configs:
- config_name: default
data_files:
- split: train
path:
- "data/abc.csv"
- "data/def.csv"
- split: test
path: "holdout/ghi.csv"
---
```
It raises the following error:
```
Error code: ConfigNamesError
Exception: FileNotFoundError
Message: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1507, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6443/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6443/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6442/comments | https://api.github.com/repos/huggingface/datasets/issues/6442/events | https://github.com/huggingface/datasets/issues/6442 | 2,006,086,907 | I_kwDODunzps53knT7 | 6,442 | Trouble loading image folder with additional features - metadata file ignored | {
"avatar_url": "https://avatars.githubusercontent.com/u/57615435?v=4",
"events_url": "https://api.github.com/users/linoytsaban/events{/privacy}",
"followers_url": "https://api.github.com/users/linoytsaban/followers",
"following_url": "https://api.github.com/users/linoytsaban/following{/other_user}",
"gists_url": "https://api.github.com/users/linoytsaban/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/linoytsaban",
"id": 57615435,
"login": "linoytsaban",
"node_id": "MDQ6VXNlcjU3NjE1NDM1",
"organizations_url": "https://api.github.com/users/linoytsaban/orgs",
"received_events_url": "https://api.github.com/users/linoytsaban/received_events",
"repos_url": "https://api.github.com/users/linoytsaban/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/linoytsaban/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/linoytsaban/subscriptions",
"type": "User",
"url": "https://api.github.com/users/linoytsaban",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I reproduced too:\r\n- root: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-3)\r\n- data/ dir: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-4)\r\n- train/ dir: works (https://huggingface.co/datasets/severo/doc-image-5)"
] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Loading image folder with a caption column using `load_dataset(<image_folder_path>)` doesn't load the captions.
When loading a local image folder with captions using `datasets==2.13.0`
```
from datasets import load_dataset
data = load_dataset(<image_folder_path>)
data.column_names
```
yields
`{'train': ['image', 'prompt']}`
but when using `datasets==2.15.0`
yeilds
`{'train': ['image']}`
Putting the images and `metadata.jsonl` file into a nested `train` folder **or** loading with `load_dataset("imagefolder", data_dir=<image_folder_path>)` solves the issue and
yields
`{'train': ['image', 'prompt']}`
### Steps to reproduce the bug
1. create a folder `<image_folder_path>` that contains images and a metadata file with additional features- e.g. "prompt"
2. run:
```
from datasets import load_dataset
data = load_dataset("<image_folder_path>")
data.column_names
```
### Expected behavior
`{'train': ['image', 'prompt']}`
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6442/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6442/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6441/comments | https://api.github.com/repos/huggingface/datasets/issues/6441/events | https://github.com/huggingface/datasets/issues/6441 | 2,004,985,857 | I_kwDODunzps53gagB | 6,441 | Trouble Loading a Gated Dataset For User with Granted Permission | {
"avatar_url": "https://avatars.githubusercontent.com/u/124715309?v=4",
"events_url": "https://api.github.com/users/e-trop/events{/privacy}",
"followers_url": "https://api.github.com/users/e-trop/followers",
"following_url": "https://api.github.com/users/e-trop/following{/other_user}",
"gists_url": "https://api.github.com/users/e-trop/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/e-trop",
"id": 124715309,
"login": "e-trop",
"node_id": "U_kgDOB28BLQ",
"organizations_url": "https://api.github.com/users/e-trop/orgs",
"received_events_url": "https://api.github.com/users/e-trop/received_events",
"repos_url": "https://api.github.com/users/e-trop/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/e-trop/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-trop/subscriptions",
"type": "User",
"url": "https://api.github.com/users/e-trop",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"> Also when they try to click the url link for the dataset they get a 404 error.\r\n\r\nThis seems to be a Hub error then (cc @SBrandeis)",
"Could you report this to https://discuss.huggingface.co/c/hub/23, providing the URL of the dataset, or at least if the dataset is public or private?",
"Thanks for the reply! I've created an issue on the hub's board here: https://discuss.huggingface.co/t/trouble-loading-a-gated-dataset-for-user-with-granted-permission/65565"
] | 1970-01-01T00:00:00.000001 | 1,702 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I have granted permissions to several users to access a gated huggingface dataset. The users accepted the invite and when trying to load the dataset using their access token they get
`FileNotFoundError: Couldn't find a dataset script at .....` . Also when they try to click the url link for the dataset they get a 404 error.
### Steps to reproduce the bug
1. Grant access to gated dataset for specific users
2. Users accept invitation
3. Users login to hugging face hub using cli login
4. Users run load_dataset
### Expected behavior
Dataset is loaded normally for users who were granted access to the gated dataset.
### Environment info
datasets==2.15.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6441/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6441/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6440/comments | https://api.github.com/repos/huggingface/datasets/issues/6440/events | https://github.com/huggingface/datasets/issues/6440 | 2,004,509,301 | I_kwDODunzps53emJ1 | 6,440 | `.map` not hashing under python 3.9 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4",
"events_url": "https://api.github.com/users/changyeli/events{/privacy}",
"followers_url": "https://api.github.com/users/changyeli/followers",
"following_url": "https://api.github.com/users/changyeli/following{/other_user}",
"gists_url": "https://api.github.com/users/changyeli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/changyeli",
"id": 9058204,
"login": "changyeli",
"node_id": "MDQ6VXNlcjkwNTgyMDQ=",
"organizations_url": "https://api.github.com/users/changyeli/orgs",
"received_events_url": "https://api.github.com/users/changyeli/received_events",
"repos_url": "https://api.github.com/users/changyeli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/changyeli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/changyeli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/changyeli",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Tried to upgrade Python to 3.11 - still get this message. A partial solution is to NOT use `num_proc` at all. It will be considerably longer to finish the job.",
"Hi! The `model = torch.compile(model)` line is problematic for our hashing logic. We would have to merge https://github.com/huggingface/datasets/pull/5867 to support hashing `torch.compile`-ed models/functions. \r\n\r\nI've started refactoring the hashing logic and plan to incorporate a fix for `torch.compile` as part of it, so this should be addressed soon (probably this or next week). "
] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The `.map` function cannot hash under python 3.9. Tried to use [the solution here](https://github.com/huggingface/datasets/issues/4521#issuecomment-1205166653), but still get the same message:
`Parameter 'function'=<function map_to_pred at 0x7fa0b49ead30> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`
### Steps to reproduce the bug
```python
def map_to_pred(batch):
"""
Perform inference on an audio batch
Parameters:
batch (dict): A dictionary containing audio data and other related information.
Returns:
dict: The input batch dictionary with added prediction and transcription fields.
"""
audio = batch['audio']
input_features = processor(
audio['array'], sampling_rate=audio['sampling_rate'], return_tensors="pt").input_features
input_features = input_features.to('cuda')
with torch.no_grad():
predicted_ids = model.generate(input_features)
preds = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
batch['prediction'] = processor.tokenizer._normalize(preds)
batch["transcription"] = processor.tokenizer._normalize(batch['transcription'])
return batch
MODEL_CARD = "openai/whisper-small"
MODEL_NAME = MODEL_CARD.rsplit('/', maxsplit=1)[-1]
model = WhisperForConditionalGeneration.from_pretrained(MODEL_CARD)
processor = AutoProcessor.from_pretrained(
MODEL_CARD, language="english", task="transcribe")
model = torch.compile(model)
dt = load_dataset("audiofolder", data_dir=config['DATA']['dataset'], split="test")
dt = dt.cast_column("audio", Audio(sampling_rate=16000))
result = coraal_dt.map(map_to_pred, num_proc=16)
```
### Expected behavior
Hashed and cached dataset starts inferencing
### Environment info
- `transformers` version: 4.35.0
- Platform: Linux-5.14.0-284.30.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6440/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6440/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6439/comments | https://api.github.com/repos/huggingface/datasets/issues/6439/events | https://github.com/huggingface/datasets/issues/6439 | 2,002,916,514 | I_kwDODunzps53YhSi | 6,439 | Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loding | {
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AntreasAntoniou",
"id": 10792502,
"login": "AntreasAntoniou",
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AntreasAntoniou",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,700 | null | NONE | null | ### Describe the bug
I am working with a dataset I am trying to publish.
The path is Antreas/TALI.
It's a fairly large dataset, and contains images, video, audio and text.
I have been having multiple problems when the dataset is being downloaded using the load_dataset function -- even with 64 workers taking more than 7 days to process.
With snapshot download it takes 12 hours, and that includes the dataset preparation done using load_dataset and passing the dataset parquet file paths.
Find the script I am using below:
```python
import multiprocessing as mp
import pathlib
from typing import Optional
import datasets
from rich import print
from tqdm import tqdm
def download_dataset_via_hub(
dataset_name: str,
dataset_download_path: pathlib.Path,
num_download_workers: int = mp.cpu_count(),
):
import huggingface_hub as hf_hub
download_folder = hf_hub.snapshot_download(
repo_id=dataset_name,
repo_type="dataset",
cache_dir=dataset_download_path,
resume_download=True,
max_workers=num_download_workers,
ignore_patterns=[],
)
return pathlib.Path(download_folder) / "data"
def load_dataset_via_hub(
dataset_download_path: pathlib.Path,
num_download_workers: int = mp.cpu_count(),
dataset_name: Optional[str] = None,
):
from dataclasses import dataclass, field
from datasets import ClassLabel, Features, Image, Sequence, Value
dataset_path = download_dataset_via_hub(
dataset_download_path=dataset_download_path,
num_download_workers=num_download_workers,
dataset_name=dataset_name,
)
# Building a list of file paths for validation set
train_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "train" in file.as_posix()
]
val_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "val" in file.as_posix()
]
test_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "test" in file.as_posix()
]
print(
f"Found {len(test_files)} files for testing set, {len(train_files)} for training set and {len(val_files)} for validation set"
)
data_files = {
"test": test_files,
"val": val_files,
"train": train_files,
}
features = Features(
{
"image": Image(
decode=True
), # Set `decode=True` if you want to decode the images, otherwise `decode=False`
"image_url": Value("string"),
"item_idx": Value("int64"),
"wit_features": Sequence(
{
"attribution_passes_lang_id": Value("bool"),
"caption_alt_text_description": Value("string"),
"caption_reference_description": Value("string"),
"caption_title_and_reference_description": Value("string"),
"context_page_description": Value("string"),
"context_section_description": Value("string"),
"hierarchical_section_title": Value("string"),
"is_main_image": Value("bool"),
"language": Value("string"),
"page_changed_recently": Value("bool"),
"page_title": Value("string"),
"page_url": Value("string"),
"section_title": Value("string"),
}
),
"wit_idx": Value("int64"),
"youtube_title_text": Value("string"),
"youtube_description_text": Value("string"),
"youtube_video_content": Value("binary"),
"youtube_video_starting_time": Value("string"),
"youtube_subtitle_text": Value("string"),
"youtube_video_size": Value("int64"),
"youtube_video_file_path": Value("string"),
}
)
dataset = datasets.load_dataset(
"parquet" if dataset_name is None else dataset_name,
data_files=data_files,
features=features,
num_proc=1,
cache_dir=dataset_download_path / "cache",
)
return dataset
if __name__ == "__main__":
dataset_cache = pathlib.Path("/disk/scratch_fast0/tali/")
dataset = load_dataset_via_hub(dataset_cache, dataset_name="Antreas/TALI")[
"test"
]
for sample in tqdm(dataset):
print(list(sample.keys()))
```
Also, streaming this dataset has been a very painfully slow process. Streaming the train set takes 15m to start, and streaming the test and val sets takes 3 hours to start!
### Steps to reproduce the bug
1. Run the code I provided to get a sense of how fast snapshot + manual is
2. Run datasets.load_dataset("Antreas/TALI") to get a sense of the speed of that OP.
3. You should now have an appreciation of how long these things take.
### Expected behavior
The load dataset function should be at least as fast as the huggingface snapshot download function in terms of downloading dataset files. Not 20 times slower.
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6439/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6439/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6438/comments | https://api.github.com/repos/huggingface/datasets/issues/6438/events | https://github.com/huggingface/datasets/issues/6438 | 2,002,032,804 | I_kwDODunzps53VJik | 6,438 | Support GeoParquet | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Thank you, @severo ! I would be more than happy to help in any way I can. I am not familiar with this repo's codebase, but I would be eager to contribute. :)\r\n\r\nFor the preview in Datasets Hub, I think it makes sense to just display the geospatial column as text. If there were a dataset loader, though, I think it should be able to support the geospatial components. Geopandas is probably the most user-friendly interface for that. I'm not sure if it's currently relevant in the context of geoparquet, but I think the pyogrio driver is faster than fiona.\r\n\r\nBut the whole gdal dependency thing can be a real pain. If anything, it would need to be an optional dependency. Maybe it would be best if the loader tries importing relevant geospatial libraries, and in the event of an ImportError, falls back to text for the geometry column.\r\n\r\nPlease let me know if I can be of assistance, and thanks again for creating this Issue. :)",
"Just hitting into this same issue too showing GeoParquet files in Datasets Viewer. I tried to implement a custom reader for GeoParquet in https://huggingface.co/datasets/weiji14/clay_vector_embeddings/discussions/1, but it seems like HuggingFace has disabled datasets with custom loading scripts from using the dataset viewer according to https://discuss.huggingface.co/t/dataset-repo-requires-arbitrary-python-code-execution/59346 :frowning_face: \r\n\r\n\r\n\r\nI'm thinking now if there's a way to simply map files with GeoParquet extensions (*.gpq, *.geoparquet, etc) to use the Parquet reader. Maybe we could allowlist these geoparquet file extensions at https://github.com/huggingface/datasets/blame/0caf91285116ec910f409e82cc6e1f4cff7496e3/src/datasets/packaged_modules/__init__.py#L30-L51? Having the table columns show up would be a quick win.\r\n\r\nLonger term though, it would certainly be nice if the WKB geometry columns could be displayed in a nicer form. Geopandas' [read_parquet](https://geopandas.org/en/v0.14.1/docs/reference/api/geopandas.read_parquet.html) function is supposedly faster than `pyogrio.read_dataframe` according to https://github.com/geopandas/geopandas/discussions/2724#discussioncomment-4606048, but there's also [`pyogrio.raw.read_arrow`](https://pyogrio.readthedocs.io/en/latest/api.html#pyogrio.raw.read_arrow) now that can read into a `pyarrow.Table` directly.",
"Update: It looks like renaming the GeoParquet file to have a file extension of `*.parquet` works (see https://huggingface.co/datasets/weiji14/clay_vector_embeddings). HuggingFace's default parquet reader is able to read the GeoParquet file, though the geometry column is of an unknown type:\r\n\r\n\r\n\r\nI've opened a quick PR at #6508 to allow files with a `*.geoparquet` or `*.gpq` extension to be read using the default Parquet reader. Let's see how that goes :smile:",
"@joshuasundance-swca, @weiji14, If I'm understanding this correctly, the code below wouldn't be recommended to due to dependency headaches? If that's the case, what solution would there be to see the geometry features for .gpq files in huggingfaceHub? \r\n\r\ncode for dataset_loader.py\r\n```\r\nimport geopandas as gpd\r\n# ... (other imports remain the same)\r\n\r\nclass ClayVectorEmbeddings(datasets.ArrowBasedBuilder):\r\n # ... (other parts of the class remain the same)\r\n\r\n def _info(self):\r\n # Read the GeoParquet file to get the schema for the 'geometry' feature\r\n gdf = gpd.read_file(\"path/to/your/geoparquet/file.gpq\") # Replace with your file path\r\n geometry_schema = str(gdf.geometry.dtype)\r\n\r\n return datasets.DatasetInfo(\r\n # This is the description that will appear on the datasets page.\r\n description=\"Clay Vector Embeddings in GeoParquet format.\",\r\n # This defines the different columns of the dataset and their types\r\n features=datasets.Features(\r\n {\r\n \"source_url\": datasets.Value(dtype=\"string\"),\r\n \"date\": datasets.Value(dtype=\"date32\"),\r\n \"embeddings\": datasets.Value(\"string\"),\r\n \"geometry\": datasets.Value(dtype=geometry_schema), # Use the schema read by GeoPandas\r\n # ... (other features)\r\n }\r\n ),\r\n )\r\n\r\n# ... (rest of the script remains the same)\r\n\r\n```",
"Hi @mehrdad-es, I'm not sure if HuggingFace would be keen to add `geopandas` to HuggingFace Hub (maybe a question for @severo?). Having a geometry viewer would be an even bigger task, and if you're thinking of a map-viewer, it might involve some redesign of the website UI. Some of my colleagues are working on streamlining GeoParquet visualization from cloud-hosted instances like HuggingFace (see e.g. https://github.com/developmentseed/lonboard/issues/314), and we could definitely come up with something if there's interest.",
"I've created https://github.com/huggingface/datasets-server/issues/2416 to discuss the possibility of supporting (vectorial) geospatial columns in the dataset viewer, or in the converted parquet files.\r\n\r\nAt the same time, it would be super interesting to see what is already possible to do with a Hugging Face dataset that hosts geospatial data. \r\n\r\n> Some of my colleagues are working on streamlining GeoParquet visualization from cloud-hosted instances like HuggingFace (see e.g. https://github.com/developmentseed/lonboard/issues/314), and we could definitely come up with something if there's interest.\r\n\r\nIt would be awesome to show this inside a [Space](https://huggingface.co/docs/hub/spaces)."
] | 1970-01-01T00:00:00.000001 | 1,707 | null | COLLABORATOR | null | ### Feature request
Support the GeoParquet format
### Motivation
GeoParquet (https://geoparquet.org/) is a common format for sharing vectorial geospatial data on the cloud, along with "traditional" data columns.
It would be nice to be able to load this format with datasets, and more generally, in the Datasets Hub (see https://huggingface.co/datasets/joshuasundance/govgis_nov2023-slim-spatial/discussions/1).
### Your contribution
I would be happy to help work on a PR (but I don't think I can do one on my own).
Also, we have to define what we want to support:
- load all the columns, but get the "geospatial" column in text-only mode for now
- or, fully support the spatial features, maybe taking inspiration from (or depending upon) https://geopandas.org/en/stable/index.html (which itself depends on https://fiona.readthedocs.io/en/stable/, which requires a local install of https://gdal.org/) | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6438/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6438/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6437/comments | https://api.github.com/repos/huggingface/datasets/issues/6437/events | https://github.com/huggingface/datasets/issues/6437 | 2,001,272,606 | I_kwDODunzps53SP8e | 6,437 | Problem in training iterable dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38107672?v=4",
"events_url": "https://api.github.com/users/21Timothy/events{/privacy}",
"followers_url": "https://api.github.com/users/21Timothy/followers",
"following_url": "https://api.github.com/users/21Timothy/following{/other_user}",
"gists_url": "https://api.github.com/users/21Timothy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/21Timothy",
"id": 38107672,
"login": "21Timothy",
"node_id": "MDQ6VXNlcjM4MTA3Njcy",
"organizations_url": "https://api.github.com/users/21Timothy/orgs",
"received_events_url": "https://api.github.com/users/21Timothy/received_events",
"repos_url": "https://api.github.com/users/21Timothy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/21Timothy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/21Timothy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/21Timothy",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Has anyone ever encountered this problem before?",
"`split_dataset_by_node` doesn't give the exact same number of examples to each node in the case of iterable datasets, though it tries to be as equal as possible. In particular if your dataset is sharded and you have a number of shards that is a factor of the number of workers, then the shards will be evenly distributed among workers. If the shards don't contain the same number of examples, then some workers might end up with more examples than others.\r\n\r\nHowever if you use a Dataset you'll end up with the same amount of data, because we know the length of the dataset we can split it exactly where we want. Also Dataset objects don't load the full dataset in memory; instead it memory maps Arrow files from disk.",
"> `split_dataset_by_node` doesn't give the exact same number of examples to each node in the case of iterable datasets, though it tries to be as equal as possible. In particular if your dataset is sharded and you have a number of shards that is a factor of the number of workers, then the shards will be evenly distributed among workers. If the shards don't contain the same number of examples, then some workers might end up with more examples than others.\r\n> \r\n> However if you use a Dataset you'll end up with the same amount of data, because we know the length of the dataset we can split it exactly where we want. Also Dataset objects don't load the full dataset in memory; instead it memory maps Arrow files from disk.\r\n\r\nThanks for your answer! I finally solve it by using the torch.distributed.algorithms.join.Join. I think maybe some rookie like me would face the same question the day after tomorrow hh.",
"Great ! Maybe it can be worth having an example that we can include in the docs for other people, did you need anything else than the Join context manager used with the model and optimizer ?",
"> Great ! Maybe it can be worth having an example that we can include in the docs for other people, did you need anything else than the Join context manager used with the model and optimizer ?\r\n\r\nI think it's none. I have tried barrier() to solve the problem but I failed. Maybe it's a tool for other situation."
] | 1970-01-01T00:00:00.000001 | 1,716 | null | NONE | null | ### Describe the bug
I am using PyTorch DDP (Distributed Data Parallel) to train my model. Since the data is too large to load into memory at once, I am using load_dataset to read the data as an iterable dataset. I have used datasets.distributed.split_dataset_by_node to distribute the dataset. However, I have noticed that this distribution results in different processes having different amounts of data to train on. As a result, when the earliest process finishes training and starts predicting on the test set, other processes are still training, causing the overall training speed to be very slow.
### Steps to reproduce the bug
```
def train(args, model, device, train_loader, optimizer, criterion, epoch, length):
model.train()
idx_length = 0
for batch_idx, data in enumerate(train_loader):
s_time = time.time()
X = data['X']
target = data['y'].reshape(-1, 28)
X, target = X.to(device), target.to(device)
optimizer.zero_grad()
output = model(X)
loss = criterion(output, target)
loss.backward()
optimizer.step()
idx_length += 1
if batch_idx % args.log_interval == 0:
# print('Train Epoch: {} Batch_idx: {} Process: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
# epoch, batch_idx, torch.distributed.get_rank(), batch_idx * len(X), length / torch.distributed.get_world_size(),
# 100. * batch_idx * len(
# X) * torch.distributed.get_world_size() / length, loss.item()))
print('Train Epoch: {} Batch_idx: {} Process: {} [{}/{} ({:.0f}%)]\t'.format(
epoch, batch_idx, torch.distributed.get_rank(), batch_idx * len(X), length / torch.distributed.get_world_size(),
100. * batch_idx * len(
X) * torch.distributed.get_world_size() / length))
if args.dry_run:
break
print('Process %s length: %s time: %s' % (torch.distributed.get_rank(), idx_length, datetime.datetime.now()))
train_iterable_dataset = load_dataset("parquet", data_files=data_files, split="train", streaming=True)
test_iterable_dataset = load_dataset("parquet", data_files=data_files, split="test", streaming=True)
train_iterable_dataset = train_iterable_dataset.map(process_fn)
test_iterable_dataset = test_iterable_dataset.map(process_fn)
train_iterable_dataset = train_iterable_dataset.map(scale)
test_iterable_dataset = test_iterable_dataset.map(scale)
train_iterable_dataset = datasets.distributed.split_dataset_by_node(train_iterable_dataset,
world_size=world_size, rank=local_rank).shuffle(seed=1234)
test_iterable_dataset = datasets.distributed.split_dataset_by_node(test_iterable_dataset,
world_size=world_size, rank=local_rank).shuffle(seed=1234)
print(torch.distributed.get_rank(), train_iterable_dataset.n_shards, test_iterable_dataset.n_shards)
train_kwargs = {'batch_size': args.batch_size}
test_kwargs = {'batch_size': args.test_batch_size}
if use_cuda:
cuda_kwargs = {'num_workers': 3,#ngpus_per_node,
'pin_memory': True,
'shuffle': False}
train_kwargs.update(cuda_kwargs)
test_kwargs.update(cuda_kwargs)
train_loader = torch.utils.data.DataLoader(train_iterable_dataset, **train_kwargs,
# sampler=torch.utils.data.distributed.DistributedSampler(
# train_iterable_dataset,
# num_replicas=ngpus_per_node,
# rank=0)
)
test_loader = torch.utils.data.DataLoader(test_iterable_dataset, **test_kwargs,
# sampler=torch.utils.data.distributed.DistributedSampler(
# test_iterable_dataset,
# num_replicas=ngpus_per_node,
# rank=0)
)
for epoch in range(1, args.epochs + 1):
start_time = time.time()
train_iterable_dataset.set_epoch(epoch)
test_iterable_dataset.set_epoch(epoch)
train(args, model, device, train_loader, optimizer, criterion, epoch, train_len)
test(args, model, device, criterion2, test_loader)
```
And here’s the part of output:
```
Train Epoch: 1 Batch_idx: 5000 Process: 0 [320000/4710975.0 (7%)]
Train Epoch: 1 Batch_idx: 5000 Process: 1 [320000/4710975.0 (7%)]
Train Epoch: 1 Batch_idx: 5000 Process: 2 [320000/4710975.0 (7%)]
Train Epoch: 1 Batch_idx: 5862 Process: 3 Data_length: 12 coststime: 0.04095172882080078
Train Epoch: 1 Batch_idx: 5862 Process: 0 Data_length: 3 coststime: 0.0751960277557373
Train Epoch: 1 Batch_idx: 5867 Process: 3 Data_length: 49 coststime: 0.0032558441162109375
Train Epoch: 1 Batch_idx: 5872 Process: 1 Data_length: 2 coststime: 0.022842884063720703
Train Epoch: 1 Batch_idx: 5876 Process: 3 Data_length: 63 coststime: 0.002694845199584961
Process 3 length: 5877 time: 2023-11-17 17:03:26.582317
Train epoch 1 costTime: 241.72063446044922s . Process 3 Start to test.
3 0 tensor(45508.8516, device='cuda:3')
3 100 tensor(45309.0469, device='cuda:3')
3 200 tensor(45675.3047, device='cuda:3')
3 300 tensor(45263.0273, device='cuda:3')
Process 3 Reduce metrics.
Train Epoch: 2 Batch_idx: 0 Process: 3 [0/4710975.0 (0%)]
Train Epoch: 1 Batch_idx: 5882 Process: 1 Data_length: 63 coststime: 0.05185818672180176
Train Epoch: 1 Batch_idx: 5887 Process: 1 Data_length: 12 coststime: 0.006895303726196289
Process 1 length: 5888 time: 2023-11-17 17:20:48.578204
Train epoch 1 costTime: 1285.7279663085938s . Process 1 Start to test.
1 0 tensor(45265.9141, device='cuda:1')
```
### Expected behavior
I'd like to know how to fix this problem.
### Environment info
```
torch==2.0
datasets==2.14.0
```
| null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6437/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6437/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6436/comments | https://api.github.com/repos/huggingface/datasets/issues/6436/events | https://github.com/huggingface/datasets/issues/6436 | 2,000,844,474 | I_kwDODunzps53Qna6 | 6,436 | TypeError: <lambda>() takes 0 positional arguments but 1 was given | {
"avatar_url": "https://avatars.githubusercontent.com/u/47111429?v=4",
"events_url": "https://api.github.com/users/ahmadmustafaanis/events{/privacy}",
"followers_url": "https://api.github.com/users/ahmadmustafaanis/followers",
"following_url": "https://api.github.com/users/ahmadmustafaanis/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmadmustafaanis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ahmadmustafaanis",
"id": 47111429,
"login": "ahmadmustafaanis",
"node_id": "MDQ6VXNlcjQ3MTExNDI5",
"organizations_url": "https://api.github.com/users/ahmadmustafaanis/orgs",
"received_events_url": "https://api.github.com/users/ahmadmustafaanis/received_events",
"repos_url": "https://api.github.com/users/ahmadmustafaanis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ahmadmustafaanis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmadmustafaanis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ahmadmustafaanis",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This looks like a problem with your environment rather than `datasets`.",
"I meet the same problem,\r\nand originally use\r\n```python\r\nlocale.getpreferredencoding = lambda : \"UTF-8\"\r\n```\r\nand change to\r\n```\r\nlocale.getpreferredencoding = lambda x: \"UTF-8\"\r\n```\r\nand it works."
] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-35-7b6becee3685>](https://localhost:8080/#) in <cell line: 1>()
----> 1 from datasets import Dataset
9 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
20 __version__ = "2.15.0"
21
---> 22 from .arrow_dataset import Dataset
23 from .arrow_reader import ReadInstruction
24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
61 import pyarrow.compute as pc
62 from huggingface_hub import CommitOperationAdd, CommitOperationDelete, DatasetCard, DatasetCardData, HfApi
---> 63 from multiprocess import Pool
64 from requests import HTTPError
65
[/usr/local/lib/python3.10/dist-packages/multiprocess/__init__.py](https://localhost:8080/#) in <module>
31
32 import sys
---> 33 from . import context
34
35 #
[/usr/local/lib/python3.10/dist-packages/multiprocess/context.py](https://localhost:8080/#) in <module>
4
5 from . import process
----> 6 from . import reduction
7
8 __all__ = ()
[/usr/local/lib/python3.10/dist-packages/multiprocess/reduction.py](https://localhost:8080/#) in <module>
14 import os
15 try:
---> 16 import dill as pickle
17 except ImportError:
18 import pickle
[/usr/local/lib/python3.10/dist-packages/dill/__init__.py](https://localhost:8080/#) in <module>
24
25
---> 26 from ._dill import (
27 dump, dumps, load, loads, copy,
28 Pickler, Unpickler, register, pickle, pickles, check,
[/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in <module>
166 try:
167 from _pyio import open as _open
--> 168 PyTextWrapperType = get_file_type('r', buffering=-1, open=_open)
169 PyBufferedRandomType = get_file_type('r+b', buffering=-1, open=_open)
170 PyBufferedReaderType = get_file_type('rb', buffering=-1, open=_open)
[/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in get_file_type(*args, **kwargs)
154 def get_file_type(*args, **kwargs):
155 open = kwargs.pop("open", __builtin__.open)
--> 156 f = open(os.devnull, *args, **kwargs)
157 t = type(f)
158 f.close()
[/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in open(file, mode, buffering, encoding, errors, newline, closefd, opener)
280 return result
281 encoding = text_encoding(encoding)
--> 282 text = TextIOWrapper(buffer, encoding, errors, newline, line_buffering)
283 result = text
284 text.mode = mode
[/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through)
2043 encoding = "utf-8"
2044 else:
-> 2045 encoding = locale.getpreferredencoding(False)
2046
2047 if not isinstance(encoding, str):
TypeError: <lambda>() takes 0 positional arguments but 1 was given
```
or
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-36-652e886d387f>](https://localhost:8080/#) in <cell line: 1>()
----> 1 import datasets
9 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
20 __version__ = "2.15.0"
21
---> 22 from .arrow_dataset import Dataset
23 from .arrow_reader import ReadInstruction
24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
61 import pyarrow.compute as pc
62 from huggingface_hub import CommitOperationAdd, CommitOperationDelete, DatasetCard, DatasetCardData, HfApi
---> 63 from multiprocess import Pool
64 from requests import HTTPError
65
[/usr/local/lib/python3.10/dist-packages/multiprocess/__init__.py](https://localhost:8080/#) in <module>
31
32 import sys
---> 33 from . import context
34
35 #
[/usr/local/lib/python3.10/dist-packages/multiprocess/context.py](https://localhost:8080/#) in <module>
4
5 from . import process
----> 6 from . import reduction
7
8 __all__ = ()
[/usr/local/lib/python3.10/dist-packages/multiprocess/reduction.py](https://localhost:8080/#) in <module>
14 import os
15 try:
---> 16 import dill as pickle
17 except ImportError:
18 import pickle
[/usr/local/lib/python3.10/dist-packages/dill/__init__.py](https://localhost:8080/#) in <module>
24
25
---> 26 from ._dill import (
27 dump, dumps, load, loads, copy,
28 Pickler, Unpickler, register, pickle, pickles, check,
[/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in <module>
166 try:
167 from _pyio import open as _open
--> 168 PyTextWrapperType = get_file_type('r', buffering=-1, open=_open)
169 PyBufferedRandomType = get_file_type('r+b', buffering=-1, open=_open)
170 PyBufferedReaderType = get_file_type('rb', buffering=-1, open=_open)
[/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in get_file_type(*args, **kwargs)
154 def get_file_type(*args, **kwargs):
155 open = kwargs.pop("open", __builtin__.open)
--> 156 f = open(os.devnull, *args, **kwargs)
157 t = type(f)
158 f.close()
[/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in open(file, mode, buffering, encoding, errors, newline, closefd, opener)
280 return result
281 encoding = text_encoding(encoding)
--> 282 text = TextIOWrapper(buffer, encoding, errors, newline, line_buffering)
283 result = text
284 text.mode = mode
[/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through)
2043 encoding = "utf-8"
2044 else:
-> 2045 encoding = locale.getpreferredencoding(False)
2046
2047 if not isinstance(encoding, str):
TypeError: <lambda>() takes 0 positional arguments but 1 was given
```
### Steps to reproduce the bug
`import datasets` on colab
### Expected behavior
work fine
### Environment info
colab
`!pip install datasets` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6436/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6436/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6435/comments | https://api.github.com/repos/huggingface/datasets/issues/6435/events | https://github.com/huggingface/datasets/issues/6435 | 2,000,690,513 | I_kwDODunzps53QB1R | 6,435 | Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"[This doc section](https://huggingface.co/docs/datasets/main/en/process#multiprocessing) explains how to modify the script to avoid this error.",
"@mariosasko thank you very much, i'll check it",
"@mariosasko no it does not\r\n\r\n`Dataset.filter() got an unexpected keyword argument 'with_rank'`"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
1. I ran dataset mapping with `num_proc=6` in it and got this error:
`RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method`
I can't actually find a way to run multi-GPU dataset mapping. Can you help?
### Steps to reproduce the bug
1. Rund SDXL training with `num_proc=6`: https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py
### Expected behavior
Should work well
### Environment info
6x A100 SXM, Linux | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6435/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6435/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6432/comments | https://api.github.com/repos/huggingface/datasets/issues/6432/events | https://github.com/huggingface/datasets/issues/6432 | 1,999,258,140 | I_kwDODunzps53KkIc | 6,432 | load_dataset does not load all of the data in my input file | {
"avatar_url": "https://avatars.githubusercontent.com/u/121301001?v=4",
"events_url": "https://api.github.com/users/demongolem-biz2/events{/privacy}",
"followers_url": "https://api.github.com/users/demongolem-biz2/followers",
"following_url": "https://api.github.com/users/demongolem-biz2/following{/other_user}",
"gists_url": "https://api.github.com/users/demongolem-biz2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/demongolem-biz2",
"id": 121301001,
"login": "demongolem-biz2",
"node_id": "U_kgDOBzroCQ",
"organizations_url": "https://api.github.com/users/demongolem-biz2/orgs",
"received_events_url": "https://api.github.com/users/demongolem-biz2/received_events",
"repos_url": "https://api.github.com/users/demongolem-biz2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/demongolem-biz2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/demongolem-biz2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/demongolem-biz2",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"You should use `datasets.load_dataset` instead of `nlp.load_dataset`, as the `nlp` package is outdated.\r\n\r\nIf switching to `datasets.load_dataset` doesn't fix the issue, sharing the JSON file (feel free to replace the data with dummy data) would be nice so that we can reproduce it ourselves."
] | 1970-01-01T00:00:00.000001 | 1,700 | null | NONE | null | ### Describe the bug
I have 127 elements in my input dataset. When I do a len on the dataset after loaded, it is only 124 elements.
### Steps to reproduce the bug
train_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.VALIDATION)
logger.info(len(train_dataset))
logger.info(len(valid_dataset))
Both train and valid input are 127 items. However, they both only load 124 items. The input format is in json. At the end of the day, I am trying to create .pt files.
### Expected behavior
I see all 127 elements in my dataset when performing len
### Environment info
Python 3.10. CentOS operating system. nlp==0.40, datasets==2.14.5, transformers==4.26.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6432/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6432/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6422/comments | https://api.github.com/repos/huggingface/datasets/issues/6422/events | https://github.com/huggingface/datasets/issues/6422 | 1,994,579,267 | I_kwDODunzps524t1D | 6,422 | Allow to choose the `writer_batch_size` when using `save_to_disk` | {
"avatar_url": "https://avatars.githubusercontent.com/u/38216711?v=4",
"events_url": "https://api.github.com/users/NathanGodey/events{/privacy}",
"followers_url": "https://api.github.com/users/NathanGodey/followers",
"following_url": "https://api.github.com/users/NathanGodey/following{/other_user}",
"gists_url": "https://api.github.com/users/NathanGodey/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NathanGodey",
"id": 38216711,
"login": "NathanGodey",
"node_id": "MDQ6VXNlcjM4MjE2NzEx",
"organizations_url": "https://api.github.com/users/NathanGodey/orgs",
"received_events_url": "https://api.github.com/users/NathanGodey/received_events",
"repos_url": "https://api.github.com/users/NathanGodey/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NathanGodey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NathanGodey/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NathanGodey",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"We have a config variable that controls the batch size in `save_to_disk`:\r\n```python\r\nimport datasets\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE = <smaller_batch_size>\r\n...\r\nds.save_to_disk(...)\r\n```",
"Thank you for your answer!\r\n\r\nFrom what I am reading in `https://github.com/huggingface/datasets/blob/2.14.5/src/datasets/arrow_dataset.py`, every function involved (`select`, `shard`, ...) has a default hardcoded batch size of 1000, as such:\r\n```python\r\ndef select(\r\n self,\r\n indices: Iterable,\r\n keep_in_memory: bool = False,\r\n indices_cache_file_name: Optional[str] = None,\r\n writer_batch_size: Optional[int] = 1000,\r\n new_fingerprint: Optional[str] = None,\r\n ) -> \"Dataset\":\r\n...\r\n```\r\nThen, `ArrowWriter` is instantiated with the specified `writer_batch_size`. In `ArrowWriter`, `writer_batch_size` is set to `datasets.config.DEFAULT_MAX_BATCH_SIZE` if it is `None`(https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_writer.py#L345C14-L345C31). However, in our case, it is already set to 1000 by \"parent\" methods, so it won't happen.\r\n\r\nNevertheless, due to this: \r\n```python\r\ndef _save_to_disk_single(job_id: int, shard: \"Dataset\", fpath: str, storage_options: Optional[dict]):\r\n batch_size = config.DEFAULT_MAX_BATCH_SIZE\r\n...\r\n```\r\nit seems to work. I will use it as such, but it should maybe be added to documentation? And maybe improved in next versions?"
] | 1970-01-01T00:00:00.000001 | 1,700 | null | NONE | null | ### Feature request
Add an argument in `save_to_disk` regarding batch size, which would be passed to `shard` and other methods.
### Motivation
The `Dataset.save_to_disk` method currently calls `shard` without passing a `writer_batch_size` argument, thus implicitly using the default value (1000). This can result in RAM saturation when using a lot of processes on long text sequences or other modalities, or for specific IO configs.
### Your contribution
I would be glad to submit a PR, as long as it does not imply extensive tests refactoring. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6422/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6422/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6417/comments | https://api.github.com/repos/huggingface/datasets/issues/6417/events | https://github.com/huggingface/datasets/issues/6417 | 1,993,149,416 | I_kwDODunzps52zQvo | 6,417 | Bug: LayoutLMv3 finetuning on FUNSD Notebook; Arrow Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/57496007?v=4",
"events_url": "https://api.github.com/users/Davo00/events{/privacy}",
"followers_url": "https://api.github.com/users/Davo00/followers",
"following_url": "https://api.github.com/users/Davo00/following{/other_user}",
"gists_url": "https://api.github.com/users/Davo00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Davo00",
"id": 57496007,
"login": "Davo00",
"node_id": "MDQ6VXNlcjU3NDk2MDA3",
"organizations_url": "https://api.github.com/users/Davo00/orgs",
"received_events_url": "https://api.github.com/users/Davo00/received_events",
"repos_url": "https://api.github.com/users/Davo00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Davo00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Davo00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Davo00",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Very strange: `datasets-cli env`\r\n> \r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.9.0\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.13\r\n> - PyArrow version: 8.0.0\r\n> - Pandas version: 1.3.5\r\n\r\nAfter updating datasets and pyarrow on base environment, although I am using a different one called layoutLM\r\n\r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.14.6\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.18\r\n> - Huggingface_hub version: 0.17.3\r\n> - PyArrow version: 14.0.1\r\n> - Pandas version: 2.1.3",
"Hi! The latest (patch) release (published a few hours ago) includes a fix for this [PyArrow security issue](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). To install it, run `pip install -U datasets`.",
"> Hi! The latest (patch) release (published a few hours ago) includes a fix for this [PyArrow security issue](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). To install it, run `pip install -U datasets`.\r\n\r\nThanks for the info and the latest release, it seems this has also solved my issue. First run after the update worked and I am training right now :D\r\nWill close the Issu"
] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Arrow issues when running the example Notebook laptop locally on Mac with M1. Works on Google Collab.
**Notebook**: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb
**Error**: `ValueError: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.`
**Caused by**:
```
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
```
### Steps to reproduce the bug
Run the notebook provided, locally. If possible also on M1.
### Expected behavior
The cell where features are mapped to Array2D and Array3D should work without any issues.
### Environment info
Tried with Python 3.9 and 3.10 conda envs. Running Mac M1.
`pip show datasets`
> Name: datasets
Version: 2.14.6
Summary: HuggingFace community-driven open-source library of datasets
`pip list`
> Package Version
> ------------------------- ------------
> accelerate 0.24.1
> aiohttp 3.8.6
> aiosignal 1.3.1
> anyio 3.5.0
> appnope 0.1.2
> argon2-cffi 21.3.0
> argon2-cffi-bindings 21.2.0
> asttokens 2.0.5
> async-timeout 4.0.3
> attrs 23.1.0
> backcall 0.2.0
> beautifulsoup4 4.12.2
> bleach 4.1.0
> certifi 2023.7.22
> cffi 1.15.1
> charset-normalizer 3.3.2
> comm 0.1.2
> datasets 2.14.6
> debugpy 1.6.7
> decorator 5.1.1
> defusedxml 0.7.1
> dill 0.3.7
> entrypoints 0.4
> exceptiongroup 1.0.4
> executing 0.8.3
> fastjsonschema 2.16.2
> filelock 3.13.1
> frozenlist 1.4.0
> fsspec 2023.10.0
> huggingface-hub 0.17.3
> idna 3.4
> importlib-metadata 6.0.0
> IProgress 0.4
> ipykernel 6.25.0
> ipython 8.15.0
> ipython-genutils 0.2.0
> jedi 0.18.1
> Jinja2 3.1.2
> joblib 1.3.2
> jsonschema 4.19.2
> jsonschema-specifications 2023.7.1
> jupyter_client 7.4.9
> jupyter_core 5.5.0
> jupyter-server 1.23.4
> jupyterlab-pygments 0.1.2
> MarkupSafe 2.1.1
> matplotlib-inline 0.1.6
> mistune 2.0.4
> mpmath 1.3.0
> multidict 6.0.4
> multiprocess 0.70.15
> nbclassic 1.0.0
> nbclient 0.8.0
> nbconvert 7.10.0
> nbformat 5.9.2
> nest-asyncio 1.5.6
> networkx 3.2.1
> notebook 6.5.4
> notebook_shim 0.2.3
> numpy 1.26.1
> packaging 23.1
> pandas 2.1.3
> pandocfilters 1.5.0
> parso 0.8.3
> pexpect 4.8.0
> pickleshare 0.7.5
> Pillow 10.1.0
> pip 23.3
> platformdirs 3.10.0
> prometheus-client 0.14.1
> prompt-toolkit 3.0.36
> psutil 5.9.0
> ptyprocess 0.7.0
> pure-eval 0.2.2
> pyarrow 14.0.1
> pycparser 2.21
> Pygments 2.15.1
> python-dateutil 2.8.2
> pytz 2023.3.post1
> PyYAML 6.0.1
> pyzmq 23.2.0
> referencing 0.30.2
> regex 2023.10.3
> requests 2.31.0
> rpds-py 0.10.6
> safetensors 0.4.0
> scikit-learn 1.3.2
> scipy 1.11.3
> Send2Trash 1.8.2
> seqeval 1.2.2
> setuptools 68.0.0
> six 1.16.0
> sniffio 1.2.0
> soupsieve 2.5
> stack-data 0.2.0
> sympy 1.12
> terminado 0.17.1
> threadpoolctl 3.2.0
> tinycss2 1.2.1
> tokenizers 0.14.1
> torch 2.1.0
> tornado 6.3.3
> tqdm 4.66.1
> traitlets 5.7.1
> transformers 4.36.0.dev0
> typing_extensions 4.7.1
> tzdata 2023.3
> urllib3 2.0.7
> wcwidth 0.2.5
> webencodings 0.5.1
> websocket-client 0.58.0
> wheel 0.41.2
> xxhash 3.4.1
> yarl 1.9.2
> zipp 3.11.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/57496007?v=4",
"events_url": "https://api.github.com/users/Davo00/events{/privacy}",
"followers_url": "https://api.github.com/users/Davo00/followers",
"following_url": "https://api.github.com/users/Davo00/following{/other_user}",
"gists_url": "https://api.github.com/users/Davo00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Davo00",
"id": 57496007,
"login": "Davo00",
"node_id": "MDQ6VXNlcjU3NDk2MDA3",
"organizations_url": "https://api.github.com/users/Davo00/orgs",
"received_events_url": "https://api.github.com/users/Davo00/received_events",
"repos_url": "https://api.github.com/users/Davo00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Davo00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Davo00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Davo00",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6417/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6417/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6412/comments | https://api.github.com/repos/huggingface/datasets/issues/6412/events | https://github.com/huggingface/datasets/issues/6412 | 1,992,401,594 | I_kwDODunzps52waK6 | 6,412 | User token is printed out! | {
"avatar_url": "https://avatars.githubusercontent.com/u/25702692?v=4",
"events_url": "https://api.github.com/users/mohsen-goodarzi/events{/privacy}",
"followers_url": "https://api.github.com/users/mohsen-goodarzi/followers",
"following_url": "https://api.github.com/users/mohsen-goodarzi/following{/other_user}",
"gists_url": "https://api.github.com/users/mohsen-goodarzi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mohsen-goodarzi",
"id": 25702692,
"login": "mohsen-goodarzi",
"node_id": "MDQ6VXNlcjI1NzAyNjky",
"organizations_url": "https://api.github.com/users/mohsen-goodarzi/orgs",
"received_events_url": "https://api.github.com/users/mohsen-goodarzi/received_events",
"repos_url": "https://api.github.com/users/mohsen-goodarzi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mohsen-goodarzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohsen-goodarzi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mohsen-goodarzi",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Indeed, this is not a good practice. I've opened a PR that removes the token value from the (deprecation) warning."
] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | NONE | null | This line prints user token on command line! Is it safe?
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/load.py#L2091 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6412/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6412/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6410/comments | https://api.github.com/repos/huggingface/datasets/issues/6410/events | https://github.com/huggingface/datasets/issues/6410 | 1,992,100,209 | I_kwDODunzps52vQlx | 6,410 | Datasets does not load HuggingFace Repository properly | {
"avatar_url": "https://avatars.githubusercontent.com/u/40600201?v=4",
"events_url": "https://api.github.com/users/MikeDoes/events{/privacy}",
"followers_url": "https://api.github.com/users/MikeDoes/followers",
"following_url": "https://api.github.com/users/MikeDoes/following{/other_user}",
"gists_url": "https://api.github.com/users/MikeDoes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MikeDoes",
"id": 40600201,
"login": "MikeDoes",
"node_id": "MDQ6VXNlcjQwNjAwMjAx",
"organizations_url": "https://api.github.com/users/MikeDoes/orgs",
"received_events_url": "https://api.github.com/users/MikeDoes/received_events",
"repos_url": "https://api.github.com/users/MikeDoes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MikeDoes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MikeDoes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MikeDoes",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi! You can avoid the error by requesting only the `jsonl` files. `dataset = load_dataset(\"ai4privacy/pii-masking-200k\", data_files=[\"*.jsonl\"])`.\r\n\r\nOur data file inference does not filter out (incompatible) `json` files because `json` and `jsonl` use the same builder. Still, I think the inference should differentiate these extensions because it's safe to assume that loading them together will lead to an error. WDYT @lhoestq? ",
"Raising an error if there is a mix of json and jsonl in the builder makes sense yea"
] | 1970-01-01T00:00:00.000001 | 1,700 | null | NONE | null | ### Describe the bug
Dear Datasets team,
We just have published a dataset on Huggingface:
https://huggingface.co/ai4privacy
However, when trying to read it using the Dataset library we get an error. As I understand jsonl files are compatible, could you please clarify how we can solve the issue? Please let me know and we would be more than happy to adapt the structure of the repository or meta data so it works easier:
```python
from datasets import load_dataset
dataset = load_dataset("ai4privacy/pii-masking-200k")
```
```
Downloading readme: 100%
11.8k/11.8k [00:00<00:00, 512kB/s]
Downloading data files: 100%
1/1 [00:11<00:00, 11.16s/it]
Downloading data: 100%
64.3M/64.3M [00:02<00:00, 32.9MB/s]
Downloading data: 100%
113M/113M [00:03<00:00, 35.0MB/s]
Downloading data: 100%
97.7M/97.7M [00:02<00:00, 46.1MB/s]
Downloading data: 100%
90.8M/90.8M [00:02<00:00, 44.9MB/s]
Downloading data: 100%
7.63k/7.63k [00:00<00:00, 41.0kB/s]
Downloading data: 100%
1.03k/1.03k [00:00<00:00, 9.44kB/s]
Extracting data files: 100%
1/1 [00:00<00:00, 29.26it/s]
Generating train split:
209261/0 [00:05<00:00, 41201.25 examples/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1939 )
-> 1940 writer.write_table(table)
1941 num_examples_progress_update += len(table)
8 frames
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_table(self, pa_table, writer_batch_size)
571 pa_table = pa_table.combine_chunks()
--> 572 pa_table = table_cast(pa_table, self._schema)
573 if self.embed_local_files:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in table_cast(table, schema)
2327 if table.schema != schema:
-> 2328 return cast_table_to_schema(table, schema)
2329 elif table.schema.metadata != schema.metadata:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_table_to_schema(table, schema)
2285 if sorted(table.column_names) != sorted(features):
-> 2286 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2287 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
ValueError: Couldn't cast
JOBTYPE: int64
PHONEIMEI: int64
ACCOUNTNAME: int64
VEHICLEVIN: int64
GENDER: int64
CURRENCYCODE: int64
CREDITCARDISSUER: int64
JOBTITLE: int64
SEX: int64
CURRENCYSYMBOL: int64
IP: int64
EYECOLOR: int64
MASKEDNUMBER: int64
SECONDARYADDRESS: int64
JOBAREA: int64
ACCOUNTNUMBER: int64
language: string
BITCOINADDRESS: int64
MAC: int64
SSN: int64
EMAIL: int64
ETHEREUMADDRESS: int64
DOB: int64
VEHICLEVRM: int64
IPV6: int64
AMOUNT: int64
URL: int64
PHONENUMBER: int64
PIN: int64
TIME: int64
CREDITCARDNUMBER: int64
FIRSTNAME: int64
IBAN: int64
BIC: int64
COUNTY: int64
STATE: int64
LASTNAME: int64
ZIPCODE: int64
HEIGHT: int64
ORDINALDIRECTION: int64
MIDDLENAME: int64
STREET: int64
USERNAME: int64
CURRENCY: int64
PREFIX: int64
USERAGENT: int64
CURRENCYNAME: int64
LITECOINADDRESS: int64
CREDITCARDCVV: int64
AGE: int64
CITY: int64
PASSWORD: int64
BUILDINGNUMBER: int64
IPV4: int64
NEARBYGPSCOORDINATE: int64
DATE: int64
COMPANYNAME: int64
to
{'masked_text': Value(dtype='string', id=None), 'unmasked_text': Value(dtype='string', id=None), 'privacy_mask': Value(dtype='string', id=None), 'span_labels': Value(dtype='string', id=None), 'bio_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'tokenised_text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[<ipython-input-2-f1c6811e9c83>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("ai4privacy/pii-masking-200k")
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2151
2152 # Download and prepare data
-> 2153 builder_instance.download_and_prepare(
2154 download_config=download_config,
2155 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1047 try:
1048 # Prepare split will record examples associated to the split
-> 1049 self._prepare_split(split_generator, **prepare_split_kwargs)
1050 except OSError as e:
1051 raise OSError(
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1811 job_id = 0
1812 with pbar:
-> 1813 for job_id, done, content in self._prepare_split_single(
1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1815 ):
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1957 e = e.__context__
-> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1959
1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
Thank you and have a great day ahead
### Steps to reproduce the bug
Open Google Colab Notebook:
Run command:
!pip3 install datasets
Run code:
from datasets import load_dataset
dataset = load_dataset("ai4privacy/pii-masking-200k")
### Expected behavior
Download the dataset successfully from HuggingFace to the notebook so that we can start working with it
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6410/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6410/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6409/comments | https://api.github.com/repos/huggingface/datasets/issues/6409/events | https://github.com/huggingface/datasets/issues/6409 | 1,991,960,865 | I_kwDODunzps52uukh | 6,409 | using DownloadManager to download from local filesystem and disable_progress_bar, there will be an exception | {
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neiblegy",
"id": 16574677,
"login": "neiblegy",
"node_id": "MDQ6VXNlcjE2NTc0Njc3",
"organizations_url": "https://api.github.com/users/neiblegy/orgs",
"received_events_url": "https://api.github.com/users/neiblegy/received_events",
"repos_url": "https://api.github.com/users/neiblegy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neiblegy",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
i'm using datasets.download.download_manager.DownloadManager to download files like "file:///a/b/c.txt", and i disable_progress_bar() to disable bar. there will be an exception as follows:
`AttributeError: 'function' object has no attribute 'close'
Exception ignored in: <function TqdmCallback.__del__ at 0x7fa8683d84c0>
Traceback (most recent call last):
File "/home/protoss.gao/.local/lib/python3.9/site-packages/fsspec/callbacks.py", line 233, in __del__
self.tqdm.close()`
i check your source code in datasets/utils/file_utils.py:348 you define TqdmCallback derive from fsspec.callbacks.TqdmCallback
but in the newest fsspec code [https://github.com/fsspec/filesystem_spec/blob/master/fsspec/callbacks.py](url) , line 146, in this case, _DEFAULT_CALLBACK will take effect, but in line 234, it calls "close()" function which _DEFAULT_CALLBACK don't have such thing.
so i think the class "TqdmCallback" in datasets/utils/file_utils.py may override "__del__" function or report this bug to fsspec.
### Steps to reproduce the bug
as i said
### Expected behavior
no exception
### Environment info
datasets: 2.14.4
python: 3.9
platform: x86_64 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6409/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6409/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6408/comments | https://api.github.com/repos/huggingface/datasets/issues/6408/events | https://github.com/huggingface/datasets/issues/6408 | 1,991,902,972 | I_kwDODunzps52ugb8 | 6,408 | `IterableDataset` lost but not keep columns when map function adding columns with names in `remove_columns` | {
"avatar_url": "https://avatars.githubusercontent.com/u/24571857?v=4",
"events_url": "https://api.github.com/users/shmily326/events{/privacy}",
"followers_url": "https://api.github.com/users/shmily326/followers",
"following_url": "https://api.github.com/users/shmily326/following{/other_user}",
"gists_url": "https://api.github.com/users/shmily326/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shmily326",
"id": 24571857,
"login": "shmily326",
"node_id": "MDQ6VXNlcjI0NTcxODU3",
"organizations_url": "https://api.github.com/users/shmily326/orgs",
"received_events_url": "https://api.github.com/users/shmily326/received_events",
"repos_url": "https://api.github.com/users/shmily326/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shmily326/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shmily326/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shmily326",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,700 | null | NONE | null | ### Describe the bug
IterableDataset lost but not keep columns when map function adding columns with names in remove_columns,
Dataset not.
May be related to the code below:
https://github.com/huggingface/datasets/blob/06c3ffb8d068b6307b247164b10f7c7311cefed4/src/datasets/iterable_dataset.py#L750-L756
### Steps to reproduce the bug
```python
dataset: IterableDataset = load_dataset("Anthropic/hh-rlhf", streaming=True, split="train")
column_names = list(next(iter(dataset)).keys()) # ['chosen', 'rejected']
# map_fn will return dict {"chosen": xxx, "rejected": xxx, "prompt": xxx, "history": xxxx}
dataset = dataset.map(map_fn, batched=True, remove_columns=column_names)
next(iter(dataset))
# output
# {'prompt': 'xxx, 'history': xxx}
```
```python
# when load_dataset with streaming=False, the column_names are kept:
dataset: Dataset = load_dataset("Anthropic/hh-rlhf", streaming=False, split="train")
column_names = list(next(iter(dataset)).keys()) # ['chosen', 'rejected']
# map_fn will return dict {"chosen": xxx, "rejected": xxx, "prompt": xxx, "history": xxxx}
dataset = dataset.map(map_fn, batched=True, remove_columns=column_names)
next(iter(dataset))
# output
# {'prompt': 'xxx, 'history': xxx, "chosen": xxx, "rejected": xxx}
```
### Expected behavior
IterableDataset keep columns when map function adding columns with names in remove_columns
### Environment info
datasets==2.14.6 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6408/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6408/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6407/comments | https://api.github.com/repos/huggingface/datasets/issues/6407/events | https://github.com/huggingface/datasets/issues/6407 | 1,991,514,079 | I_kwDODunzps52tBff | 6,407 | Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object" | {
"avatar_url": "https://avatars.githubusercontent.com/u/1741779?v=4",
"events_url": "https://api.github.com/users/eawer/events{/privacy}",
"followers_url": "https://api.github.com/users/eawer/followers",
"following_url": "https://api.github.com/users/eawer/following{/other_user}",
"gists_url": "https://api.github.com/users/eawer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eawer",
"id": 1741779,
"login": "eawer",
"node_id": "MDQ6VXNlcjE3NDE3Nzk=",
"organizations_url": "https://api.github.com/users/eawer/orgs",
"received_events_url": "https://api.github.com/users/eawer/received_events",
"repos_url": "https://api.github.com/users/eawer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eawer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eawer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eawer",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I have encountered the same problem with `datasets-2.20.0`. \r\n\r\nI found the following workaround for this issue (including the fix from #6598):\r\n1. specify the AWS profile name in the `storage_options` instead of passing an existing session object\r\n2. use a custom `DownloadConfig` object to fix #6598\r\n3. pass the `storage_options` to the `DownloadConfig`\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\n\r\n# Fix for DownloadConfig from https://github.com/huggingface/datasets/issues/6598#issuecomment-1986699619\r\nclass ReviseDownloadConfig(DownloadConfig):\r\n def __post_init__(self, use_auth_token):\r\n if use_auth_token != \"deprecated\":\r\n warnings.warn(\r\n \"'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\\n\"\r\n f\"You can remove this warning by passing 'token={use_auth_token}' instead.\",\r\n FutureWarning,\r\n )\r\n self.token = use_auth_token\r\n\r\nstorage_options={\"profile\": \"my-aws-profile-name\"}\r\n\r\nds = load_dataset(\r\n \"parquet\", \r\n data_files={\"train\": DATA_PATH}, \r\n storage_options=storage_options,\r\n download_config=ReviseDownloadConfig(storage_options=storage_options)\r\n)\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,722 | null | NONE | null | ### Describe the bug
I'm trying to read the parquet file from the private s3 bucket using the `load_dataset` function, but I receive `TypeError: cannot pickle '_contextvars.Context' object` error
I'm working on a machine with `~/.aws/credentials` file. I can't give credentials and the path to a file in a private bucket for obvious reasons, but I'll try to give all possible outputs.
### Steps to reproduce the bug
```python
import s3fs
from datasets import load_dataset
from aiobotocore.session import get_session
DATA_PATH = "s3://bucket_name/path/validation.parquet"
fs = s3fs.S3FileSystem(session=get_session())
```
`fs.stat` returns the data, so we can say that fs is working and we have all permissions
```python
fs.stat(DATA_PATH)
# Returns:
# {'ETag': '"123123a-19"',
# 'LastModified': datetime.datetime(2023, 11, 1, 10, 16, 57, tzinfo=tzutc()),
# 'size': 312237170,
# 'name': 'bucket_name/path/validation.parquet',
# 'type': 'file',
# 'StorageClass': 'STANDARD',
# 'VersionId': 'Abc.HtmsC9h.as',
# 'ContentType': 'binary/octet-stream'}
```
```python
fs.storage_options
# Returns:
# {'session': <aiobotocore.session.AioSession at 0x7f9193fa53c0>}
```
```python
ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options)
```
<details>
<summary>Returns such error (expandable)</summary>
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[88], line 1
----> 1 ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options)
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/load.py:2153, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2150 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
2152 # Download and prepare data
-> 2153 builder_instance.download_and_prepare(
2154 download_config=download_config,
2155 download_mode=download_mode,
2156 verification_mode=verification_mode,
2157 try_from_hf_gcs=try_from_hf_gcs,
2158 num_proc=num_proc,
2159 storage_options=storage_options,
2160 )
2162 # Build dataset for splits
2163 keep_in_memory = (
2164 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2165 )
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
957 **prepare_split_kwargs,
958 **download_and_prepare_kwargs,
959 )
960 # Sync info
961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1025 split_dict = SplitDict(dataset_name=self.dataset_name)
1026 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
-> 1027 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
1029 # Checksums verification
1030 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager)
32 if not self.config.data_files:
33 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)
35 if isinstance(data_files, (str, list, tuple)):
36 files = data_files
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:565, in DownloadManager.download_and_extract(self, url_or_urls)
549 def download_and_extract(self, url_or_urls):
550 """Download and extract given `url_or_urls`.
551
552 Is roughly equivalent to:
(...)
563 extracted_path(s): `str`, extracted paths of given URL(s).
564 """
--> 565 return self.extract(self.download(url_or_urls))
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:420, in DownloadManager.download(self, url_or_urls)
401 def download(self, url_or_urls):
402 """Download given URL(s).
403
404 By default, only one process is used for download. Pass customized `download_config.num_proc` to change this behavior.
(...)
418 ```
419 """
--> 420 download_config = self.download_config.copy()
421 download_config.extract_compressed_file = False
422 if download_config.download_desc is None:
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in DownloadConfig.copy(self)
93 def copy(self) -> "DownloadConfig":
---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in <dictcomp>(.0)
93 def copy(self) -> "DownloadConfig":
---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (2 times), deepcopy at line 146 (2 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: deepcopy at line 146 (1 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:206, in _deepcopy_list(x, memo, deepcopy)
204 append = y.append
205 for a in x:
--> 206 append(deepcopy(a, memo))
207 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:238, in _deepcopy_method(x, memo)
237 def _deepcopy_method(x, memo): # Copy instance methods
--> 238 return type(x)(x.__func__, deepcopy(x.__self__, memo))
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (3 times), deepcopy at line 146 (3 times), deepcopy at line 172 (3 times), _reconstruct at line 271 (2 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (1 times), deepcopy at line 146 (1 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:265, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
263 if deep and args:
264 args = (deepcopy(arg, memo) for arg in args)
--> 265 y = func(*args)
266 if deep:
267 memo[id(x)] = y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:264, in <genexpr>(.0)
262 deep = memo is not None
263 if deep and args:
--> 264 args = (deepcopy(arg, memo) for arg in args)
265 y = func(*args)
266 if deep:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:161, in deepcopy(x, memo, _nil)
159 reductor = getattr(x, "__reduce_ex__", None)
160 if reductor is not None:
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "__reduce__", None)
TypeError: cannot pickle '_contextvars.Context' object
```
</details>
### Expected behavior
If I choose to load the file from the public bucket with `anon=True` passed - everything works, so I expected loading from the private bucket to work as well
### Environment info
- `datasets` version: 2.14.6
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.19.1
- PyArrow version: 14.0.1
- Pandas version: 1.5.3
- s3fs version: 2023.10.0
- fsspec version: 2023.10.0
- aiobotocore version: 2.7.0 | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6407/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6407/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6406/comments | https://api.github.com/repos/huggingface/datasets/issues/6406/events | https://github.com/huggingface/datasets/issues/6406 | 1,990,469,045 | I_kwDODunzps52pCW1 | 6,406 | CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,699 | 1970-01-01T00:00:00.000001 | MEMBER | null | Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390
```
ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6406/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6405/comments | https://api.github.com/repos/huggingface/datasets/issues/6405/events | https://github.com/huggingface/datasets/issues/6405 | 1,990,358,743 | I_kwDODunzps52onbX | 6,405 | ConfigNamesError on a simple CSV file | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"The viewer is working now. \r\n\r\nBased on the repo commit history, the bug was due to the incorrect format of the `features` field in the README YAML (`Value` requires `dtype`, e.g., `Value(\"string\")`, but it was not specified)",
"Feel free to close the issue",
"Oh, OK! Thanks. So, there was no reason to open an issue"
] | 1970-01-01T00:00:00.000001 | 1,699 | 1970-01-01T00:00:00.000001 | COLLABORATOR | null | See https://huggingface.co/datasets/Nguyendo1999/mmath/discussions/1
```
Error code: ConfigNamesError
Exception: TypeError
Message: __init__() missing 1 required positional argument: 'dtype'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1512, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1489, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1039, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 468, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 399, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1838, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1690, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1353, in generate_from_dict
return class_type(**{k: v for k, v in obj.items() if k in field_names})
TypeError: __init__() missing 1 required positional argument: 'dtype'
```
This is the CSV file: https://huggingface.co/datasets/Nguyendo1999/mmath/blob/dbcdd7c2c6fc447f852ec136a7532292802bb46f/math_train.csv | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6405/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6405/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6403/comments | https://api.github.com/repos/huggingface/datasets/issues/6403/events | https://github.com/huggingface/datasets/issues/6403 | 1,990,098,817 | I_kwDODunzps52nn-B | 6,403 | Cannot import datasets on google colab (python 3.10.12) | {
"avatar_url": "https://avatars.githubusercontent.com/u/15389235?v=4",
"events_url": "https://api.github.com/users/nabilaannisa/events{/privacy}",
"followers_url": "https://api.github.com/users/nabilaannisa/followers",
"following_url": "https://api.github.com/users/nabilaannisa/following{/other_user}",
"gists_url": "https://api.github.com/users/nabilaannisa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nabilaannisa",
"id": 15389235,
"login": "nabilaannisa",
"node_id": "MDQ6VXNlcjE1Mzg5MjM1",
"organizations_url": "https://api.github.com/users/nabilaannisa/orgs",
"received_events_url": "https://api.github.com/users/nabilaannisa/received_events",
"repos_url": "https://api.github.com/users/nabilaannisa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nabilaannisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nabilaannisa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nabilaannisa",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"You are most likely using an outdated version of `datasets` in the notebook, which can be verified with the `!datasets-cli env` command. You can run `!pip install -U datasets` to update the installation.",
"okay, it works! thank you so much! 😄 "
] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I'm trying A full colab demo notebook of zero-shot-distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation but i got this type of error when importing datasets on my google colab (python version is 3.10.12)

I found the same problem that have been solved in [#3326 ] but it seem still error on the google colab. I can't try on my local using jupyter notebook because of my laptop resource doesn't fulfill the requirements.
Please can anyone help me solve this problem. Thank you 😅
### Steps to reproduce the bug
Error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-8-b6e092f83978>](https://localhost:8080/#) in <cell line: 1>()
----> 1 from datasets import load_dataset
2
3 # Print all the available datasets
4 from huggingface_hub import list_datasets
5 print([dataset.id for dataset in list_datasets()])
6 frames
[/usr/lib/python3.10/functools.py](https://localhost:8080/#) in update_wrapper(wrapper, wrapped, assigned, updated)
59 # Issue #17482: set __wrapped__ last so we don't inadvertently copy it
60 # from the wrapped function when updating __dict__
---> 61 wrapper.__wrapped__ = wrapped
62 # Return the wrapper so this can be used as a decorator via partial()
63 return wrapper
AttributeError: readonly attribute
```
### Expected behavior
Run success on Google Colab (free)
### Environment info
Windows 11 x64, Google Colab free | {
"avatar_url": "https://avatars.githubusercontent.com/u/15389235?v=4",
"events_url": "https://api.github.com/users/nabilaannisa/events{/privacy}",
"followers_url": "https://api.github.com/users/nabilaannisa/followers",
"following_url": "https://api.github.com/users/nabilaannisa/following{/other_user}",
"gists_url": "https://api.github.com/users/nabilaannisa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nabilaannisa",
"id": 15389235,
"login": "nabilaannisa",
"node_id": "MDQ6VXNlcjE1Mzg5MjM1",
"organizations_url": "https://api.github.com/users/nabilaannisa/orgs",
"received_events_url": "https://api.github.com/users/nabilaannisa/received_events",
"repos_url": "https://api.github.com/users/nabilaannisa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nabilaannisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nabilaannisa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nabilaannisa",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6403/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6403/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6401/comments | https://api.github.com/repos/huggingface/datasets/issues/6401/events | https://github.com/huggingface/datasets/issues/6401 | 1,988,710,061 | I_kwDODunzps52iU6t | 6,401 | dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") not working | {
"avatar_url": "https://avatars.githubusercontent.com/u/47074021?v=4",
"events_url": "https://api.github.com/users/userbox020/events{/privacy}",
"followers_url": "https://api.github.com/users/userbox020/followers",
"following_url": "https://api.github.com/users/userbox020/following{/other_user}",
"gists_url": "https://api.github.com/users/userbox020/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/userbox020",
"id": 47074021,
"login": "userbox020",
"node_id": "MDQ6VXNlcjQ3MDc0MDIx",
"organizations_url": "https://api.github.com/users/userbox020/orgs",
"received_events_url": "https://api.github.com/users/userbox020/received_events",
"repos_url": "https://api.github.com/users/userbox020/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/userbox020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/userbox020/subscriptions",
"type": "User",
"url": "https://api.github.com/users/userbox020",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Seems like it's a problem with the dataset, since in the [README](https://huggingface.co/datasets/Hyperspace-Technologies/scp-wiki-text/blob/main/README.md) the validation is not specified. Try cloning the dataset, removing the README (or validation split), and loading it locally/ ",
"@VarunNSrivastava thanks brother, working beautiful now\r\n\r\n```\r\nC:\\_Work\\_datasets>py dataset.py\r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████| 3/3 [00:00<?, ?it/s]\r\nExtracting data files: 100%|████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 599.90it/s]\r\nGenerating train split: 314294 examples [00:00, 1293222.03 examples/s]\r\nGenerating validation split: 120 examples [00:00, 59053.91 examples/s]\r\nGenerating test split: 34922 examples [00:00, 1343275.84 examples/s]\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
```
(datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py
Downloading readme: 100%|███████████████████████████████████| 360/360 [00:00<00:00, 2.16MB/s]
Downloading data: 100%|█████████████████████████████████| 65.1M/65.1M [00:19<00:00, 3.38MB/s]
Downloading data: 100%|█████████████████████████████████| 6.35k/6.35k [00:00<00:00, 20.7kB/s]
Downloading data: 100%|█████████████████████████████████| 7.29M/7.29M [00:01<00:00, 3.99MB/s]
Downloading data files: 100%|██████████████████████████████████| 3/3 [00:21<00:00, 7.14s/it]
Extracting data files: 100%|█████████████████████████████████| 3/3 [00:00<00:00, 1624.23it/s]
Generating train split: 100%|█████████████| 314294/314294 [00:00<00:00, 668186.58 examples/s]
Generating validation split: 120 examples [00:00, 100422.28 examples/s]
Generating test split: 100%|████████████████| 34922/34922 [00:00<00:00, 754683.41 examples/s]
Traceback (most recent call last):
File "/media/10TB_HHD/_LLM_DATASETS/dataset.py", line 3, in <module>
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text")
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/load.py", line 2153, in load_dataset
builder_instance.download_and_prepare(
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 1067, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 93, in verify_splits
raise UnexpectedSplits(str(set(recorded_splits) - set(expected_splits)))
datasets.utils.info_utils.UnexpectedSplits: {'validation'}
```
### Steps to reproduce the bug
Name:
`dataset.py`
Code:
```
from datasets import load_dataset
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text")
```
### Expected behavior
Run without errors
### Environment info
```
name: datasets
channels:
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2023.08.22=h06a4308_0
- ld_impl_linux-64=2.38=h1181459_1
- libffi=3.4.4=h6a678d5_0
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- libuuid=1.41.5=h5eee18b_0
- ncurses=6.4=h6a678d5_0
- openssl=3.0.12=h7f8727e_0
- python=3.10.13=h955ad1f_0
- readline=8.2=h5eee18b_0
- setuptools=68.0.0=py310h06a4308_0
- sqlite=3.41.2=h5eee18b_0
- tk=8.6.12=h1ccaba5_0
- wheel=0.41.2=py310h06a4308_0
- xz=5.4.2=h5eee18b_0
- zlib=1.2.13=h5eee18b_0
- pip:
- aiohttp==3.8.6
- aiosignal==1.3.1
- async-timeout==4.0.3
- attrs==23.1.0
- certifi==2023.7.22
- charset-normalizer==3.3.2
- click==8.1.7
- datasets==2.14.6
- dill==0.3.7
- filelock==3.13.1
- frozenlist==1.4.0
- fsspec==2023.10.0
- huggingface-hub==0.19.0
- idna==3.4
- multidict==6.0.4
- multiprocess==0.70.15
- numpy==1.26.1
- openai==0.27.8
- packaging==23.2
- pandas==2.1.3
- pip==23.3.1
- platformdirs==4.0.0
- pyarrow==14.0.1
- python-dateutil==2.8.2
- pytz==2023.3.post1
- pyyaml==6.0.1
- requests==2.31.0
- six==1.16.0
- tomli==2.0.1
- tqdm==4.66.1
- typer==0.9.0
- typing-extensions==4.8.0
- tzdata==2023.3
- urllib3==2.0.7
- xxhash==3.4.1
- yarl==1.9.2
prefix: /home/mruserbox/miniconda3/envs/datasets
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47074021?v=4",
"events_url": "https://api.github.com/users/userbox020/events{/privacy}",
"followers_url": "https://api.github.com/users/userbox020/followers",
"following_url": "https://api.github.com/users/userbox020/following{/other_user}",
"gists_url": "https://api.github.com/users/userbox020/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/userbox020",
"id": 47074021,
"login": "userbox020",
"node_id": "MDQ6VXNlcjQ3MDc0MDIx",
"organizations_url": "https://api.github.com/users/userbox020/orgs",
"received_events_url": "https://api.github.com/users/userbox020/received_events",
"repos_url": "https://api.github.com/users/userbox020/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/userbox020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/userbox020/subscriptions",
"type": "User",
"url": "https://api.github.com/users/userbox020",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6401/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6401/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6400/comments | https://api.github.com/repos/huggingface/datasets/issues/6400/events | https://github.com/huggingface/datasets/issues/6400 | 1,988,571,317 | I_kwDODunzps52hzC1 | 6,400 | Safely load datasets by disabling execution of dataset loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/14367635?v=4",
"events_url": "https://api.github.com/users/irenedea/events{/privacy}",
"followers_url": "https://api.github.com/users/irenedea/followers",
"following_url": "https://api.github.com/users/irenedea/following{/other_user}",
"gists_url": "https://api.github.com/users/irenedea/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/irenedea",
"id": 14367635,
"login": "irenedea",
"node_id": "MDQ6VXNlcjE0MzY3NjM1",
"organizations_url": "https://api.github.com/users/irenedea/orgs",
"received_events_url": "https://api.github.com/users/irenedea/received_events",
"repos_url": "https://api.github.com/users/irenedea/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/irenedea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/irenedea/subscriptions",
"type": "User",
"url": "https://api.github.com/users/irenedea",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null | [
"great idea IMO\r\n\r\nthis could be a `trust_remote_code=True` flag like in transformers. We could also default to loading the Parquet conversion rather than executing code (for dataset repos that have both)",
"@julien-c that would be great!",
"We added the `trust_remote_code` argument to `load_dataset()` in `datasets` 2.16:\r\n- in the future users will have to pass trust_remote_code=True to use datasets with a script\r\n- for now we just show a warning when a dataset script is used\r\n- we fallback on the Hugging Face Parquet exports when possible (to keep compatibility with old datasets with scripts)\r\n\r\nSo feel free to use `trust_remote_code=False` in the meantime to disable loading from dataset loading scripts :)",
"Passing `trust_remote_code=True` explicitly is now mandatory to load a dataset with a script since https://github.com/huggingface/datasets/pull/6954"
] | 1970-01-01T00:00:00.000001 | 1,718 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
Is there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution.
Any suggested workarounds are welcome as well.
### Motivation
This is a security vulnerability that could lead to arbitrary code execution.
### Your contribution
n/a | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6400/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6400/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6399/comments | https://api.github.com/repos/huggingface/datasets/issues/6399/events | https://github.com/huggingface/datasets/issues/6399 | 1,988,368,503 | I_kwDODunzps52hBh3 | 6,399 | TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array | {
"avatar_url": "https://avatars.githubusercontent.com/u/76236359?v=4",
"events_url": "https://api.github.com/users/y-hwang/events{/privacy}",
"followers_url": "https://api.github.com/users/y-hwang/followers",
"following_url": "https://api.github.com/users/y-hwang/following{/other_user}",
"gists_url": "https://api.github.com/users/y-hwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/y-hwang",
"id": 76236359,
"login": "y-hwang",
"node_id": "MDQ6VXNlcjc2MjM2MzU5",
"organizations_url": "https://api.github.com/users/y-hwang/orgs",
"received_events_url": "https://api.github.com/users/y-hwang/received_events",
"repos_url": "https://api.github.com/users/y-hwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/y-hwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y-hwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/y-hwang",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Seconding encountering this issue."
] | 1970-01-01T00:00:00.000001 | 1,719 | null | NONE | null | ### Describe the bug
Hi, I am preprocessing a large custom dataset with numpy arrays. I am running into this TypeError during writing in a dataset.map() function. I've tried decreasing writer batch size, but this error persists. This error does not occur for smaller datasets.
Thank you!
### Steps to reproduce the bug
Traceback (most recent call last):
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1354, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3493, in _map_single
writer.write_batch(batch)
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_writer.py", line 555, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 243, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_writer.py", line 184, in __arrow_array__
out = numpy_to_pyarrow_listarray(data)
File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/features/features.py", line 1394, in numpy_to_pyarrow_listarray
values = pa.ListArray.from_arrays(offsets, values)
File "pyarrow/array.pxi", line 2004, in pyarrow.lib.ListArray.from_arrays
TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array
### Expected behavior
Type should not be a ChunkedArray
### Environment info
datasets v2.14.5
arrow v1.2.3
pyarrow v12.0.1 | null | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6399/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6399/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6397/comments | https://api.github.com/repos/huggingface/datasets/issues/6397/events | https://github.com/huggingface/datasets/issues/6397 | 1,987,622,152 | I_kwDODunzps52eLUI | 6,397 | Raise a different exception for inexisting dataset vs files without known extension | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | COLLABORATOR | null | See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557
We have the same error for:
- https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist
- https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files without a known extension
```
>>> import datasets
>>> datasets.get_dataset_config_names('severo/a_dataset_that_does_not_exist')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1508, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /home/slesage/hf/datasets-server/services/worker/severo/a_dataset_that_does_not_exist/a_dataset_that_does_not_exist.py or any data file in the same directory. Couldn't find 'severo/a_dataset_that_does_not_exist' on the Hugging Face Hub either: FileNotFoundError: Dataset 'severo/a_dataset_that_does_not_exist' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.
>>> datasets.get_dataset_config_names('severo/test_files_without_extension')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1508, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /home/slesage/hf/datasets-server/services/worker/severo/test_files_without_extension/test_files_without_extension.py or any data file in the same directory. Couldn't find 'severo/test_files_without_extension' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in severo/test_files_without_extension.
```
To differentiate, we must parse the error message (only the end is different). We should have a different exception for these two errors. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6397/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6397/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6396/comments | https://api.github.com/repos/huggingface/datasets/issues/6396/events | https://github.com/huggingface/datasets/issues/6396 | 1,987,308,077 | I_kwDODunzps52c-ot | 6,396 | Issue with pyarrow 14.0.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Looks like we should stop using `PyExtensionType` and use `ExtensionType` instead\r\n\r\nsee https://github.com/apache/arrow/commit/f14170976372436ec1d03a724d8d3f3925484ecf",
"https://github.com/huggingface/datasets-server/pull/2089#pullrequestreview-1724449532\r\n\r\n> Yes, I understand now: they have disabled their `PyExtensionType` and we use it in `datasets` for arrays... ",
"related?\r\n\r\nhttps://huggingface.co/datasets/ssbuild/tools_data/discussions/1#654e663b77c8ec680d10479c",
"> related?\r\n>\r\n> https://huggingface.co/datasets/ssbuild/tools_data/discussions/1#654e663b77c8ec680d10479c\r\n\r\nNo, related to https://github.com/huggingface/datasets/issues/5706",
"Running the following is a workaround:\r\n\r\n```\r\nimport pyarrow\r\npyarrow.PyExtensionType.set_auto_load(True)\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,699 | 1970-01-01T00:00:00.000001 | COLLABORATOR | null | See https://github.com/huggingface/datasets-server/pull/2089 for reference
```
from datasets import (Array2D, Dataset, Features)
feature_type = Array2D(shape=(2, 2), dtype="float32")
content = [[0.0, 0.0], [0.0, 0.0]]
features = Features({"col": feature_type})
dataset = Dataset.from_dict({"col": [content]}, features=features)
```
generates
```
/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:648: FutureWarning: pyarrow.PyExtensionType is deprecated and will refuse deserialization by default. Instead, please derive from pyarrow.ExtensionType and implement your own serialization mechanism.
pa.PyExtensionType.__init__(self, self.storage_dtype)
/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:1661: RuntimeWarning: pickle-based deserialization of pyarrow.PyExtensionType subclasses is disabled by default; if you only ingest trusted data files, you may re-enable this using `pyarrow.PyExtensionType.set_auto_load(True)`.
In the future, Python-defined extension subclasses should derive from pyarrow.ExtensionType (not pyarrow.PyExtensionType) and implement their own serialization mechanism.
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:1661: FutureWarning: pyarrow.PyExtensionType is deprecated and will refuse deserialization by default. Instead, please derive from pyarrow.ExtensionType and implement your own serialization mechanism.
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 924, in from_dict
return cls(pa_table, info=info, split=split)
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 693, in __init__
inferred_features = Features.from_arrow_schema(arrow_table.schema)
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1661, in from_arrow_schema
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1661, in <dictcomp>
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1381, in generate_from_arrow_type
return Value(dtype=_arrow_to_datasets_dtype(pa_type))
File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 111, in _arrow_to_datasets_dtype
raise ValueError(f"Arrow type {arrow_type} does not have a datasets dtype equivalent.")
ValueError: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6396/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6396/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6395/comments | https://api.github.com/repos/huggingface/datasets/issues/6395/events | https://github.com/huggingface/datasets/issues/6395 | 1,986,484,124 | I_kwDODunzps52Z1ec | 6,395 | Add ability to set lock type | {
"avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4",
"events_url": "https://api.github.com/users/leoleoasd/events{/privacy}",
"followers_url": "https://api.github.com/users/leoleoasd/followers",
"following_url": "https://api.github.com/users/leoleoasd/following{/other_user}",
"gists_url": "https://api.github.com/users/leoleoasd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leoleoasd",
"id": 37735580,
"login": "leoleoasd",
"node_id": "MDQ6VXNlcjM3NzM1NTgw",
"organizations_url": "https://api.github.com/users/leoleoasd/orgs",
"received_events_url": "https://api.github.com/users/leoleoasd/received_events",
"repos_url": "https://api.github.com/users/leoleoasd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leoleoasd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoleoasd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leoleoasd",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"We've replaced our filelock implementation with the `filelock` package, so their repo is the right place to request this feature.\r\n\r\nIn the meantime, the following should work: \r\n```python\r\nimport filelock\r\nfilelock.FileLock = filelock.SoftFileLock\r\n\r\nimport datasets\r\n...\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
Allow setting file lock type, maybe from an environment variable
Currently, it only depends on whether fnctl is available:
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16
### Motivation
In my environment, flock isn't supported on a network attached drive
### Your contribution
I'll be happy to submit a pr. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6395/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6395/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6394/comments | https://api.github.com/repos/huggingface/datasets/issues/6394/events | https://github.com/huggingface/datasets/issues/6394 | 1,985,947,116 | I_kwDODunzps52XyXs | 6,394 | TorchFormatter images (H, W, C) instead of (C, H, W) format | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Here's a PR for that. https://github.com/huggingface/datasets/pull/6402\r\n\r\nIt's not backward compatible, unfortunately. ",
"Just ran into this working on data lib that's attempting to achieve common interfaces across hf datasets, webdataset, native torch style datasets. The defacto standards for image tensors are numpy == HWC, torch.Tensor == CHW. \r\n\r\nI had to drop use of 'torch' formatting because as is (H, W, C) makes it incompatible with pretty much all standard torch vision processing (torchvision, etc) including model inputs themselves... not sure what the breakage scope would be, but might be worth considering a breaking change since I'm not aware of many use cases where a torch.Tensor image is expected to be in HWC form. And if I set the format to 'torch', I'd expect to be able to apply torchvision transforms, etc directly to the output...\r\n\r\nEDIT: For 'torch' output to be compatible with torch conventions (namely torchvision for images), should follow this https://pytorch.org/vision/0.17/transforms.html#supported-input-types-and-conventions\r\n\r\nattn @lhoestq \r\n\r\n",
"We can define something like `.with_format(\"torch\", image_data_format=\"channels_first\")` and recommend using this in the docs maybe ? also cc @NielsRogge ",
"Sounds good to me. I guess it's not allowed to use the channels first format by default for backwards compatibility purposes?",
"This works, but am wondering how widespread the use of the function is for image datasets? My hunch would be that it's not used widely enough with image datasets to favour backwards compat (keeping default channels_last) over clumsiness of needing this to be 'correct' for typical use.. but don't have the data to back that up.",
"I see. I just checked in the HF libraries and it shouldn't break anything. And to be consistent with them we should actually use C H W. For example `transformers` image processors use C H W by default too.\r\n\r\nSo I'm ok with doing a breaking change to make it consistent with `transformers`, `torchvision`, etc.",
"Since it is quite connected, the proposed PR #6402 will not work for monochrome `PIL` images since they only have 2 dimensions as `numpy `arrays. [Torchvision ](https://pytorch.org/vision/stable/_modules/torchvision/transforms/functional.html#pil_to_tensor) adds a channel before permuting. Would that make sense here as well?",
"@Modexus yes, indeed that would make sense as torch expects 1, H, W for monochrome, not H,W as you'd often see in numpy (via PIL), OpenCV, etc.\r\n\r\nThe reference should be the torchvision fn https://pytorch.org/vision/main/_modules/torchvision/transforms/functional.html#pil_to_tensor",
"My PR now should handle monochrome PIL image. Thanks for the heads up :)"
] | 1970-01-01T00:00:00.000001 | 1,712 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy.
However, pytorch normally uses (C, H, W) format.
Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways.
If not using the format it is possible to directly use torchvision transforms but any non-transformed value will not be a tensor.
Is there a reason for this choice?
### Steps to reproduce the bug
```python
from datasets import Dataset, Features, Audio, Image
images = ["path/to/image.png"] * 10
features = Features({"image": Image()})
ds = Dataset.from_dict({"image": images}, features=features)
ds = ds.with_format("torch")
ds[0]["image"].shape
```
```python
torch.Size([512, 512, 4])
```
### Expected behavior
```python
from datasets import Dataset, Features, Audio, Image
images = ["path/to/image.png"] * 10
features = Features({"image": Image()})
ds = Dataset.from_dict({"image": images}, features=features)
ds = ds.with_format("torch")
ds[0]["image"].shape
```
```python
torch.Size([4, 512, 512])
```
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-6.5.9-100.fc37.x86_64-x86_64-with-glibc2.31
- Python version: 3.11.6
- Huggingface_hub version: 0.18.0
- PyArrow version: 14.0.1
- Pandas version: 2.1.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus",
"user_view_type": "public"
} | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6394/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6394/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6393/comments | https://api.github.com/repos/huggingface/datasets/issues/6393/events | https://github.com/huggingface/datasets/issues/6393 | 1,984,913,259 | I_kwDODunzps52T19r | 6,393 | Filter occasionally hangs | {
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dakinggg",
"id": 43149077,
"login": "dakinggg",
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dakinggg",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"It looks like I may not be the first to encounter this: https://github.com/huggingface/datasets/issues/3172",
"Adding some more information, it seems to occur more frequently with large (millions of samples) datasets.",
"More information. My code is structured as (1) load (2) map (3) filter (4) filter. It was always the second filter that failed. Combining the two filters into one seems to reliably work.",
"@lhoestq it'd be great if someone had a chance to look at this. I suspect it is impacting many users given the other issue that I linked.",
"Hi ! Sorry for the late response. Was it happening after the first or the second filter ?\r\n\r\nIt looks like an issue with the garbage collector (which makes it random). Maybe datasets created with `filter` are not always handled properly ? cc @mariosasko",
"It was after the second filter (and combining the two filters into one seemingly resolved it). I obviously haven't tried all settings to know that these details are causal, but it did work for me.",
"Thanks, that's good to know.\r\n\r\nThe stacktrace suggests an issue when `del self._indices` is called, which happens when a filtered dataset falls out of scope. The indices are a PyArrow table memory mapped from disk, so I'm not quite sure how calling `del` on it can cause this issue. We do `del self._indices` to make sure the file on disk is not used anymore by the current process and avoid e.g. permission errors.\r\n\r\nHopefully we can find a way to reproduce this error, otherwise it will be quite hard to understand what happened",
"Yeah, I have a reliable repro, but it is not even close to minimal and uses a dataset I can't share. Perhaps you could try getting close to my setting.\r\n\r\n(1) make a large (~20GB) jsonl with prompt/response pairs\r\n(2) load it on a linux machine (`dataset = load_dataset(...)`)\r\n(3) map a tokenizer to it, with multiprocessing (`tokenized_dataset = dataset.map(...)`)\r\n(4) filter it once based on something, with multiprocessing (`filtered_1 = tokenized_dataset.filter(...)`)\r\n(5) filter it again based on something, with multiprocessing (`filtered_2 = filtered_1.filter(...)`)\r\n\r\nI included the variable names just in case it is relevant that I was creating new datasets each time, not overwriting the same variable.",
"@lhoestq I have another version of the repro that seems fairly reliably. I have lots of jsonl files, and I iteratively load each one with `load_dataset('json', data_files='path/to/my/file.jsonl', streaming=False, split='train')` and then `dataset.map(..., num_proc=<int>)`. This iteration hangs in a random place each time. So seems like there is a bug that hits with _some_ frequency.",
"With `num_proc=None` it works fine.",
"I am also having similar issue to #3172 when trying to tokenize the data. My dataset contains 10M samples. Is there anything that could be done without having to split up the processing into multiple datasets?"
] | 1970-01-01T00:00:00.000001 | 1,709 | null | NONE | null | ### Describe the bug
A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm)
There is a trace produced
```
Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", line 1366, in __del__
if hasattr(self, "_indices"):
File "/usr/lib/python3/dist-packages/composer/core/engine.py", line 123, in sigterm_handler
sys.exit(128 + signal)
SystemExit: 143
```
but I'm not sure if the trace is actually from `datasets`, or from surrounding code that is trying to clean up after datasets gets stuck.
Unfortunately I can't reproduce this issue anywhere close to reliably. It happens infrequently when using `num_procs > 1`. Anecdotally I started seeing it when using larger datasets (~10M samples).
### Steps to reproduce the bug
N/A see description
### Expected behavior
map/filter calls always complete sucessfully
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6393/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6393/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6392/comments | https://api.github.com/repos/huggingface/datasets/issues/6392/events | https://github.com/huggingface/datasets/issues/6392 | 1,984,369,545 | I_kwDODunzps52RxOJ | 6,392 | `push_to_hub` is not robust to hub closing connection | {
"avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4",
"events_url": "https://api.github.com/users/msis/events{/privacy}",
"followers_url": "https://api.github.com/users/msis/followers",
"following_url": "https://api.github.com/users/msis/following{/other_user}",
"gists_url": "https://api.github.com/users/msis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/msis",
"id": 577139,
"login": "msis",
"node_id": "MDQ6VXNlcjU3NzEzOQ==",
"organizations_url": "https://api.github.com/users/msis/orgs",
"received_events_url": "https://api.github.com/users/msis/received_events",
"repos_url": "https://api.github.com/users/msis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/msis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/msis",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! We made some improvements to `push_to_hub` to make it more robust a couple of weeks ago but haven't published a release in the meantime, so it would help if you could install `datasets` from `main` (`pip install https://github.com/huggingface/datasets`) and let us know if this improved version of `push_to_hub` resolves the issue (in case the `ConnectionError` happens, re-running `push_to_hub` should be faster now).\r\n\r\nAlso, note that the previous implementation retries the upload, but sometimes this is not enough, so re-running the op is the only option.",
"The update helped push more data.\r\nHowever it still crashed a little later:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 270, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hf-hub-lfs-us-east-1.s3.us-east-1.amazonaws.com/repos/6c/33/6c33b3be1463a656e43c7a4f2d43c4a1cdae6e9d81fff87f69167ef25ccb1b88/5f53cb57cf2a52ca0d4c2166a69a6714c64fcdbb7cb8936dfa5b11ac60058e5f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA2JU7TKAQFN2FTF47%2F20231110%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231110T011254Z&X-Amz-Expires=86400&X-Amz-Signature=74e3e33c09ac4e7c6ac887aaee8d489f068869abbe1ee6d58a910fb18d0601d4&X-Amz-SignedHeaders=host&partNumber=13&uploadId=kQwunNkunfmT9D8GulQu_ufw1BTZtRA6wEUI4hnYOjytfdf.GKxDETgMr4wm8_0WNF2yGaNco_0h3JAGm4l9KV1N0nqr5XXyUCbs1ROmHP475fn9FIhc1umWQLEDc97V&x-id=UploadPart\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 391, in _wrapped_lfs_upload\r\n lfs_upload(operation=operation, lfs_batch_action=batch_action, token=token)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 223, in lfs_upload\r\n _upload_multi_part(operation=operation, header=header, chunk_size=chunk_size, upload_url=upload_action[\"href\"])\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 319, in _upload_multi_part\r\n else _upload_parts_iteratively(operation=operation, sorted_parts_urls=sorted_parts_urls, chunk_size=chunk_size)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 376, in _upload_parts_iteratively\r\n hf_raise_for_status(part_upload_res)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 330, in hf_raise_for_status\r\n raise HfHubHTTPError(str(e), response=response) from e\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://hf-hub-lfs-us-east-1.s3.us-east-1.amazonaws.com/repos/6c/33/6c33b3be1463a656e43c7a4f2d43c4a1cdae6e9d81fff87f69167ef25ccb1b88/5f53cb57cf2a52ca0d4c2166a69a6714c64fcdbb7cb8936dfa5b11ac60058e5f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA2JU7TKAQFN2FTF47%2F20231110%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231110T011254Z&X-Amz-Expires=86400&X-Amz-Signature=74e3e33c09ac4e7c6ac887aaee8d489f068869abbe1ee6d58a910fb18d0601d4&X-Amz-SignedHeaders=host&partNumber=13&uploadId=kQwunNkunfmT9D8GulQu_ufw1BTZtRA6wEUI4hnYOjytfdf.GKxDETgMr4wm8_0WNF2yGaNco_0h3JAGm4l9KV1N0nqr5XXyUCbs1ROmHP475fn9FIhc1umWQLEDc97V&x-id=UploadPart\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"convert_to_hf.py\", line 121, in <module>\r\n main()\r\n File \"convert_to_hf.py\", line 109, in main\r\n audio_dataset.push_to_hub(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1699, in push_to_hub\r\n split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 5215, in _push_parquet_shards_to_hub\r\n _retry(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 290, in _retry\r\n return func(*func_args, **func_kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 3665, in preupload_lfs_files\r\n _upload_lfs_files(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 401, in _upload_lfs_files\r\n _wrapped_lfs_upload(filtered_actions[0])\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 393, in _wrapped_lfs_upload\r\n raise RuntimeError(f\"Error while uploading '{operation.path_in_repo}' to the Hub.\") from exc\r\nRuntimeError: Error while uploading 'batch_20/train-00206-of-00261.parquet' to the Hub.\r\n```",
"I think the previous implementation was actually better: it pushes to the hub every shard. So if it fails, as long as the shards have the same checksum, it will skip the ones that have been pushed.\r\n\r\nThe implementation in `main` pushes commits at the end, so when it fails, there are no commits and therefore restarts from the beginning every time.\r\n\r\nBelow is the another error log from another run with `main`. I've reverting back to the current release as it does the job for me.\r\n\r\n```\r\nUploading the dataset shards: 86%|████████▌ | 224/261 [21:46<03:35, 5.83s/it]s]\r\nTraceback (most recent call last):\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 270, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hf-hub-lfs-us-east-1.s3.us-east-1.amazonaws.com/repos/6c/33/6c33b3be1463a656e43c7a4f2d43c4a1cdae6e9d81fff87f69167ef25ccb1b88/97e68d7a5d4a747ffaa249fc09798e961d621fe4170599e6100197f7733f321d?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA2JU7TKAQFN2FTF47%2F20231110%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231110T145155Z&X-Amz-Expires=86400&X-Amz-Signature=5341e4b34dc325737f92dc9005c4a31e4d3f9a3d3d853b267e01915260acf629&X-Amz-SignedHeaders=host&partNumber=27&uploadId=NRD0izEWv7MPtC2bYrm5VJ4XgIbHctKNguR7zS1UhGOOrXwBJvigrOywBvQBnS9sxiy0J0ma9sNog8S13nIdTdE9p60MIITTstUFeKvLHSxpU.a527QED1JVYzJ.9xA0&x-id=UploadPart\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 391, in _wrapped_lfs_upload\r\n lfs_upload(operation=operation, lfs_batch_action=batch_action, token=token)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 223, in lfs_upload\r\n _upload_multi_part(operation=operation, header=header, chunk_size=chunk_size, upload_url=upload_action[\"href\"])\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 319, in _upload_multi_part\r\n else _upload_parts_iteratively(operation=operation, sorted_parts_urls=sorted_parts_urls, chunk_size=chunk_size)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 376, in _upload_parts_iteratively\r\n hf_raise_for_status(part_upload_res)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 330, in hf_raise_for_status\r\n raise HfHubHTTPError(str(e), response=response) from e\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://hf-hub-lfs-us-east-1.s3.us-east-1.amazonaws.com/repos/6c/33/6c33b3be1463a656e43c7a4f2d43c4a1cdae6e9d81fff87f69167ef25ccb1b88/97e68d7a5d4a747ffaa249fc09798e961d621fe4170599e6100197f7733f321d?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA2JU7TKAQFN2FTF47%2F20231110%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231110T145155Z&X-Amz-Expires=86400&X-Amz-Signature=5341e4b34dc325737f92dc9005c4a31e4d3f9a3d3d853b267e01915260acf629&X-Amz-SignedHeaders=host&partNumber=27&uploadId=NRD0izEWv7MPtC2bYrm5VJ4XgIbHctKNguR7zS1UhGOOrXwBJvigrOywBvQBnS9sxiy0J0ma9sNog8S13nIdTdE9p60MIITTstUFeKvLHSxpU.a527QED1JVYzJ.9xA0&x-id=UploadPart\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"convert_to_hf.py\", line 121, in <module>\r\n main()\r\n File \"convert_to_hf.py\", line 109, in main\r\n audio_dataset.push_to_hub(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1699, in push_to_hub\r\n p, glob_pattern_to_regex(PUSH_TO_HUB_WITHOUT_METADATA_CONFIGS_SPLIT_PATTERN_SHARDED)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 5215, in _push_parquet_shards_to_hub\r\n token = token if token is not None else HfFolder.get_token()\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 290, in _retry\r\n return func(*func_args, **func_kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 3665, in preupload_lfs_files\r\n _upload_lfs_files(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 401, in _upload_lfs_files\r\n _wrapped_lfs_upload(filtered_actions[0])\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 393, in _wrapped_lfs_upload\r\n raise RuntimeError(f\"Error while uploading '{operation.path_in_repo}' to the Hub.\") from exc\r\nRuntimeError: Error while uploading 'batch_20/train-00224-of-00261.parquet' to the Hub.\r\n```",
"There's a new error from the hub now:\r\n```\r\nPushing dataset shards to the dataset hub: 49%|████▉ | 128/261 [11:38<12:05, 5.45s/it]\r\nTraceback (most recent call last):\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 270, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/tarteel-ai/tawseem/commit/main\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"convert_to_hf.py\", line 121, in <module>\r\n main()\r\n File \"convert_to_hf.py\", line 109, in main\r\n audio_dataset.push_to_hub(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1641, in push_to_hub\r\n repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 5308, in _push_parquet_shards_to_hub\r\n _retry(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 293, in _retry\r\n raise err\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 290, in _retry\r\n return func(*func_args, **func_kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 1045, in _inner\r\n return fn(self, *args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 3850, in upload_file\r\n commit_info = self.create_commit(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 1045, in _inner\r\n return fn(self, *args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 3237, in create_commit\r\n hf_raise_for_status(commit_resp, endpoint_name=\"commit\")\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 330, in hf_raise_for_status\r\n raise HfHubHTTPError(str(e), response=response) from e\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/tarteel-ai/tawseem/commit/main (Request ID: Root=1-654e48e6-598511b14413bb293fa67084;783522b4-66f9-4f8a-8a74-2accf7cabd17)\r\n\r\nYou have exceeded our hourly quotas for action: commit. We invite you to retry later.\r\n```\r\n\r\nAt least this is more explicit from the server side.",
"> think the previous implementation was actually better: it pushes to the hub every shard. So if it fails, as long as the shards have the same checksum, it will skip the ones that have been pushed.\r\n>\r\n>The implementation in main pushes commits at the end, so when it fails, there are no commits and therefore restarts from the beginning every time.\r\n>\r\n>Below is the another error log from another run with main. I've reverting back to the current release as it does the job for me.\r\n\r\nThe `preupload` step is instant for the already uploaded shards, so only the Parquet conversion is repeated without uploading the actual Parquet data (only to check the SHAs). The previous implementation manually checks the Parquet shard's fingerprint to resume uploading, so the current implementation is cleaner.\r\n\r\n> You have exceeded our hourly quotas for action: commit. We invite you to retry later.\r\n\r\nThis is the problem with the previous implementation. If the number of shards is large, it creates too many commits for the Hub in a short period.",
"But I agree that the `500 Server Error` returned by the Hub is annoying. Earlier today, I also got it on a small 5GB dataset (with 500 MB shards).\r\n\r\n@Wauplin @julien-c Is there something we can do about this?",
"@mariosasko can't do much if AWS raises a HTTP 500 unfortunately (we are simply pushing data to a S3 bucket).\r\nWhat we can do is to add a retry mechanism in the multi-part upload logic here: https://github.com/huggingface/huggingface_hub/blob/c972cba1fecb456a7b3325cdd1fdbcc425f21f94/src/huggingface_hub/lfs.py#L370 :confused: ",
"@Wauplin That code already retries the request using `http_backoff`, no?",
"> That code already retries the request using http_backoff, no?\r\n\r\nCurrently only on HTTP 503 by default. We should add 500 as well (and hope it is a transient error from AWS)",
"Opened a PR to retry in case S3 raises HTTP 500. Will also retry on any `ConnectionError` (connection reset by peer, connection lost,...). Hopefully this should make the upload process more robust to transient errors.",
"I still get the same error, using `push_to_hub`. Using `git lfs` and pushing the files solved it for me.",
"@BEpresent the fix has not been released yet. You can expect a release of `huggingface_hub` (with this fix) today or tomorrow :)"
] | 1970-01-01T00:00:00.000001 | 1,703 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Like to #6172, `push_to_hub` will crash if Hub resets the connection and raise the following error:
```
Pushing dataset shards to the dataset hub: 32%|███▏ | 54/171 [06:38<14:23, 7.38s/it]
Traceback (most recent call last):
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
response.begin()
File "/usr/lib/python3.8/http/client.py", line 316, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.8/http/client.py", line 285, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 799, in urlopen
retries = retries.increment(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/util/retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/packages/six.py", line 769, in reraise
raise value.with_traceback(tb)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
response.begin()
File "/usr/lib/python3.8/http/client.py", line 316, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.8/http/client.py", line 285, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 383, in _wrapped_lfs_upload
lfs_upload(operation=operation, lfs_batch_action=batch_action, token=token)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 223, in lfs_upload
_upload_multi_part(operation=operation, header=header, chunk_size=chunk_size, upload_url=upload_action["href"])
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 319, in _upload_multi_part
else _upload_parts_iteratively(operation=operation, sorted_parts_urls=sorted_parts_urls, chunk_size=chunk_size)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 375, in _upload_parts_iteratively
part_upload_res = http_backoff("PUT", part_upload_url, data=fileobj_slice)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 258, in http_backoff
response = session.request(method=method, url=url, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 63, in send
return super().send(request, *args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2bab8c06-b701-4266-aead-fe2e0dc0e3ed)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "convert_to_hf.py", line 116, in <module>
main()
File "convert_to_hf.py", line 108, in main
audio_dataset.push_to_hub(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/dataset_dict.py", line 1641, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 5308, in _push_parquet_shards_to_hub
_retry(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 290, in _retry
return func(*func_args, **func_kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file
commit_info = self.create_commit(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 2695, in create_commit
upload_lfs_files(
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 393, in upload_lfs_files
_wrapped_lfs_upload(filtered_actions[0])
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 385, in _wrapped_lfs_upload
raise RuntimeError(f"Error while uploading '{operation.path_in_repo}' to the Hub.") from exc
RuntimeError: Error while uploading 'batch_19/train-00054-of-00171-932beb4082c034bf.parquet' to the Hub.
```
The function should retry if the operations fails, or at least offer a way to recover after such a failure.
Right now, calling the function again will start sending all the parquets files leading to duplicates in the repository, with no guarantee that it will actually be pushed.
Previously, it would crash with an error 400 #4677 .
### Steps to reproduce the bug
Any large dataset pushed the hub:
```py
audio_dataset.push_to_hub(
repo_id="org/dataset",
)
```
### Expected behavior
`push_to_hub` should have an option for max retries or resume.
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.15.0-1044-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6392/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6392/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6389/comments | https://api.github.com/repos/huggingface/datasets/issues/6389/events | https://github.com/huggingface/datasets/issues/6389 | 1,983,545,744 | I_kwDODunzps52OoGQ | 6,389 | Index 339 out of range for dataset of size 339 <-- save_to_file() | {
"avatar_url": "https://avatars.githubusercontent.com/u/20318973?v=4",
"events_url": "https://api.github.com/users/jaggzh/events{/privacy}",
"followers_url": "https://api.github.com/users/jaggzh/followers",
"following_url": "https://api.github.com/users/jaggzh/following{/other_user}",
"gists_url": "https://api.github.com/users/jaggzh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jaggzh",
"id": 20318973,
"login": "jaggzh",
"node_id": "MDQ6VXNlcjIwMzE4OTcz",
"organizations_url": "https://api.github.com/users/jaggzh/orgs",
"received_events_url": "https://api.github.com/users/jaggzh/received_events",
"repos_url": "https://api.github.com/users/jaggzh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jaggzh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaggzh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jaggzh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi! Can you make the above reproducer self-contained by adding code that generates the data?",
"I managed a workaround eventually but I don't know what it was (I made a lot of changes to seq2seq). I'll try to include generating code in the future. (If I close, I don't know if you see it. Feel free to close; I'll re-open if I encounter it again (if I can))."
] | 1970-01-01T00:00:00.000001 | 1,700 | null | NONE | null | ### Describe the bug
When saving out some Audio() data.
The data is audio recordings with associated 'sentences'.
(They use the audio 'bytes' approach because they're clips within audio files).
Code is below the traceback (I can't upload the voice audio/text (it's not even me)).
```
Traceback (most recent call last):
File "/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py", line 156, in <module>
create_dataset(args)
File "/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py", line 138, in create_dataset
hf_dataset.save_to_disk(args.outds, max_shard_size='50MB')
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 1531, in save_to_disk
for kwargs in kwargs_per_job:
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 1508, in <genexpr>
"shard": self.shard(num_shards=num_shards, index=shard_idx, contiguous=True),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 4609, in shard
return self.select(
^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 556, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 3797, in select
return self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 556, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 3857, in _select_contiguous
_check_valid_indices_value(start, len(self))
File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 648, in _check_valid_indices_value
raise IndexError(f"Index {index} out of range for dataset of size {size}.")
IndexError: Index 339 out of range for dataset of size 339.
```
### Steps to reproduce the bug
(I had to set the default max batch size down due to a different bug... or maybe it's related: https://github.com/huggingface/datasets/issues/5717)
```python3
#!/usr/bin/env python3
import argparse
import os
from pathlib import Path
import soundfile as sf
import datasets
datasets.config.DEFAULT_MAX_BATCH_SIZE=35
from datasets import Features, Array2D, Value, Dataset, Sequence, Audio
import numpy as np
import librosa
import sys
import soundfile as sf
import io
import logging
logging.basicConfig(level=logging.DEBUG, filename='debug.log', filemode='w',
format='%(name)s - %(levelname)s - %(message)s')
# Define the arguments for the command-line interface
def parse_args():
parser = argparse.ArgumentParser(description="Create a Huggingface dataset from labeled audio files.")
parser.add_argument("--indir_labeled", action="append", help="Directory containing labeled audio files.", required=True)
parser.add_argument("--outds", help="Path to save the dataset file.", required=True)
parser.add_argument("--max_clips", type=int, help="Max count of audio samples to add to the dataset.", default=None)
parser.add_argument("-r", "--sr", type=int, help="Sample rate for the audio files.", default=16000)
parser.add_argument("--no-resample", action="store_true", help="Disable resampling of the audio files.")
parser.add_argument("--max_clip_secs", type=float, help="Max length of audio clips in seconds.", default=3.0)
parser.add_argument("-v", "--verbose", action='count', default=1, help="Increase verbosity")
return parser.parse_args()
# Convert the NumPy arrays to audio bytes in WAV format
def numpy_to_bytes(audio_array, sampling_rate=16000):
with io.BytesIO() as bytes_io:
sf.write(bytes_io, audio_array, samplerate=sampling_rate,
format='wav', subtype='FLOAT') # float32
return bytes_io.getvalue()
# Function to find audio and label files in a directory
def find_audio_label_pairs(indir_labeled):
audio_label_pairs = []
for root, _, files in os.walk(indir_labeled):
for file in files:
if file.endswith(('.mp3', '.wav', '.aac', '.flac')):
audio_path = Path(root) / file
if args.verbose>1:
print(f'File: {audio_path}')
label_path = audio_path.with_suffix('.labels.txt')
if label_path.exists():
if args.verbose>0:
print(f' Pair: {audio_path}')
audio_label_pairs.append((audio_path, label_path))
return audio_label_pairs
def process_audio_label_pair(audio_path, label_path, sampling_rate, no_resample, max_clip_secs):
# Read the label file
with open(label_path, 'r') as label_file:
labels = label_file.readlines()
# Load the full audio file
full_audio, current_sr = sf.read(audio_path)
if not no_resample and current_sr != sampling_rate:
# You can use librosa.resample here if librosa is available
full_audio = librosa.resample(full_audio, orig_sr=current_sr, target_sr=sampling_rate)
audio_segments = []
sentences = []
# Process each label
for label in labels:
start_secs, end_secs, label_text = label.strip().split('\t')
start_sample = int(float(start_secs) * sampling_rate)
end_sample = int(float(end_secs) * sampling_rate)
# Extract segment and truncate or pad to max_clip_secs
audio_segment = full_audio[start_sample:end_sample]
max_samples = int(max_clip_secs * sampling_rate)
if len(audio_segment) > max_samples: # Truncate
audio_segment = audio_segment[:max_samples]
elif len(audio_segment) < max_samples: # Pad
padding = np.zeros(max_samples - len(audio_segment), dtype=audio_segment.dtype)
audio_segment = np.concatenate((audio_segment, padding))
audio_segment = numpy_to_bytes(audio_segment)
audio_data = {
'path': str(audio_path),
'bytes': audio_segment,
}
audio_segments.append(audio_data)
sentences.append(label_text)
return audio_segments, sentences
# Main function to create the dataset
def create_dataset(args):
audio_label_pairs = []
for indir in args.indir_labeled:
audio_label_pairs.extend(find_audio_label_pairs(indir))
# Initialize our dataset data
dataset_data = {
'path': [], # This will be a list of strings
'audio': [], # This will be a list of dictionaries
'sentence': [], # This will be a list of strings
}
# Process each audio-label pair and add the data to the dataset
for audio_path, label_path in audio_label_pairs[:args.max_clips]:
audio_segments, sentences = process_audio_label_pair(audio_path, label_path, args.sr, args.no_resample, args.max_clip_secs)
if audio_segments and sentences:
for audio_data, sentence in zip(audio_segments, sentences):
if args.verbose>1:
print(f'Appending {audio_data["path"]}')
dataset_data['path'].append(audio_data['path'])
dataset_data['audio'].append({
'path': audio_data['path'],
'bytes': audio_data['bytes'],
})
dataset_data['sentence'].append(sentence)
features = Features({
'path': Value('string'), # Path is redundant in common voice set also
'audio': Audio(sampling_rate=16000),
'sentence': Value('string'),
})
hf_dataset = Dataset.from_dict(dataset_data, features=features)
for key in dataset_data:
for i, item in enumerate(dataset_data[key]):
if item is None or (isinstance(item, bytes) and len(item) == 0):
logging.error(f"Invalid {key} at index {i}: {item}")
import ipdb; ipdb.set_trace(context=16); pass
hf_dataset.save_to_disk(args.outds, max_shard_size='50MB')
# try:
# hf_dataset.save_to_disk(args.outds)
# except TypeError as e:
# # If there's a TypeError, log the exception and the dataset data that might have caused it
# logging.exception("An error occurred while saving the dataset.")
# import ipdb; ipdb.set_trace(context=16); pass
# for key in dataset_data:
# logging.debug(f"{key} length: {len(dataset_data[key])}")
# if key == 'audio':
# # Log the first 100 bytes of the audio data to avoid huge log files
# for i, audio in enumerate(dataset_data[key]):
# logging.debug(f"Audio {i}: {audio['bytes'][:100]}")
# raise
# Run the script
if __name__ == "__main__":
args = parse_args()
create_dataset(args)
```
### Expected behavior
It shouldn't fail.
### Environment info
- `datasets` version: 2.14.7.dev0
- Platform: Linux-6.1.0-13-amd64-x86_64-with-glibc2.36
- Python version: 3.11.2
- `huggingface_hub` version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.2
- `fsspec` version: 2023.9.2
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6389/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6389/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6388/comments | https://api.github.com/repos/huggingface/datasets/issues/6388/events | https://github.com/huggingface/datasets/issues/6388 | 1,981,136,093 | I_kwDODunzps52Fbzd | 6,388 | How to create 3d medical imgae dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/41177312?v=4",
"events_url": "https://api.github.com/users/QingYunA/events{/privacy}",
"followers_url": "https://api.github.com/users/QingYunA/followers",
"following_url": "https://api.github.com/users/QingYunA/following{/other_user}",
"gists_url": "https://api.github.com/users/QingYunA/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/QingYunA",
"id": 41177312,
"login": "QingYunA",
"node_id": "MDQ6VXNlcjQxMTc3MzEy",
"organizations_url": "https://api.github.com/users/QingYunA/orgs",
"received_events_url": "https://api.github.com/users/QingYunA/received_events",
"repos_url": "https://api.github.com/users/QingYunA/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/QingYunA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QingYunA/subscriptions",
"type": "User",
"url": "https://api.github.com/users/QingYunA",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,699 | null | NONE | null | ### Feature request
I am newer to huggingface, after i look up `datasets` docs, I can't find how to create the dataset contains 3d medical image (ends with '.mhd', '.dcm', '.nii')
### Motivation
help us to upload 3d medical dataset to huggingface!
### Your contribution
I'll submit a PR if I find a way to add this feature | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6388/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6388/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6387/comments | https://api.github.com/repos/huggingface/datasets/issues/6387/events | https://github.com/huggingface/datasets/issues/6387 | 1,980,224,020 | I_kwDODunzps52B9IU | 6,387 | How to load existing downloaded dataset ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/73068772?v=4",
"events_url": "https://api.github.com/users/liming-ai/events{/privacy}",
"followers_url": "https://api.github.com/users/liming-ai/followers",
"following_url": "https://api.github.com/users/liming-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/liming-ai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liming-ai",
"id": 73068772,
"login": "liming-ai",
"node_id": "MDQ6VXNlcjczMDY4Nzcy",
"organizations_url": "https://api.github.com/users/liming-ai/orgs",
"received_events_url": "https://api.github.com/users/liming-ai/received_events",
"repos_url": "https://api.github.com/users/liming-ai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liming-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liming-ai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liming-ai",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Feel free to use `dataset.save_to_disk(...)`, then scp the directory containing the saved dataset and reload it on your other machine using `dataset = load_from_disk(...)`"
] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | NONE | null | Hi @mariosasko @lhoestq @katielink
Thanks for your contribution and hard work.
### Feature request
First, I download a dataset as normal by:
```
from datasets import load_dataset
dataset = load_dataset('username/data_name', cache_dir='data')
```
The dataset format in `data` directory will be:
```
-data
|-data_name
|-test-00000-of-00001-bf4c733542e35fcb.parquet
|-train-00000-of-00001-2a1df75c6bce91ab.parquet
```
Then I use SCP to clone this dataset into another machine, and then try:
```
from datasets import load_dataset
dataset = load_dataset('data/data_name') # load from local path
```
This leads to re-generating training and validation split for each time, and the disk quota will be duplicated occupation.
How can I just load the dataset without generating and saving these splits again?
### Motivation
I do not want to download the same dataset in two machines, scp is much faster and better than HuggingFace API. I hope we can directly load the downloaded datasets (.parquest)
### Your contribution
Please refer to the feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/73068772?v=4",
"events_url": "https://api.github.com/users/liming-ai/events{/privacy}",
"followers_url": "https://api.github.com/users/liming-ai/followers",
"following_url": "https://api.github.com/users/liming-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/liming-ai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liming-ai",
"id": 73068772,
"login": "liming-ai",
"node_id": "MDQ6VXNlcjczMDY4Nzcy",
"organizations_url": "https://api.github.com/users/liming-ai/orgs",
"received_events_url": "https://api.github.com/users/liming-ai/received_events",
"repos_url": "https://api.github.com/users/liming-ai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liming-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liming-ai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liming-ai",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6387/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6387/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6386/comments | https://api.github.com/repos/huggingface/datasets/issues/6386/events | https://github.com/huggingface/datasets/issues/6386 | 1,979,878,014 | I_kwDODunzps52Aop- | 6,386 | Formatting overhead | {
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https://api.github.com/users/d-miketa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d-miketa",
"id": 320321,
"login": "d-miketa",
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"organizations_url": "https://api.github.com/users/d-miketa/orgs",
"received_events_url": "https://api.github.com/users/d-miketa/received_events",
"repos_url": "https://api.github.com/users/d-miketa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d-miketa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-miketa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d-miketa",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Ah I think the `line-profiler` log is off-by-one and it is in fact the `extract_batch` method that's taking forever. Will investigate further.",
"I tracked it down to a quirk of my setup. Apologies."
] | 1970-01-01T00:00:00.000001 | 1,699 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new instances of `self.python_arrow_extractor`. I admit I'm confused why that could be the case - as far as I can tell there's no complex `__init__` logic to execute.

### Steps to reproduce the bug
1. Set up a dataset `ds` with potentially several (4+) columns (not sure if this is necessary, but it did at one point of the investigation make overhead worse)
2. Process it using a custom transform, `ds = ds.with_transform(transform_func)`
3. Decorate this function https://github.com/huggingface/datasets/blob/main/src/datasets/formatting/formatting.py#L512 with `@profile` from https://pypi.org/project/line-profiler/
4. Profile with `$ kernprof -l script_to_profile.py`
### Expected behavior
Batch formatting should have acceptable overhead.
### Environment info
```
datasets=2.14.6
pyarrow=14.0.0
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https://api.github.com/users/d-miketa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d-miketa",
"id": 320321,
"login": "d-miketa",
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"organizations_url": "https://api.github.com/users/d-miketa/orgs",
"received_events_url": "https://api.github.com/users/d-miketa/received_events",
"repos_url": "https://api.github.com/users/d-miketa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d-miketa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-miketa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d-miketa",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6386/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6386/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6385/comments | https://api.github.com/repos/huggingface/datasets/issues/6385/events | https://github.com/huggingface/datasets/issues/6385 | 1,979,308,338 | I_kwDODunzps51-dky | 6,385 | Get an error when i try to concatenate the squad dataset with my own dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/149378500?v=4",
"events_url": "https://api.github.com/users/CCDXDX/events{/privacy}",
"followers_url": "https://api.github.com/users/CCDXDX/followers",
"following_url": "https://api.github.com/users/CCDXDX/following{/other_user}",
"gists_url": "https://api.github.com/users/CCDXDX/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CCDXDX",
"id": 149378500,
"login": "CCDXDX",
"node_id": "U_kgDOCOdVxA",
"organizations_url": "https://api.github.com/users/CCDXDX/orgs",
"received_events_url": "https://api.github.com/users/CCDXDX/received_events",
"repos_url": "https://api.github.com/users/CCDXDX/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CCDXDX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CCDXDX/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CCDXDX",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The `answers.text` field in the JSON dataset needs to be a list of strings, not a string.\r\n\r\nSo, here is the fixed code:\r\n```python\r\nfrom huggingface_hub import notebook_login\r\nfrom datasets import load_dataset\r\n\r\n\r\n\r\nnotebook_login(\"mymailadresse\", \"mypassword\")\r\nsquad = load_dataset(\"squad\", split=\"train[:5000]\")\r\nsquad = squad.train_test_split(test_size=0.2)\r\ndataset1 = squad[\"train\"]\r\n\r\n\r\n\r\n\r\nimport json\r\n\r\nmybase = [\r\n {\r\n \"id\": \"1\",\r\n \"context\": \"She lives in Nantes\",\r\n \"question\": \"Where does she live?\",\r\n \"answers\": {\r\n \"text\": [\"Nantes\"],\r\n \"answer_start\": [13],\r\n }\r\n }\r\n]\r\n\r\n\r\n\r\n\r\n# Save the data to a JSON file\r\njson_file_path = r\"data\"\r\nwith open(json_file_path, \"w\", encoding= \"utf-8\") as json_file:\r\n json.dump(mybase, json_file, indent=4)\r\n\r\n\r\n\r\n\r\n# Load the JSON file as a dataset\r\ncustom_dataset = load_dataset(\"json\", data_files=json_file_path, features=dataset1.features)\r\n\r\n\r\n# Access the train split\r\ntrain_dataset = custom_dataset[\"train\"]\r\n\r\n\r\nfrom datasets import concatenate_datasets\r\n\r\n\r\n# Concatenate the datasets\r\nconcatenated_dataset = concatenate_datasets([train_dataset, dataset1])\r\n```",
"Thank you @mariosasko for your help ! It works !"
] | 1970-01-01T00:00:00.000001 | 1,699 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Hello,
I'm new here and I need to concatenate the squad dataset with my own dataset i created. I find the following error when i try to do it: Traceback (most recent call last):
Cell In[9], line 1
concatenated_dataset = concatenate_datasets([train_dataset, dataset1])
File ~\anaconda3\Lib\site-packages\datasets\combine.py:213 in concatenate_datasets
return _concatenate_map_style_datasets(dsets, info=info, split=split, axis=axis)
File ~\anaconda3\Lib\site-packages\datasets\arrow_dataset.py:6002 in _concatenate_map_style_datasets
_check_if_features_can_be_aligned([dset.features for dset in dsets])
File ~\anaconda3\Lib\site-packages\datasets\features\features.py:2122 in _check_if_features_can_be_aligned
raise ValueError(
ValueError: The features can't be aligned because the key answers of features {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)} has unexpected type - Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) (expected either {'answer_start': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'text': Value(dtype='string', id=None)} or Value("null").
### Steps to reproduce the bug
```python
from huggingface_hub import notebook_login
from datasets import load_dataset
notebook_login("mymailadresse", "mypassword")
squad = load_dataset("squad", split="train[:5000]")
squad = squad.train_test_split(test_size=0.2)
dataset1 = squad["train"]
import json
mybase = [
{
"id": "1",
"context": "She lives in Nantes",
"question": "Where does she live?",
"answers": {
"text": "Nantes",
"answer_start": [13],
}
}
]
# Save the data to a JSON file
json_file_path = r"C:\Users\mypath\thefile.json"
with open(json_file_path, "w", encoding= "utf-8") as json_file:
json.dump(mybase, json_file, indent=4)
# Load the JSON file as a dataset
custom_dataset = load_dataset("json", data_files=json_file_path)
# Access the train split
train_dataset = custom_dataset["train"]
from datasets import concatenate_datasets
# Concatenate the datasets
concatenated_dataset = concatenate_datasets([train_dataset, dataset1])
```
### Expected behavior
I would expect the two datasets to be concatenated without error. The len(dataset1) is equal to 4000 and the len(train_dataset) is equal to 1 so I would exepect concatenated_dataset to be created and having lenght 4001.
### Environment info
Python 3.11.4 and using windows
Thank you for your help | {
"avatar_url": "https://avatars.githubusercontent.com/u/149378500?v=4",
"events_url": "https://api.github.com/users/CCDXDX/events{/privacy}",
"followers_url": "https://api.github.com/users/CCDXDX/followers",
"following_url": "https://api.github.com/users/CCDXDX/following{/other_user}",
"gists_url": "https://api.github.com/users/CCDXDX/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CCDXDX",
"id": 149378500,
"login": "CCDXDX",
"node_id": "U_kgDOCOdVxA",
"organizations_url": "https://api.github.com/users/CCDXDX/orgs",
"received_events_url": "https://api.github.com/users/CCDXDX/received_events",
"repos_url": "https://api.github.com/users/CCDXDX/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CCDXDX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CCDXDX/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CCDXDX",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6385/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6385/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6384/comments | https://api.github.com/repos/huggingface/datasets/issues/6384/events | https://github.com/huggingface/datasets/issues/6384 | 1,979,117,069 | I_kwDODunzps519u4N | 6,384 | Load the local dataset folder from other place | {
"avatar_url": "https://avatars.githubusercontent.com/u/54439582?v=4",
"events_url": "https://api.github.com/users/OrangeSodahub/events{/privacy}",
"followers_url": "https://api.github.com/users/OrangeSodahub/followers",
"following_url": "https://api.github.com/users/OrangeSodahub/following{/other_user}",
"gists_url": "https://api.github.com/users/OrangeSodahub/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/OrangeSodahub",
"id": 54439582,
"login": "OrangeSodahub",
"node_id": "MDQ6VXNlcjU0NDM5NTgy",
"organizations_url": "https://api.github.com/users/OrangeSodahub/orgs",
"received_events_url": "https://api.github.com/users/OrangeSodahub/received_events",
"repos_url": "https://api.github.com/users/OrangeSodahub/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/OrangeSodahub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OrangeSodahub/subscriptions",
"type": "User",
"url": "https://api.github.com/users/OrangeSodahub",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Solved"
] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | NONE | null | This is from https://github.com/huggingface/diffusers/issues/5573
| {
"avatar_url": "https://avatars.githubusercontent.com/u/54439582?v=4",
"events_url": "https://api.github.com/users/OrangeSodahub/events{/privacy}",
"followers_url": "https://api.github.com/users/OrangeSodahub/followers",
"following_url": "https://api.github.com/users/OrangeSodahub/following{/other_user}",
"gists_url": "https://api.github.com/users/OrangeSodahub/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/OrangeSodahub",
"id": 54439582,
"login": "OrangeSodahub",
"node_id": "MDQ6VXNlcjU0NDM5NTgy",
"organizations_url": "https://api.github.com/users/OrangeSodahub/orgs",
"received_events_url": "https://api.github.com/users/OrangeSodahub/received_events",
"repos_url": "https://api.github.com/users/OrangeSodahub/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/OrangeSodahub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OrangeSodahub/subscriptions",
"type": "User",
"url": "https://api.github.com/users/OrangeSodahub",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6384/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6384/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6383/comments | https://api.github.com/repos/huggingface/datasets/issues/6383/events | https://github.com/huggingface/datasets/issues/6383 | 1,978,189,389 | I_kwDODunzps516MZN | 6,383 | imagenet-1k downloads over and over | {
"avatar_url": "https://avatars.githubusercontent.com/u/6847529?v=4",
"events_url": "https://api.github.com/users/seann999/events{/privacy}",
"followers_url": "https://api.github.com/users/seann999/followers",
"following_url": "https://api.github.com/users/seann999/following{/other_user}",
"gists_url": "https://api.github.com/users/seann999/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/seann999",
"id": 6847529,
"login": "seann999",
"node_id": "MDQ6VXNlcjY4NDc1Mjk=",
"organizations_url": "https://api.github.com/users/seann999/orgs",
"received_events_url": "https://api.github.com/users/seann999/received_events",
"repos_url": "https://api.github.com/users/seann999/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/seann999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seann999/subscriptions",
"type": "User",
"url": "https://api.github.com/users/seann999",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Have you solved this problem?"
] | 1970-01-01T00:00:00.000001 | 1,718 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
What could be causing this?
```
$ python3
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> load_dataset("imagenet-1k")
Downloading builder script: 100%|██████████| 4.72k/4.72k [00:00<00:00, 7.51MB/s]
Downloading readme: 100%|███████████████████| 85.4k/85.4k [00:00<00:00, 510kB/s]
Downloading extra modules: 100%|████████████| 46.4k/46.4k [00:00<00:00, 300kB/s]
Downloading data: 100%|████████████████████| 29.1G/29.1G [19:36<00:00, 24.8MB/s]
Downloading data: 100%|████████████████████| 29.3G/29.3G [08:38<00:00, 56.5MB/s]
Downloading data: 100%|████████████████████| 29.0G/29.0G [09:26<00:00, 51.2MB/s]
Downloading data: 100%|████████████████████| 29.2G/29.2G [09:38<00:00, 50.6MB/s]
Downloading data: 100%|███████████████████▉| 29.2G/29.2G [09:37<00:00, 44.1MB/s^Downloading data: 0%| | 106M/29.1G [00:05<23:49, 20.3MB/s]
```
### Steps to reproduce the bug
See above commands/code
### Expected behavior
imagenet-1k is downloaded
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.15.1
- PyArrow version: 14.0.0
- Pandas version: 1.5.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6847529?v=4",
"events_url": "https://api.github.com/users/seann999/events{/privacy}",
"followers_url": "https://api.github.com/users/seann999/followers",
"following_url": "https://api.github.com/users/seann999/following{/other_user}",
"gists_url": "https://api.github.com/users/seann999/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/seann999",
"id": 6847529,
"login": "seann999",
"node_id": "MDQ6VXNlcjY4NDc1Mjk=",
"organizations_url": "https://api.github.com/users/seann999/orgs",
"received_events_url": "https://api.github.com/users/seann999/received_events",
"repos_url": "https://api.github.com/users/seann999/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/seann999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seann999/subscriptions",
"type": "User",
"url": "https://api.github.com/users/seann999",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6383/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6383/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6382/comments | https://api.github.com/repos/huggingface/datasets/issues/6382/events | https://github.com/huggingface/datasets/issues/6382 | 1,977,400,799 | I_kwDODunzps513L3f | 6,382 | Add CheXpert dataset for vision | {
"avatar_url": "https://avatars.githubusercontent.com/u/61241031?v=4",
"events_url": "https://api.github.com/users/SauravMaheshkar/events{/privacy}",
"followers_url": "https://api.github.com/users/SauravMaheshkar/followers",
"following_url": "https://api.github.com/users/SauravMaheshkar/following{/other_user}",
"gists_url": "https://api.github.com/users/SauravMaheshkar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SauravMaheshkar",
"id": 61241031,
"login": "SauravMaheshkar",
"node_id": "MDQ6VXNlcjYxMjQxMDMx",
"organizations_url": "https://api.github.com/users/SauravMaheshkar/orgs",
"received_events_url": "https://api.github.com/users/SauravMaheshkar/received_events",
"repos_url": "https://api.github.com/users/SauravMaheshkar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SauravMaheshkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SauravMaheshkar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SauravMaheshkar",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | [
"Hey @SauravMaheshkar ! Just responded to your email.\r\n\r\n_For transparency, copying part of my response here:_\r\nI agree, it would be really great to have this and other BenchMD datasets easily accessible on the hub.\r\n\r\nI think the main limiting factor is that the ChexPert dataset is currently hosted on the Stanford AIMI Shared Datasets website, with a license that does not permit redistribution IIRC. Thus, I believe we would need to create a [dataset loading script](https://huggingface.co/docs/datasets/image_dataset#loading-script) that would check authentication with the Stanford AIMI site before downloading and extracting the data. \r\n\r\nI've started a HF dataset repo [here](https://huggingface.co/datasets/katielink/CheXpert), in case you want to collaborate on writing up this loading script! I'm also happy to take a stab when I have some more time next week.",
"Hey @katielink I would love to try this out. Please guide me.",
"Hi @katielink , I would also love to be on board and contribute to this loading script/project if it is still being developed. I'm interested because I personally would like to gain access to the CheXpert dataset and am facing some weird issues, so I'd like to sort it out for me, and potentially others. Please keep me updated and guide me on this as well!!!"
] | 1970-01-01T00:00:00.000001 | 1,704 | null | NONE | null | ### Feature request
### Name
**CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison**
### Paper
https://arxiv.org/abs/1901.07031
### Data
https://stanfordaimi.azurewebsites.net/datasets/8cbd9ed4-2eb9-4565-affc-111cf4f7ebe2
### Motivation
CheXpert is one of the fundamental models in medical image classification and can serve as a viable pre-training dataset for radiology classification or low-scale ablation / exploratory studies.
This could also serve as a good pre-training dataset for Kaggle competitions.
### Your contribution
Would love to make a PR and pre-process / get this into 🤗 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6382/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6382/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6377/comments | https://api.github.com/repos/huggingface/datasets/issues/6377/events | https://github.com/huggingface/datasets/issues/6377 | 1,973,937,612 | I_kwDODunzps51p-XM | 6,377 | Support pyarrow 14.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,698 | 1970-01-01T00:00:00.000001 | MEMBER | null | Support pyarrow 14.0.0 by fixing the root cause of:
- #6374
and revert:
- #6375 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6377/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6377/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6376/comments | https://api.github.com/repos/huggingface/datasets/issues/6376/events | https://github.com/huggingface/datasets/issues/6376 | 1,973,927,468 | I_kwDODunzps51p74s | 6,376 | Caching problem when deleting a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clefourrier",
"id": 22726840,
"login": "clefourrier",
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clefourrier",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! Can you also share the error message printed in step 5?",
"I did not store it at the time but I'll try to re-do a mwe next week to get it again",
"I haven't managed to reproduce this issue using a [notebook](https://colab.research.google.com/drive/1m6eduYun7pFTkigrCJAFgw0BghlbvXIL?usp=sharing) that follows the steps to reproduce the bug. So, I'm closing it.\r\n\r\nBut feel free to re-open it if you have a better reproducer."
] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | MEMBER | null | ### Describe the bug
Pushing a dataset with n + m features to a repo which was deleted, but contained n features, will fail.
### Steps to reproduce the bug
1. Create a dataset with n features per row
2. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)`
3. Go on the hub, delete the repo at `YOUR_PATH`
4. Update your local dataset to have n + m features per row
5. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)` will fail because of a mismatch in features number
### Expected behavior
Step 5 should work or display a message to indicate the cache has not been cleared
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.16.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6376/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6376/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6374/comments | https://api.github.com/repos/huggingface/datasets/issues/6374/events | https://github.com/huggingface/datasets/issues/6374 | 1,973,857,428 | I_kwDODunzps51pqyU | 6,374 | CI is broken: TypeError: Couldn't cast array | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,698 | 1970-01-01T00:00:00.000001 | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/6730567226/job/18293518039
```
FAILED tests/test_table.py::test_cast_sliced_fixed_size_array_to_features - TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[3]
to
Sequence(feature=Value(dtype='int64', id=None), length=3, id=None)
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6374/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6374/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6371/comments | https://api.github.com/repos/huggingface/datasets/issues/6371/events | https://github.com/huggingface/datasets/issues/6371 | 1,972,807,579 | I_kwDODunzps51lqeb | 6,371 | `Dataset.from_generator` should not try to download from HF GCS | {
"avatar_url": "https://avatars.githubusercontent.com/u/43726198?v=4",
"events_url": "https://api.github.com/users/yundai424/events{/privacy}",
"followers_url": "https://api.github.com/users/yundai424/followers",
"following_url": "https://api.github.com/users/yundai424/following{/other_user}",
"gists_url": "https://api.github.com/users/yundai424/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yundai424",
"id": 43726198,
"login": "yundai424",
"node_id": "MDQ6VXNlcjQzNzI2MTk4",
"organizations_url": "https://api.github.com/users/yundai424/orgs",
"received_events_url": "https://api.github.com/users/yundai424/received_events",
"repos_url": "https://api.github.com/users/yundai424/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yundai424/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yundai424/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yundai424",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Indeed, setting `try_from_gcs` to `False` makes sense for `from_generator`.\r\n\r\nWe plan to deprecate and remove `try_from_hf_gcs` soon, as we can use Hub for file hosting now, but this is a good temporary fix.\r\n"
] | 1970-01-01T00:00:00.000001 | 1,698 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
When using [`Dataset.from_generator`](https://github.com/huggingface/datasets/blob/c9c1166e1cf81d38534020f9c167b326585339e5/src/datasets/arrow_dataset.py#L1072) with `streaming=False`, the internal logic will call [`download_and_prepare`](https://github.com/huggingface/datasets/blob/main/src/datasets/io/generator.py#L47) which will attempt to download from HF GCS which is redundant, because user has already provided the generator from which the data should be drawn.
If someone attempts to call `Dataset.from_generator` from an environment that doesn't have external internet access (for example internal production machine) and doesn't set `HF_DATASETS_OFFLINE=1`, this will result in process being stuck at building connection.
### Steps to reproduce the bug
```python
import datasets
def gen():
for _ in range(100):
yield {"text": "dummy text"}
dataset = datasets.Dataset.from_generator(gen)
```
A minimum example executed on any environment that doesn't have access to HF GCS can result in the error
### Expected behavior
`try_from_hf_gcs` should be set to False here https://github.com/huggingface/datasets/blob/c9c1166e1cf81d38534020f9c167b326585339e5/src/datasets/io/generator.py#L51
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-3.10.0-1160.90.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.12
- Huggingface_hub version: 0.17.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6371/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6371/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6370/comments | https://api.github.com/repos/huggingface/datasets/issues/6370/events | https://github.com/huggingface/datasets/issues/6370 | 1,972,073,909 | I_kwDODunzps51i3W1 | 6,370 | TensorDataset format does not work with Trainer from transformers | {
"avatar_url": "https://avatars.githubusercontent.com/u/49014051?v=4",
"events_url": "https://api.github.com/users/jinzzasol/events{/privacy}",
"followers_url": "https://api.github.com/users/jinzzasol/followers",
"following_url": "https://api.github.com/users/jinzzasol/following{/other_user}",
"gists_url": "https://api.github.com/users/jinzzasol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jinzzasol",
"id": 49014051,
"login": "jinzzasol",
"node_id": "MDQ6VXNlcjQ5MDE0MDUx",
"organizations_url": "https://api.github.com/users/jinzzasol/orgs",
"received_events_url": "https://api.github.com/users/jinzzasol/received_events",
"repos_url": "https://api.github.com/users/jinzzasol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jinzzasol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinzzasol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jinzzasol",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I figured it out. I found that `Trainer` does not work with TensorDataset even though the document says it uses it. Instead, I ended up creating a dictionary and converting it to a dataset using `dataset.Dataset.from_dict()`.\r\n\r\nI will leave this post open for a while. If someone knows a better approach, please leave a comment.",
"Only issues directly related to the HF datasets library should be reported here. ~So, I'm transferring this issue to the `transformers` repo.~ I'm not a `transformers` maintainer, so GitHub doesn't let me transfer it there :(. This means you need to do it manually."
] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The model was built to do fine tunning on BERT model for relation extraction.
trainer.train() returns an error message ```TypeError: vars() argument must have __dict__ attribute``` when it has `train_dataset` generated from `torch.utils.data.TensorDataset`
However, in the document, the required data format is `torch.utils.data.TensorDataset`.

Transformers trainer is supposed to accept the train_dataset in the format of torch.utils.data.TensorDataset, but it returns error message *"TypeError: vars() argument must have __dict__ attribute"*
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-30-5df728c929a2> in <cell line: 1>()
----> 1 trainer.train()
2 trainer.evaluate(test_dataset)
9 frames
/usr/local/lib/python3.10/dist-packages/transformers/data/data_collator.py in <listcomp>(.0)
107
108 if not isinstance(features[0], Mapping):
--> 109 features = [vars(f) for f in features]
110 first = features[0]
111 batch = {}
TypeError: vars() argument must have __dict__ attribute
```
### Steps to reproduce the bug
Create train_dataset using `torch.utils.data.TensorDataset`, for instance,
```train_dataset = torch.utils.data.TensorDataset(train_input_ids, train_attention_masks, train_labels)```
Feed this `train_dataset` to your trainer and run trainer.train
```
trainer = Trainer(model,
training_args,
train_dataset=train_dataset,
eval_dataset=dev_dataset,
compute_metrics=compute_metrics,
)
```
### Expected behavior
Trainer should start training
### Environment info
It is running on Google Colab
- `datasets` version: 2.14.6
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6370/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6370/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6369/comments | https://api.github.com/repos/huggingface/datasets/issues/6369/events | https://github.com/huggingface/datasets/issues/6369 | 1,971,794,108 | I_kwDODunzps51hzC8 | 6,369 | Multi process map did not load cache file correctly | {
"avatar_url": "https://avatars.githubusercontent.com/u/14285786?v=4",
"events_url": "https://api.github.com/users/enze5088/events{/privacy}",
"followers_url": "https://api.github.com/users/enze5088/followers",
"following_url": "https://api.github.com/users/enze5088/following{/other_user}",
"gists_url": "https://api.github.com/users/enze5088/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/enze5088",
"id": 14285786,
"login": "enze5088",
"node_id": "MDQ6VXNlcjE0Mjg1Nzg2",
"organizations_url": "https://api.github.com/users/enze5088/orgs",
"received_events_url": "https://api.github.com/users/enze5088/received_events",
"repos_url": "https://api.github.com/users/enze5088/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/enze5088/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enze5088/subscriptions",
"type": "User",
"url": "https://api.github.com/users/enze5088",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The inconsistency may be caused by the usage of \"update_fingerprint\" and setting \"trust_remote_code\" to \"True.\"\r\nWhen the tokenizer employs \"trust_remote_code,\" the behavior of the map function varies with each code execution. Even if the remote code of the tokenizer remains the same, the result of \"asher.hexdigest()\" is found to be inconsistent each time.\r\nThis may result in different processes executing multiple maps\r\n\r\n\r\n\r\n",
"The issue may be related to problems previously discussed in GitHub issues [#3847](https://github.com/huggingface/datasets/issues/3847) and [#6318](https://github.com/huggingface/datasets/pull/6318). \r\nThis arises from the fact that tokenizer.tokens_trie._tokens is an unordered set, leading to varying hash results:\r\n`value = hash_bytes(dumps(tokenizer.tokens_trie._tokens))`\r\nConsequently, this results in different outcomes each time for:\r\n`new_fingerprint = update_fingerprint(datasets._fingerprint, transform, kwargs_for_fingerprint)`\r\n\r\nTo address this issue, it's essential to make `Trie._tokens` a deterministic set while ensuring a consistent order after the final update of `_tokens`.\r\n",
"We now sort `set` and `dict` items to make their hashes deterministic (install from `main` with `pip install git+https://github.com/huggingface/datasets` to test this). Consequently, this should also make the `tokenizer.tokens_trie`'s hash deterministic. Feel free to re-open the issue if this is not the case."
] | 1970-01-01T00:00:00.000001 | 1,701 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When I was training model on Multiple GPUs by DDP, the dataset is tokenized multiple times after main process.


Code is modified from [run_clm.py](https://github.com/huggingface/transformers/blob/7d8ff3629b2725ec43ace99c1a6e87ac1978d433/examples/pytorch/language-modeling/run_clm.py#L484)
### Steps to reproduce the bug
```
block_size = data_args.block_size
IGNORE_INDEX = -100
Ignore_Input = False
def tokenize_function(examples):
sources = []
targets = []
for instruction, inputs, output in zip(examples['instruction'], examples['input'], examples['output']):
source = instruction + inputs
target = f"{output}{tokenizer.eos_token}"
sources.append(source)
targets.append(target)
tokenized_sources = tokenizer(sources, return_attention_mask=False)
tokenized_targets = tokenizer(targets, return_attention_mask=False,
add_special_tokens=False
)
all_input_ids = []
all_labels = []
for s, t in zip(tokenized_sources['input_ids'], tokenized_targets['input_ids']):
if len(s) > block_size and Ignore_Input == False:
# print(s)
continue
input_ids = torch.LongTensor(s + t)[:block_size]
if Ignore_Input:
labels = torch.LongTensor([IGNORE_INDEX] * len(s) + t)[:block_size]
else:
labels = input_ids
assert len(input_ids) == len(labels)
all_input_ids.append(input_ids)
all_labels.append(labels)
results = {
'input_ids': all_input_ids,
'labels': all_labels,
}
return results
with training_args.main_process_first(desc="dataset map tokenization ", local=False):
# print('local_rank',training_args.local_rank)
if not data_args.streaming:
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
desc="Running tokenizer on dataset ",
)
else:
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
remove_columns=column_names,
desc="Running tokenizer on dataset "
)
```
### Expected behavior
This code should only tokenize the dataset in the main process, and the other processes load the dataset after waiting
### Environment info
transformers == 4.34.1
datasets == 2.14.5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6369/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6369/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6366/comments | https://api.github.com/repos/huggingface/datasets/issues/6366/events | https://github.com/huggingface/datasets/issues/6366 | 1,970,213,490 | I_kwDODunzps51bxJy | 6,366 | with_format() function returns bytes instead of PIL images even when image column is not part of "columns" | {
"avatar_url": "https://avatars.githubusercontent.com/u/17809020?v=4",
"events_url": "https://api.github.com/users/leot13/events{/privacy}",
"followers_url": "https://api.github.com/users/leot13/followers",
"following_url": "https://api.github.com/users/leot13/following{/other_user}",
"gists_url": "https://api.github.com/users/leot13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leot13",
"id": 17809020,
"login": "leot13",
"node_id": "MDQ6VXNlcjE3ODA5MDIw",
"organizations_url": "https://api.github.com/users/leot13/orgs",
"received_events_url": "https://api.github.com/users/leot13/received_events",
"repos_url": "https://api.github.com/users/leot13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leot13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leot13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leot13",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix."
] | 1970-01-01T00:00:00.000001 | 1,698 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When using the with_format() function on a dataset containing images, even if the image column is not part of the columns provided in the function, its type will be changed to bytes.
Here is a minimal reproduction of the bug:
https://colab.research.google.com/drive/1hyaOspgyhB41oiR1-tXE3k_gJCdJUQCf?usp=sharing
### Steps to reproduce the bug
1. Load the image dataset
2. apply with_format(columns=["text"])
3. Check the type of images in the "image" column before and after applying with_format
### Expected behavior
The type should stay the same, but it does not
### Environment info
datasets==2.14.6
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6366/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6365/comments | https://api.github.com/repos/huggingface/datasets/issues/6365/events | https://github.com/huggingface/datasets/issues/6365 | 1,970,140,392 | I_kwDODunzps51bfTo | 6,365 | Parquet size grows exponential for categorical data | {
"avatar_url": "https://avatars.githubusercontent.com/u/82567957?v=4",
"events_url": "https://api.github.com/users/aseganti/events{/privacy}",
"followers_url": "https://api.github.com/users/aseganti/followers",
"following_url": "https://api.github.com/users/aseganti/following{/other_user}",
"gists_url": "https://api.github.com/users/aseganti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aseganti",
"id": 82567957,
"login": "aseganti",
"node_id": "MDQ6VXNlcjgyNTY3OTU3",
"organizations_url": "https://api.github.com/users/aseganti/orgs",
"received_events_url": "https://api.github.com/users/aseganti/received_events",
"repos_url": "https://api.github.com/users/aseganti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aseganti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseganti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aseganti",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Wrong repo."
] | 1970-01-01T00:00:00.000001 | 1,698 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
It seems that when saving a data frame with a categorical column inside the size can grow exponentially.
This seems to happen because when we save the categorical data to parquet, we are saving the data + all the categories existing in the original data. This happens even when the categories are not present in the original data.
### Steps to reproduce the bug
To reproduce the bug, it is enough to run this script:
```
import pandas as pd
import os
if __name__ == "__main__":
for n in [10, 1e2, 1e3, 1e4, 1e5]:
for n_col in [1, 10, 100, 1000, 10000]:
input = pd.DataFrame([{"{i}": f"{i}_cat" for col in range(n_col)} for i in range(int(n))])
input.iloc[0:100].to_parquet("a.parquet")
for col in input.columns:
input[col] = input[col].astype("category")
input.iloc[0:100].to_parquet("b.parquet")
a_size_mb = os.stat("a.parquet").st_size / (1024 * 1024)
b_size_mb = os.stat("b.parquet").st_size / (1024 * 1024)
print(f"{n} {n_col} {a_size_mb} {b_size_mb} {100*b_size_mb/a_size_mb:.2f}")
```
That produces this output:
<img width="464" alt="Screenshot 2023-10-31 at 11 25 25" src="https://github.com/huggingface/datasets/assets/82567957/2b8a9284-7f9e-4c10-a006-0a27236ebd15">
### Expected behavior
In my opinion either:
1. The two file should have (almost) the same size
2. There should be warning telling the user that such difference in size is possible
### Environment info
Python 3.8.18
pandas==2.0.3
numpy==1.24.4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/82567957?v=4",
"events_url": "https://api.github.com/users/aseganti/events{/privacy}",
"followers_url": "https://api.github.com/users/aseganti/followers",
"following_url": "https://api.github.com/users/aseganti/following{/other_user}",
"gists_url": "https://api.github.com/users/aseganti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aseganti",
"id": 82567957,
"login": "aseganti",
"node_id": "MDQ6VXNlcjgyNTY3OTU3",
"organizations_url": "https://api.github.com/users/aseganti/orgs",
"received_events_url": "https://api.github.com/users/aseganti/received_events",
"repos_url": "https://api.github.com/users/aseganti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aseganti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseganti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aseganti",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6365/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6365/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6364/comments | https://api.github.com/repos/huggingface/datasets/issues/6364/events | https://github.com/huggingface/datasets/issues/6364 | 1,969,136,106 | I_kwDODunzps51XqHq | 6,364 | ArrowNotImplementedError: Unsupported cast from string to list using function cast_list | {
"avatar_url": "https://avatars.githubusercontent.com/u/32887094?v=4",
"events_url": "https://api.github.com/users/divyakrishna-devisetty/events{/privacy}",
"followers_url": "https://api.github.com/users/divyakrishna-devisetty/followers",
"following_url": "https://api.github.com/users/divyakrishna-devisetty/following{/other_user}",
"gists_url": "https://api.github.com/users/divyakrishna-devisetty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/divyakrishna-devisetty",
"id": 32887094,
"login": "divyakrishna-devisetty",
"node_id": "MDQ6VXNlcjMyODg3MDk0",
"organizations_url": "https://api.github.com/users/divyakrishna-devisetty/orgs",
"received_events_url": "https://api.github.com/users/divyakrishna-devisetty/received_events",
"repos_url": "https://api.github.com/users/divyakrishna-devisetty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/divyakrishna-devisetty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyakrishna-devisetty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/divyakrishna-devisetty",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"You can use the following code to load this CSV with the list values preserved:\r\n```python\r\nfrom datasets import load_dataset\r\nimport ast\r\n\r\nconverters = {\r\n \"contexts\" : ast.literal_eval,\r\n \"ground_truths\" : ast.literal_eval,\r\n}\r\n\r\nds = load_dataset(\"csv\", data_files=\"golden_dataset.csv\", converters=converters)\r\n```",
"Thank you! it worked :)"
] | 1970-01-01T00:00:00.000001 | 1,698 | 1970-01-01T00:00:00.000001 | NONE | null | Hi,
I am trying to load a local csv dataset(similar to explodinggradients_fiqa) using load_dataset. When I try to pass features, I am facing the mentioned issue.
CSV Data sample(golden_dataset.csv):
Question | Context | answer | groundtruth
"what is abc?" | "abc is this and that" | "abc is this " | "abc is this and that"
```
import csv
# built it based on https://huggingface.co/datasets/explodinggradients/fiqa/viewer/ragas_eval?row=0
mydict = [
{'question' : "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this " , 'groundtruth': ["abc is this and that"]},
{'question' : "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this " , 'groundtruth': ["abc is this and that"]},
{'question' : "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this " , 'groundtruth': ["abc is this and that"]}
]
fields = ['question', 'contexts', 'answer', 'ground_truths']
with open('golden_dataset.csv', 'w', newline='\n') as file:
writer = csv.DictWriter(file, fieldnames = fields)
writer.writeheader()
for row in mydict:
writer.writerow(row)
```
Retrieved dataset:
DatasetDict({
train: Dataset({
features: ['question', 'contexts', 'answer', 'ground_truths'],
num_rows: 1
})
})
Code to reproduce issue:
```
from datasets import load_dataset, Features, Sequence, Value
encode_features = Features(
{
"question": Value(dtype='string', id=0),
"contexts": Sequence(feature=Value(dtype='string', id=1)),
"answer": Value(dtype='string', id=2),
"ground_truths": Sequence(feature=Value(dtype='string',id=3)),
}
)
eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv', features = encode_features )
```
Error trace:
```
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1925, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1924 _time = time.time()
-> 1925 for _, table in generator:
1926 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:192, in Csv._generate_tables(self, files)
189 # Uncomment for debugging (will print the Arrow table size and elements)
190 # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}")
191 # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))
--> 192 yield (file_idx, batch_idx), self._cast_table(pa_table)
193 except ValueError as e:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:167, in Csv._cast_table(self, pa_table)
165 if all(not require_storage_cast(feature) for feature in self.config.features.values()):
166 # cheaper cast
--> 167 pa_table = pa.Table.from_arrays([pa_table[field.name] for field in schema], schema=schema)
168 else:
169 # more expensive cast; allows str <-> int/float or str to Audio for example
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:3781, in pyarrow.lib.Table.from_arrays()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:1449, in pyarrow.lib._sanitize_arrays()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/array.pxi:354, in pyarrow.lib.asarray()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:551, in pyarrow.lib.ChunkedArray.cast()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/compute.py:400, in cast(arr, target_type, safe, options, memory_pool)
399 options = CastOptions.safe(target_type)
--> 400 return call_function("cast", [arr], options, memory_pool)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/_compute.pyx:572, in pyarrow._compute.call_function()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/_compute.pyx:367, in pyarrow._compute.Function.call()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[57], line 1
----> 1 eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv', features = encode_features )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/load.py:2153, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2150 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
2152 # Download and prepare data
-> 2153 builder_instance.download_and_prepare(
2154 download_config=download_config,
2155 download_mode=download_mode,
2156 verification_mode=verification_mode,
2157 try_from_hf_gcs=try_from_hf_gcs,
2158 num_proc=num_proc,
2159 storage_options=storage_options,
2160 )
2162 # Build dataset for splits
2163 keep_in_memory = (
2164 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2165 )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
957 **prepare_split_kwargs,
958 **download_and_prepare_kwargs,
959 )
960 # Sync info
961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1049, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1045 split_dict.add(split_generator.split_info)
1047 try:
1048 # Prepare split will record examples associated to the split
-> 1049 self._prepare_split(split_generator, **prepare_split_kwargs)
1050 except OSError as e:
1051 raise OSError(
1052 "Cannot find data file. "
1053 + (self.manual_download_instructions or "")
1054 + "\nOriginal error:\n"
1055 + str(e)
1056 ) from None
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1813, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1811 job_id = 0
1812 with pbar:
-> 1813 for job_id, done, content in self._prepare_split_single(
1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1815 ):
1816 if done:
1817 result = content
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1958, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1957 e = e.__context__
-> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
Environment Info:
datasets version: 2.14.5
Python version: 3.10.8
PyArrow version: 12.0.1
Pandas version: 2.0.3
I have also tried to load dataset first and then use cast_column, or save_to_disk and load_from_disk. | {
"avatar_url": "https://avatars.githubusercontent.com/u/32887094?v=4",
"events_url": "https://api.github.com/users/divyakrishna-devisetty/events{/privacy}",
"followers_url": "https://api.github.com/users/divyakrishna-devisetty/followers",
"following_url": "https://api.github.com/users/divyakrishna-devisetty/following{/other_user}",
"gists_url": "https://api.github.com/users/divyakrishna-devisetty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/divyakrishna-devisetty",
"id": 32887094,
"login": "divyakrishna-devisetty",
"node_id": "MDQ6VXNlcjMyODg3MDk0",
"organizations_url": "https://api.github.com/users/divyakrishna-devisetty/orgs",
"received_events_url": "https://api.github.com/users/divyakrishna-devisetty/received_events",
"repos_url": "https://api.github.com/users/divyakrishna-devisetty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/divyakrishna-devisetty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyakrishna-devisetty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/divyakrishna-devisetty",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6364/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6364/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6363/comments | https://api.github.com/repos/huggingface/datasets/issues/6363/events | https://github.com/huggingface/datasets/issues/6363 | 1,968,891,277 | I_kwDODunzps51WuWN | 6,363 | dataset.transform() hangs indefinitely while finetuning the stable diffusion XL | {
"avatar_url": "https://avatars.githubusercontent.com/u/10846405?v=4",
"events_url": "https://api.github.com/users/bhosalems/events{/privacy}",
"followers_url": "https://api.github.com/users/bhosalems/followers",
"following_url": "https://api.github.com/users/bhosalems/following{/other_user}",
"gists_url": "https://api.github.com/users/bhosalems/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhosalems",
"id": 10846405,
"login": "bhosalems",
"node_id": "MDQ6VXNlcjEwODQ2NDA1",
"organizations_url": "https://api.github.com/users/bhosalems/orgs",
"received_events_url": "https://api.github.com/users/bhosalems/received_events",
"repos_url": "https://api.github.com/users/bhosalems/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhosalems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhosalems/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhosalems",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I think the code hangs on the `accelerator.main_process_first()` context manager exit. To verify this, you can append a print statement to the end of the `accelerator.main_process_first()` block. \r\n\r\n\r\nIf the problem is in `with_transform`, it would help if you could share the error stack trace printed when you interrupt the process (while it hangs)",
"@bhosalems Were you able to fix that ? I face this issue as well",
"@matankley No I am not able to resolve this issue yet.",
"@mariosasko yes the problem seems to be to exit from accelerator.main_process_first(). Is there any known problem?",
"NCCL debug info I get below output, if it helps.\r\n```\r\n11/09/2023 13:36:44 - INFO - __main__ - Distributed environment: MULTI_GPU Backend: nccl\r\nNum processes: 2\r\nProcess index: 1\r\nLocal process index: 1\r\nDevice: cuda:1\r\n\r\nMixed precision type: fp16\r\n\r\nDetected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.\r\n11/09/2023 13:36:44 - INFO - __main__ - Distributed environment: MULTI_GPU Backend: nccl\r\nNum processes: 2\r\nProcess index: 0\r\nLocal process index: 0\r\nDevice: cuda:0\r\n\r\nMixed precision type: fp16\r\n\r\n{'timestep_spacing', 'thresholding', 'variance_type', 'clip_sample_range', 'prediction_type', 'dynamic_thresholding_ratio', 'sample_max_value'} was not found in config. Values will be initialized to default values.\r\n{'norm_num_groups', 'force_upcast'} was not found in config. Values will be initialized to default values.\r\n{'num_attention_heads', 'projection_class_embeddings_input_dim', 'addition_embed_type_num_heads', 'mid_block_only_cross_attention', 'addition_embed_type', 'num_class_embeds', 'upcast_attention', 'cross_attention_norm', 'addition_time_embed_dim', 'time_embedding_dim', 'class_embeddings_concat', 'encoder_hid_dim', 'encoder_hid_dim_type', 'resnet_out_scale_factor', 'attention_type', 'conv_out_kernel', 'only_cross_attention', 'resnet_time_scale_shift', 'resnet_skip_time_act', 'reverse_transformer_layers_per_block', 'conv_in_kernel', 'time_cond_proj_dim', 'use_linear_projection', 'mid_block_type', 'time_embedding_act_fn', 'dropout', 'timestep_post_act', 'dual_cross_attention', 'class_embed_type', 'transformer_layers_per_block', 'time_embedding_type'} was not found in config. Values will be initialized to default values.\r\n{'num_attention_heads', 'projection_class_embeddings_input_dim', 'addition_embed_type_num_heads', 'mid_block_only_cross_attention', 'addition_embed_type', 'num_class_embeds', 'upcast_attention', 'cross_attention_norm', 'addition_time_embed_dim', 'time_embedding_dim', 'class_embeddings_concat', 'encoder_hid_dim', 'encoder_hid_dim_type', 'resnet_out_scale_factor', 'attention_type', 'conv_out_kernel', 'only_cross_attention', 'resnet_time_scale_shift', 'resnet_skip_time_act', 'reverse_transformer_layers_per_block', 'conv_in_kernel', 'time_cond_proj_dim', 'use_linear_projection', 'mid_block_type', 'time_embedding_act_fn', 'dropout', 'timestep_post_act', 'dual_cross_attention', 'class_embed_type', 'transformer_layers_per_block', 'time_embedding_type'} was not found in config. Values will be initialized to default values.\r\ndeepbull5:1311249:1311249 [0] NCCL INFO Bootstrap : Using enp194s0f0:128.205.43.171<0>\r\ndeepbull5:1311249:1311249 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation\r\ndeepbull5:1311249:1311249 [0] NCCL INFO cudaDriverVersion 11070\r\nNCCL version 2.14.3+cuda11.7\r\ndeepbull5:1311250:1311250 [1] NCCL INFO cudaDriverVersion 11070\r\ndeepbull5:1311249:1311365 [0] NCCL INFO NET/IB : No device found.\r\ndeepbull5:1311249:1311365 [0] NCCL INFO NET/Socket : Using [0]enp194s0f0:128.205.43.171<0>\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Using network Socket\r\ndeepbull5:1311250:1311250 [1] NCCL INFO Bootstrap : Using enp194s0f0:128.205.43.171<0>\r\ndeepbull5:1311250:1311250 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation\r\ndeepbull5:1311250:1311366 [1] NCCL INFO NET/IB : No device found.\r\ndeepbull5:1311250:1311366 [1] NCCL INFO NET/Socket : Using [0]enp194s0f0:128.205.43.171<0>\r\ndeepbull5:1311250:1311366 [1] NCCL INFO Using network Socket\r\ndeepbull5:1311250:1311366 [1] NCCL INFO Setting affinity for GPU 1 to ff,ffff0000,00ffffff\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Setting affinity for GPU 0 to ff,ffff0000,00ffffff\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Channel 00/04 : 0 1\r\ndeepbull5:1311250:1311366 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Channel 01/04 : 0 1\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Channel 02/04 : 0 1\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Channel 03/04 : 0 1\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Channel 00/0 : 0[1000] -> 1[24000] via P2P/IPC\r\ndeepbull5:1311250:1311366 [1] NCCL INFO Channel 00/0 : 1[24000] -> 0[1000] via P2P/IPC\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Channel 01/0 : 0[1000] -> 1[24000] via P2P/IPC\r\ndeepbull5:1311250:1311366 [1] NCCL INFO Channel 01/0 : 1[24000] -> 0[1000] via P2P/IPC\r\ndeepbull5:1311250:1311366 [1] NCCL INFO Channel 02/0 : 1[24000] -> 0[1000] via P2P/IPC\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Channel 02/0 : 0[1000] -> 1[24000] via P2P/IPC\r\ndeepbull5:1311250:1311366 [1] NCCL INFO Channel 03/0 : 1[24000] -> 0[1000] via P2P/IPC\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Channel 03/0 : 0[1000] -> 1[24000] via P2P/IPC\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Connected all rings\r\ndeepbull5:1311249:1311365 [0] NCCL INFO Connected all trees\r\ndeepbull5:1311249:1311365 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512\r\ndeepbull5:1311249:1311365 [0] NCCL INFO 4 coll channels, 4 p2p channels, 2 p2p channels per peer\r\ndeepbull5:1311250:1311366 [1] NCCL INFO Connected all rings\r\ndeepbull5:1311250:1311366 [1] NCCL INFO Connected all trees\r\ndeepbull5:1311250:1311366 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512\r\ndeepbull5:1311250:1311366 [1] NCCL INFO 4 coll channels, 4 p2p channels, 2 p2p channels per peer\r\ndeepbull5:1311249:1311365 [0] NCCL INFO comm 0x88a84ee0 rank 0 nranks 2 cudaDev 0 busId 1000 - Init COMPLETE\r\ndeepbull5:1311250:1311366 [1] NCCL INFO comm 0x89a42f60 rank 1 nranks 2 cudaDev 1 busId 24000 - Init COMPLETE\r\n\r\n```",
"Maybe @muellerzr can help as an `accelerate` maintainer.",
"I don't know what the issue was, but after going through the thread here I loved the issue with https://github.com/huggingface/accelerate/issues/314#issuecomment-1565259831"
] | 1970-01-01T00:00:00.000001 | 1,700 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Multi-GPU fine-tuning the stable diffusion X by following https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/README_sdxl.md hangs indefinitely.
### Steps to reproduce the bug
accelerate launch train_text_to_image_sdxl.py --pretrained_model_name_or_path=$MODEL_NAME --pretrained_vae_model_name_or_path=$VAE_NAME --dataset_name=$DATASET_NAME --enable_xformers_memory_efficient_attention --resolution=512 --center_crop --random_flip --proportion_empty_prompts=0.2 --train_batch_size=1 --gradient_accumulation_steps=4 --gradient_checkpointing --max_train_steps=10000 --use_8bit_adam --learning_rate=1e-06 --lr_scheduler="constant" --lr_warmup_steps=0 --mixed_precision="fp16" --report_to="wandb" --validation_prompt="a cute Sundar Pichai creature" --validation_epochs 5 --checkpointing_steps=5000 --output_dir="sdxl-pokemon-model"
### Expected behavior
It should start the training as it does for the single GPU training. I opened the issue in diffusers **https://github.com/huggingface/diffusers/issues/5534 but it does seem to be an issue with the Pokemon dataset.
I added some debug prints
```
print("==========HERE3=============")
with accelerator.main_process_first():
print(accelerator.is_main_process)
print("===========Here3.1===========")
if args.max_train_samples is not None:
dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
print("===========Here3.2===========")
# Set the training transforms
train_dataset = dataset["train"].with_transform(preprocess_train)
print("==========HERE4=============")
Corresponding Output
Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
10/25/2023 21:18:04 - INFO - main - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 3
Process index: 1
Local process index: 1
Device: cuda:1
Mixed precision type: fp16
10/25/2023 21:18:04 - INFO - main - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 3
Process index: 2
Local process index: 2
Device: cuda:2
Mixed precision type: fp16
10/25/2023 21:18:04 - INFO - main - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 3
Process index: 0
Local process index: 0
Device: cuda:0
Mixed precision type: fp16
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{‘variance_type’, ‘clip_sample_range’, ‘thresholding’, ‘dynamic_thresholding_ratio’} was not found in config. Values will be initialized to default values.
{‘attention_type’, ‘reverse_transformer_layers_per_block’, ‘dropout’} was not found in config. Values will be initialized to default values.
==========HERE1=============
==========HERE1=============
==========HERE1=============
==========HERE2=============
==========HERE2=============
==========HERE2=============
==========HERE3=============
True
===========Here3.1===========
===========Here3.2===========
==========HERE3=============
==========HERE3=========
```
### Environment info
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_kmp_llvm conda-forge
absl-py 2.0.0 pypi_0 pypi
accelerate 0.24.0 pypi_0 pypi
aiohttp 3.8.6 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
appdirs 1.4.4 pyh9f0ad1d_0 conda-forge
async-timeout 4.0.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
bitsandbytes 0.41.1 pypi_0 pypi
blas 1.0 mkl
blessings 1.7 py39h06a4308_1002
brotli-python 1.0.9 py39h6a678d5_7
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.08.22 h06a4308_0
cachetools 5.3.2 pypi_0 pypi
certifi 2023.7.22 py39h06a4308_0
cffi 1.15.1 py39h5eee18b_3
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.1.7 unix_pyh707e725_0 conda-forge
cryptography 41.0.3 py39hdda0065_0
cuda-cudart 11.7.99 0 nvidia
cuda-cupti 11.7.101 0 nvidia
cuda-libraries 11.7.1 0 nvidia
cuda-nvrtc 11.7.99 0 nvidia
cuda-nvtx 11.7.91 0 nvidia
cuda-runtime 11.7.1 0 nvidia
datasets 2.14.6 pypi_0 pypi
diffusers 0.22.0.dev0 pypi_0 pypi
dill 0.3.7 pypi_0 pypi
docker-pycreds 0.4.0 py_0 conda-forge
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.12.4 pypi_0 pypi
freetype 2.12.1 h4a9f257_0
frozenlist 1.4.0 pypi_0 pypi
fsspec 2023.10.0 pypi_0 pypi
ftfy 6.1.1 pypi_0 pypi
giflib 5.2.1 h5eee18b_3
gitdb 4.0.11 pyhd8ed1ab_0 conda-forge
gitpython 3.1.40 pyhd8ed1ab_0 conda-forge
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
google-auth 2.23.3 pypi_0 pypi
google-auth-oauthlib 1.1.0 pypi_0 pypi
gpustat 0.6.0 pyhd3eb1b0_1
grpcio 1.59.0 pypi_0 pypi
huggingface-hub 0.17.3 pypi_0 pypi
idna 3.4 py39h06a4308_0
importlib-metadata 6.8.0 pypi_0 pypi
intel-openmp 2023.1.0 hdb19cb5_46305
jinja2 3.1.2 pypi_0 pypi
jpeg 9e h5eee18b_1
lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libcublas 11.10.3.66 0 nvidia
libcufft 10.7.2.124 h4fbf590_0 nvidia
libcufile 1.8.0.34 0 nvidia
libcurand 10.3.4.52 0 nvidia
libcusolver 11.4.0.1 0 nvidia
libcusparse 11.7.4.91 0 nvidia
libdeflate 1.17 h5eee18b_1
libffi 3.4.4 h6a678d5_0
libgcc-ng 13.2.0 h807b86a_2 conda-forge
libgfortran-ng 13.2.0 h69a702a_2 conda-forge
libgfortran5 13.2.0 ha4646dd_2 conda-forge
libiconv 1.16 h7f8727e_2
libidn2 2.3.4 h5eee18b_0
libnpp 11.7.4.75 0 nvidia
libnvjpeg 11.8.0.2 0 nvidia
libpng 1.6.39 h5eee18b_0
libprotobuf 3.20.3 he621ea3_0
libstdcxx-ng 13.2.0 h7e041cc_2 conda-forge
libtasn1 4.19.0 h5eee18b_0
libtiff 4.5.1 h6a678d5_0
libunistring 0.9.10 h27cfd23_0
libwebp 1.3.2 h11a3e52_0
libwebp-base 1.3.2 h5eee18b_0
llvm-openmp 14.0.6 h9e868ea_0
lz4-c 1.9.4 h6a678d5_0
markdown 3.5 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
mkl 2023.1.0 h213fc3f_46343
mkl-service 2.4.0 py39h5eee18b_1
mkl_fft 1.3.8 py39h5eee18b_0
mkl_random 1.2.4 py39hdb19cb5_0
multidict 6.0.4 pypi_0 pypi
multiprocess 0.70.15 pypi_0 pypi
ncurses 6.4 h6a678d5_0
nettle 3.7.3 hbbd107a_1
numpy 1.26.0 py39h5f9d8c6_0
numpy-base 1.26.0 py39hb5e798b_0
nvidia-ml 7.352.0 pyhd3eb1b0_0
oauthlib 3.2.2 pypi_0 pypi
openh264 2.1.1 h4ff587b_0
openjpeg 2.4.0 h3ad879b_0
openssl 3.0.11 h7f8727e_2
packaging 23.2 pypi_0 pypi
pandas 2.1.1 pypi_0 pypi
pathtools 0.1.2 py_1 conda-forge
pillow 10.0.1 py39ha6cbd5a_0
pip 23.3 py39h06a4308_0
protobuf 4.23.4 pypi_0 pypi
psutil 5.9.6 pypi_0 pypi
pyarrow 13.0.0 pypi_0 pypi
pyasn1 0.5.0 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pyopenssl 23.2.0 py39h06a4308_0
pysocks 1.7.1 py39h06a4308_0
python 3.9.18 h955ad1f_0
python-dateutil 2.8.2 pypi_0 pypi
python_abi 3.9 2_cp39 conda-forge
pytorch 1.13.1 py3.9_cuda11.7_cudnn8.5.0_0 pytorch
pytorch-cuda 11.7 h778d358_5 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2023.3.post1 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
readline 8.2 h5eee18b_0
regex 2023.10.3 pypi_0 pypi
requests 2.31.0 py39h06a4308_0
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
safetensors 0.4.0 pypi_0 pypi
scipy 1.11.3 py39h5f9d8c6_0
sentry-sdk 1.32.0 pyhd8ed1ab_0 conda-forge
setproctitle 1.1.10 py39h3811e60_1004 conda-forge
setuptools 68.0.0 py39h06a4308_0
six 1.16.0 pyh6c4a22f_0 conda-forge
smmap 5.0.0 pyhd8ed1ab_0 conda-forge
sqlite 3.41.2 h5eee18b_0
tbb 2021.8.0 hdb19cb5_0
tensorboard 2.15.0 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tokenizers 0.14.1 pypi_0 pypi
torchaudio 0.13.1 py39_cu117 pytorch
torchtriton 2.1.0 py39 pytorch
torchvision 0.14.1 py39_cu117 pytorch
tqdm 4.66.1 pypi_0 pypi
transformers 4.34.1 pypi_0 pypi
typing_extensions 4.7.1 py39h06a4308_0
tzdata 2023.3 pypi_0 pypi
urllib3 1.26.18 py39h06a4308_0
wandb 0.15.12 pyhd8ed1ab_0 conda-forge
wcwidth 0.2.8 pypi_0 pypi
werkzeug 3.0.1 pypi_0 pypi
wheel 0.41.2 py39h06a4308_0
xformers 0.0.22.post7 py39_cu11.7.1_pyt1.13.1 xformers
xxhash 3.4.1 pypi_0 pypi
xz 5.4.2 h5eee18b_0
yaml 0.2.5 h7f98852_2 conda-forge
yarl 1.9.2 pypi_0 pypi
zipp 3.17.0 pypi_0 pypi
zlib 1.2.13 h5eee18b_0
zstd 1.5.5 hc292b87_0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10846405?v=4",
"events_url": "https://api.github.com/users/bhosalems/events{/privacy}",
"followers_url": "https://api.github.com/users/bhosalems/followers",
"following_url": "https://api.github.com/users/bhosalems/following{/other_user}",
"gists_url": "https://api.github.com/users/bhosalems/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhosalems",
"id": 10846405,
"login": "bhosalems",
"node_id": "MDQ6VXNlcjEwODQ2NDA1",
"organizations_url": "https://api.github.com/users/bhosalems/orgs",
"received_events_url": "https://api.github.com/users/bhosalems/received_events",
"repos_url": "https://api.github.com/users/bhosalems/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhosalems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhosalems/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhosalems",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6363/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6363/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6360/comments | https://api.github.com/repos/huggingface/datasets/issues/6360/events | https://github.com/huggingface/datasets/issues/6360 | 1,965,672,950 | I_kwDODunzps51Kcn2 | 6,360 | Add support for `Sequence(Audio/Image)` feature in `push_to_hub` | {
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Laurent2916",
"id": 21087104,
"login": "Laurent2916",
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Laurent2916",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] | null | [
"This issue stems from https://github.com/huggingface/datasets/blob/6d2f2a5e0fea3827eccfd1717d8021c15fc4292a/src/datasets/table.py#L2203-L2205\r\n\r\nI'll address it as part of https://github.com/huggingface/datasets/pull/6283.\r\n\r\nIn the meantime, this should work\r\n\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets import Image\r\n\r\ndataset = dataset.with_format(\"arrow\")\r\n\r\ndef embed_images(pa_table):\r\n images_arr = pa.chunked_array(\r\n [\r\n pa.ListArray.from_arrays(chunk.offsets, Image().embed_storage(chunk.values), mask=chunk.is_null())\r\n for chunk in pa_table[\"images\"].chunks\r\n ]\r\n )\r\n return pa_table.set_column(pa_table.schema.get_field_index(\"images\"), \"images\", images_arr)\r\n\r\ndataset = dataset.map(embed_images, batched=True)\r\n\r\ndataset = dataset.with_format(\"python\")\r\n\r\ndataset.push_to_hub(...)\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Feature request
Allow for `Sequence` of `Image` (or `Audio`) to be embedded inside the shards.
### Motivation
Currently, thanks to #3685, when `embed_external_files` is set to True (which is the default) in `push_to_hub`, features of type `Image` and `Audio` are embedded inside the arrow/parquet shards, instead of only storing paths to the files.
I've noticed that this behavior does not extend to `Sequence` of `Image`, when working with a [dataset of timelapse images](https://huggingface.co/datasets/1aurent/Human-Embryo-Timelapse).
### Your contribution
I'll submit a PR if I find a way to add this feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6360/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6360/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6359/comments | https://api.github.com/repos/huggingface/datasets/issues/6359/events | https://github.com/huggingface/datasets/issues/6359 | 1,965,378,583 | I_kwDODunzps51JUwX | 6,359 | Stuck in "Resolving data files..." | {
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Luciennnnnnn",
"id": 20135317,
"login": "Luciennnnnnn",
"node_id": "MDQ6VXNlcjIwMTM1MzE3",
"organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs",
"received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events",
"repos_url": "https://api.github.com/users/Luciennnnnnn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Luciennnnnnn",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Most likely, the data file inference logic is the problem here.\r\n\r\nYou can run the following code to verify this:\r\n```python\r\nimport time\r\nfrom datasets.data_files import get_data_patterns\r\nstart_time = time.time()\r\nget_data_patterns(\"/path/to/img_dir\")\r\nend_time = time.time()\r\nprint(f\"Elapsed time: {end_time - start_time:.2f}s\")\r\n```\r\n \r\nWe plan to optimize this for the next version (or version after that). In the meantime, specifying the split patterns manually should give better performance:\r\n```python\r\nds = load_dataset(\"imagefolder\", data_files={\"train\": \"path/to/img_dir/train/**\", ...}, split=\"train\")\r\n```",
"Hi, @mariosasko, you are right; data file inference logic is extremely slow.\r\n\r\nI have done a similar test, that is I modify the source code of datasets/load.py to measure the cost of two suspicious operations:\r\n```python\r\ndef get_module(self) -> DatasetModule:\r\n base_path = Path(self.data_dir or \"\").expanduser().resolve().as_posix()\r\n start = time.time()\r\n patterns = sanitize_patterns(self.data_files) if self.data_files is not None else get_data_patterns(base_path)\r\n print(f\"patterns: {time.time() - start}\")\r\n start = time.time()\r\n data_files = DataFilesDict.from_patterns(\r\n patterns,\r\n download_config=self.download_config,\r\n base_path=base_path,\r\n )\r\n print(f\"data_files: {time.time() - start}\")\r\n```\r\nIt gaves:\r\npatterns: 3062.2050700187683\r\ndata_files: 413.9576675891876\r\n\r\nThus, these two operations contribute to almost all of load time. What's going on in them?",
"Furthermore, what's my current workaround about this problem? Should I save it by `save_to_disk()` and load dataset through `load_from_disk`?",
"were you able to solve this issue?, I am facing the same issue"
] | 1970-01-01T00:00:00.000001 | 1,706 | null | NONE | null | ### Describe the bug
I have an image dataset with 300k images, the size of image is 768 * 768.
When I run `dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')` in second time, it takes 50 minutes to finish "Resolving data files" part, what's going on in this part?
From my understand, after Arrow files been created in the first run, the second run should not take time longer than one or two minutes.
### Steps to reproduce the bug
# Run following code two times
dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')
### Expected behavior
Fast dataset building
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6359/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6359/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6358/comments | https://api.github.com/repos/huggingface/datasets/issues/6358/events | https://github.com/huggingface/datasets/issues/6358 | 1,965,014,595 | I_kwDODunzps51H75D | 6,358 | Mounting datasets cache fails due to absolute paths. | {
"avatar_url": "https://avatars.githubusercontent.com/u/72921588?v=4",
"events_url": "https://api.github.com/users/charliebudd/events{/privacy}",
"followers_url": "https://api.github.com/users/charliebudd/followers",
"following_url": "https://api.github.com/users/charliebudd/following{/other_user}",
"gists_url": "https://api.github.com/users/charliebudd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/charliebudd",
"id": 72921588,
"login": "charliebudd",
"node_id": "MDQ6VXNlcjcyOTIxNTg4",
"organizations_url": "https://api.github.com/users/charliebudd/orgs",
"received_events_url": "https://api.github.com/users/charliebudd/received_events",
"repos_url": "https://api.github.com/users/charliebudd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/charliebudd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charliebudd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/charliebudd",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"You may be able to make it work by tweaking some environment variables, such as [`HF_HOME`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#hfhome) or [`HF_DATASETS_CACHE`](https://huggingface.co/docs/datasets/cache#cache-directory).",
"> You may be able to make it work by tweaking some environment variables, such as [`HF_HOME`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#hfhome) or [`HF_DATASETS_CACHE`](https://huggingface.co/docs/datasets/cache#cache-directory).\r\n\r\nI am already doing this. The problem is that, while this seemingly allows flexibility, the absolute paths written into the cache still have the old cache directory. The paths written into the cache should be relative to the cache location to allow this sort of flexibility. Sorry, I omitted this in the reproduction steps, I have now added it.",
"I'm unable to reproduce this with the cache\r\n```bash\r\nexport HF_CACHE=$PWD/hf_cache\r\npython -c \"import datasets; datasets.load_dataset('imdb')\"\r\n```\r\nimported inside a dummy container that is built from\r\n```bash\r\nFROM python:3.9\r\n\r\nWORKDIR /usr/src/app\r\n\r\nRUN pip install datasets\r\n\r\nCOPY ./hf_cache ./hf_cache\r\n\r\nENV HF_HOME=./hf_cache\r\nENV HF_DATASETS_OFFLINE=1\r\n\r\nCMD [\"python\"]\r\n```\r\nWhat do you mean by \"absolute paths written into the cache\"? Paths inside the HF cache paths are based on hash (hashed URL of the downloaded files, etc.)",
"@mariosasko Same problem: the absolute paths written into the cache still have the old cache directory. Like:\r\n\r\n{'bytes': None, 'path': 'E:\\\\work-20240321\\\\datasets\\\\downloads\\\\extracted\\\\9752883596854dc57e01c74cc3f494b2ba63754dadd9e77f9d1932deddbd2273\\\\58f33a03-026f-4adc-b69f-b89d16b9f35a.webp'}\r\n\r\nWhen I move this cached directory to another directory, these datasets cannot be used casue path changes. So, the paths written into the cache should be relative to the cache location to allow this sort of flexibility. ",
"Sorry, the reply on this thread escaped my attention. The problem with @mariosasko's attempted reproduction is the absolute path `./hf_cache` is the same in the host system and the docker container, so naturally the paths would be correct. Modifying the docker image as below should reproduce the error...\r\n\r\n```\r\nFROM python:3.9\r\n\r\nWORKDIR /usr/src/app\r\n\r\nRUN pip install datasets\r\n\r\nCOPY ./hf_cache ./my_cache/\r\n\r\nENV HF_HOME=./my_cache/\r\nENV HF_DATASETS_OFFLINE=1\r\n\r\nCMD [\"python\"]\r\n```\r\n\r\nThe paths written inside the cache will still have `./hf_cache` prefixing all the paths. If they were relative paths (relative to the top level of the cache) this would be avoided."
] | 1970-01-01T00:00:00.000001 | 1,712 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Creating a datasets cache and mounting this into, for example, a docker container, renders the data unreadable due to absolute paths written into the cache.
### Steps to reproduce the bug
1. Create a datasets cache by downloading some data
2. Mount the dataset folder into a docker container or remote system.
3. (Edit) Set `HF_HOME` or `HF_DATASET_CACHE` to point to the mounted cache.
4. Attempt to access the data from within the docker container.
5. An error is thrown saying no file exists at \<absolute path to original cache location\>
### Expected behavior
The data is loaded without error
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.4.0-162-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6358/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6358/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6357/comments | https://api.github.com/repos/huggingface/datasets/issues/6357/events | https://github.com/huggingface/datasets/issues/6357 | 1,964,653,995 | I_kwDODunzps51Gj2r | 6,357 | Allow passing a multiprocessing context to functions that support `num_proc` | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,698 | null | CONTRIBUTOR | null | ### Feature request
Allow specifying [a multiprocessing context](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) to functions that support `num_proc` or use multiprocessing pools. For example, the following could be done:
```python
dataset = dataset.map(_func, num_proc=2, mp_context=multiprocess.get_context("spawn"))
```
Or at least the multiprocessing start method ("fork", "spawn", "fork_server" or `None`):
```python
dataset = dataset.map(_func, num_proc=2, mp_start_method="spawn")
```
Another option could be passing the `pool` as an argument.
### Motivation
By default, `multiprocess` (the `multiprocessing`-fork library that this repo uses) uses the "fork" start method for multiprocessing pools (for the default context). It could be changed by using `set_start_method`. However, this conditions the multiprocessing start method from all processing in a Python program that uses the default context, because [you can't call that function more than once](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods:~:text=set_start_method()%20should%20not%20be%20used%20more%20than%20once%20in%20the%20program.). My proposal is to allow using a different multiprocessing context, not to condition the whole Python program.
One reason to change the start method is that "fork" (the default) makes child processes likely deadlock if thread pools were created before (and also this is not supported by POSIX). For example, this happens when using PyTorch because OpenMP threads are used for CPU intra-op parallelism, which is enabled by default (e.g., for context see [`torch.set_num_threads`](https://pytorch.org/docs/stable/generated/torch.set_num_threads.html)). This can also be fixed by setting `torch.set_num_threads(1)` (or similarly by other methods) but this conditions the whole Python program as it can only be set once to guarantee its behavior (similarly to). There are noticeable performance differences when setting this number to 1 even when using GPU(s). Using, e.g., a "spawn" start method would solve this issue.
For more context, see:
* https://discuss.huggingface.co/t/dataset-map-stuck-with-torch-set-num-threads-set-to-2-or-larger/37984
* https://discuss.huggingface.co/t/using-num-proc-1-in-dataset-map-hangs/44310
### Your contribution
I'd be happy to review a PR that makes such a change. And if you really don't have the bandwidth for it, I'd consider creating one. | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6357/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6357/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6354/comments | https://api.github.com/repos/huggingface/datasets/issues/6354/events | https://github.com/huggingface/datasets/issues/6354 | 1,963,483,324 | I_kwDODunzps51CGC8 | 6,354 | `IterableDataset.from_spark` does not support multiple workers in pytorch `Dataloader` | {
"avatar_url": "https://avatars.githubusercontent.com/u/50199774?v=4",
"events_url": "https://api.github.com/users/NazyS/events{/privacy}",
"followers_url": "https://api.github.com/users/NazyS/followers",
"following_url": "https://api.github.com/users/NazyS/following{/other_user}",
"gists_url": "https://api.github.com/users/NazyS/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NazyS",
"id": 50199774,
"login": "NazyS",
"node_id": "MDQ6VXNlcjUwMTk5Nzc0",
"organizations_url": "https://api.github.com/users/NazyS/orgs",
"received_events_url": "https://api.github.com/users/NazyS/received_events",
"repos_url": "https://api.github.com/users/NazyS/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NazyS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NazyS/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NazyS",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I am having issues as well with this. \r\n\r\nHowever, the error I am getting is :\r\n`RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.`\r\n\r\nAlso did not work with pyspark==3.3.0 and py4j==0.10.9.5"
] | 1970-01-01T00:00:00.000001 | 1,699 | null | NONE | null | ### Describe the bug
Looks like `IterableDataset.from_spark` does not support multiple workers in pytorch `Dataloader` if I'm not missing anything.
Also, returns not consistent error messages, which probably depend on the nondeterministic order of worker executions
Some exampes I've encountered:
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 79, in __iter__
yield from self.generate_examples_fn()
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 49, in generate_fn
df_with_partition_id = df.select("*", pyspark.sql.functions.spark_partition_id().alias("part_id"))
File "/databricks/spark/python/pyspark/instrumentation_utils.py", line 54, in wrapper
logger.log_failure(
File "/databricks/spark/python/pyspark/databricks/usage_logger.py", line 70, in log_failure
self.logger.recordFunctionCallFailureEvent(
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__
return_value = get_return_value(
File "/databricks/spark/python/pyspark/errors/exceptions/captured.py", line 188, in deco
return f(*a, **kw)
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 342, in get_return_value
return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
KeyError: 'c'
```
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 79, in __iter__
yield from self.generate_examples_fn()
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 49, in generate_fn
df_with_partition_id = df.select("*", pyspark.sql.functions.spark_partition_id().alias("part_id"))
File "/databricks/spark/python/pyspark/sql/utils.py", line 162, in wrapped
return f(*args, **kwargs)
File "/databricks/spark/python/pyspark/sql/functions.py", line 4893, in spark_partition_id
return _invoke_function("spark_partition_id")
File "/databricks/spark/python/pyspark/sql/functions.py", line 98, in _invoke_function
return Column(jf(*args))
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__
return_value = get_return_value(
File "/databricks/spark/python/pyspark/errors/exceptions/captured.py", line 188, in deco
return f(*a, **kw)
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 342, in get_return_value
return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
KeyError: 'm'
```
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 79, in __iter__
yield from self.generate_examples_fn()
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 49, in generate_fn
df_with_partition_id = df.select("*", pyspark.sql.functions.spark_partition_id().alias("part_id"))
File "/databricks/spark/python/pyspark/sql/utils.py", line 162, in wrapped
return f(*args, **kwargs)
File "/databricks/spark/python/pyspark/sql/functions.py", line 4893, in spark_partition_id
return _invoke_function("spark_partition_id")
File "/databricks/spark/python/pyspark/sql/functions.py", line 97, in _invoke_function
jf = _get_jvm_function(name, SparkContext._active_spark_context)
File "/databricks/spark/python/pyspark/sql/functions.py", line 88, in _get_jvm_function
return getattr(sc._jvm.functions, name)
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1725, in __getattr__
raise Py4JError(message)
py4j.protocol.Py4JError: functions does not exist in the JVM
```
### Steps to reproduce the bug
```python
import pandas as pd
import numpy as np
batch_size = 16
pdf = pd.DataFrame({
key: np.random.rand(16*100) for key in ['feature', 'target']
})
test_df = spark.createDataFrame(pdf)
from datasets import IterableDataset
from torch.utils.data import DataLoader
ids = IterableDataset.from_spark(test_df)
for batch in DataLoader(ids, batch_size=16, num_workers=4):
for k, b in batch.items():
print(k, b.shape, sep='\t')
print('\n')
```
### Expected behavior
For `num_workers` equal to 0 or 1 works fine as expected:
```
feature torch.Size([16])
target torch.Size([16])
feature torch.Size([16])
target torch.Size([16])
....
```
Expected to support workers >1.
### Environment info
Databricks 13.3 LTS ML runtime - Spark 3.4.1
pyspark==3.4.1
py4j==0.10.9.7
datasets==2.13.1 and also tested with datasets==2.14.6 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6354/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6354/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6353/comments | https://api.github.com/repos/huggingface/datasets/issues/6353/events | https://github.com/huggingface/datasets/issues/6353 | 1,962,646,450 | I_kwDODunzps50-5uy | 6,353 | load_dataset save_to_disk load_from_disk error | {
"avatar_url": "https://avatars.githubusercontent.com/u/13804492?v=4",
"events_url": "https://api.github.com/users/brisker/events{/privacy}",
"followers_url": "https://api.github.com/users/brisker/followers",
"following_url": "https://api.github.com/users/brisker/following{/other_user}",
"gists_url": "https://api.github.com/users/brisker/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/brisker",
"id": 13804492,
"login": "brisker",
"node_id": "MDQ6VXNlcjEzODA0NDky",
"organizations_url": "https://api.github.com/users/brisker/orgs",
"received_events_url": "https://api.github.com/users/brisker/received_events",
"repos_url": "https://api.github.com/users/brisker/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/brisker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brisker/subscriptions",
"type": "User",
"url": "https://api.github.com/users/brisker",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"solved.\r\nfsspec version problem",
"I'm using the latest datasets and fsspec , but still got this error!\r\n\r\ndatasets : Version: 2.13.0\r\n\r\nfsspec Version: 2023.10.0\r\n\r\n```\r\nFile \"/home/guoby/app/Anaconda3-2021.05/envs/news/lib/python3.8/site-packages/datasets/load.py\", line 1892, in load_from_disk\r\n return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)\r\n File \"/home/guoby/app/Anaconda3-2021.05/envs/news/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1371, in load_from_disk\r\n dataset_dict[k] = Dataset.load_from_disk(\r\n File \"/home/guoby/app/Anaconda3-2021.05/envs/news/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1639, in load_from_disk\r\n fs_token_paths = fsspec.get_fs_token_paths(dataset_path, storage_options=storage_options)\r\n File \"/home/guoby/app/Anaconda3-2021.05/envs/news/lib/python3.8/site-packages/fsspec/core.py\", line 610, in get_fs_token_paths\r\n chain = _un_chain(urlpath0, storage_options or {})\r\n File \"/home/guoby/app/Anaconda3-2021.05/envs/news/lib/python3.8/site-packages/fsspec/core.py\", line 325, in _un_chain\r\n cls = get_filesystem_class(protocol)\r\n File \"/home/guoby/app/Anaconda3-2021.05/envs/news/lib/python3.8/site-packages/fsspec/registry.py\", line 232, in get_filesystem_class\r\n raise ValueError(f\"Protocol not known: {protocol}\")\r\n```",
"These two versions work.\r\n<img width=\"807\" alt=\"截圖 2023-11-22 下午5 55 28\" src=\"https://github.com/huggingface/datasets/assets/77866896/faa8333f-0519-4d69-b243-a8880cd7fc1f\">\r\n",
"datasets==2.10.1 and fsspec==2023.6.0 also works for me.",
"确实"
] | 1970-01-01T00:00:00.000001 | 1,712 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
datasets version: 2.10.1
I `load_dataset `and `save_to_disk` sucessfully on windows10( **and I `load_from_disk(/LLM/data/wiki)` succcesfully on windows10**), and I copy the dataset `/LLM/data/wiki`
into a ubuntu system, but when I `load_from_disk(/LLM/data/wiki)` on ubuntu, something weird happens:
```
load_from_disk('/LLM/data/wiki')
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1874, in load_from_disk
return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/dataset_dict.py", line 1309, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1543, in load_from_disk
fs_token_paths = fsspec.get_fs_token_paths(dataset_path, storage_options=storage_options)
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py", line 610, in get_fs_token_paths
chain = _un_chain(urlpath0, storage_options or {})
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py", line 325, in _un_chain
cls = get_filesystem_class(protocol)
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/registry.py", line 232, in get_filesystem_class
raise ValueError(f"Protocol not known: {protocol}")
ValueError: Protocol not known: /LLM/data/wiki
```
It seems that something went wrong on the arrow file?
How can I solve this , since currently I can not save_to_disk on ubuntu system
### Steps to reproduce the bug
datasets version: 2.10.1
### Expected behavior
datasets version: 2.10.1
### Environment info
datasets version: 2.10.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/13804492?v=4",
"events_url": "https://api.github.com/users/brisker/events{/privacy}",
"followers_url": "https://api.github.com/users/brisker/followers",
"following_url": "https://api.github.com/users/brisker/following{/other_user}",
"gists_url": "https://api.github.com/users/brisker/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/brisker",
"id": 13804492,
"login": "brisker",
"node_id": "MDQ6VXNlcjEzODA0NDky",
"organizations_url": "https://api.github.com/users/brisker/orgs",
"received_events_url": "https://api.github.com/users/brisker/received_events",
"repos_url": "https://api.github.com/users/brisker/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/brisker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brisker/subscriptions",
"type": "User",
"url": "https://api.github.com/users/brisker",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6353/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6353/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6352/comments | https://api.github.com/repos/huggingface/datasets/issues/6352/events | https://github.com/huggingface/datasets/issues/6352 | 1,962,296,057 | I_kwDODunzps509kL5 | 6,352 | Error loading wikitext data raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") | {
"avatar_url": "https://avatars.githubusercontent.com/u/68569076?v=4",
"events_url": "https://api.github.com/users/Ahmed-Roushdy/events{/privacy}",
"followers_url": "https://api.github.com/users/Ahmed-Roushdy/followers",
"following_url": "https://api.github.com/users/Ahmed-Roushdy/following{/other_user}",
"gists_url": "https://api.github.com/users/Ahmed-Roushdy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ahmed-Roushdy",
"id": 68569076,
"login": "Ahmed-Roushdy",
"node_id": "MDQ6VXNlcjY4NTY5MDc2",
"organizations_url": "https://api.github.com/users/Ahmed-Roushdy/orgs",
"received_events_url": "https://api.github.com/users/Ahmed-Roushdy/received_events",
"repos_url": "https://api.github.com/users/Ahmed-Roushdy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ahmed-Roushdy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ahmed-Roushdy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ahmed-Roushdy",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"+1 \r\n```\r\nFound cached dataset csv (file:///home/ubuntu/.cache/huggingface/datasets/theSquarePond___csv/theSquarePond--XXXXX-bbf0a8365d693d2c/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d)\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\nCell In[14], line 4\r\n 1 get_ipython().system('pip install -U datasets')\r\n 3 # Load dataset from the hub\r\n----> 4 dataset = load_dataset(dataset_name)\r\n\r\nFile ~/anaconda3/envs/python38-env/lib/python3.8/site-packages/datasets/load.py:1810, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1806 # Build dataset for splits\r\n 1807 keep_in_memory = (\r\n 1808 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1809 )\r\n-> 1810 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)\r\n 1811 # Rename and cast features to match task schema\r\n 1812 if task is not None:\r\n\r\nFile ~/anaconda3/envs/python38-env/lib/python3.8/site-packages/datasets/builder.py:1128, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)\r\n 1126 is_local = not is_remote_filesystem(self._fs)\r\n 1127 if not is_local:\r\n-> 1128 raise NotImplementedError(f\"Loading a dataset cached in a {type(self._fs).__name__} is not supported.\")\r\n 1129 if not os.path.exists(self._output_dir):\r\n 1130 raise FileNotFoundError(\r\n 1131 f\"Dataset {self.name}: could not find data in {self._output_dir}. Please make sure to call \"\r\n 1132 \"builder.download_and_prepare(), or use \"\r\n 1133 \"datasets.load_dataset() before trying to access the Dataset object.\"\r\n 1134 )\r\n\r\nNotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.\r\n```",
"+1\r\n\r\n```\r\nFound cached dataset csv ([file://C:/Users/Shady/.cache/huggingface/datasets/knkarthick___csv/knkarthick--dialogsum-cd36827d3490488d/0.0.0/6954658bab30a358235fa864b05cf819af0e179325c740e4bc853bcc7ec513e1](file:///C:/Users/Shady/.cache/huggingface/datasets/knkarthick___csv/knkarthick--dialogsum-cd36827d3490488d/0.0.0/6954658bab30a358235fa864b05cf819af0e179325c740e4bc853bcc7ec513e1))\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\nCell In[38], line 3\r\n 1 huggingface_dataset_name = \"knkarthick/dialogsum\"\r\n----> 3 dataset = load_dataset(huggingface_dataset_name)\r\n\r\nFile D:\\Desktop\\Workspace\\GenAI\\genai\\lib\\site-packages\\datasets\\load.py:1804, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1800 # Build dataset for splits\r\n 1801 keep_in_memory = (\r\n 1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1803 )\r\n-> 1804 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)\r\n 1805 # Rename and cast features to match task schema\r\n 1806 if task is not None:\r\n\r\nFile D:\\Desktop\\Workspace\\GenAI\\genai\\lib\\site-packages\\datasets\\builder.py:1108, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)\r\n 1106 is_local = not is_remote_filesystem(self._fs)\r\n 1107 if not is_local:\r\n-> 1108 raise NotImplementedError(f\"Loading a dataset cached in a {type(self._fs).__name__} is not supported.\")\r\n 1109 if not os.path.exists(self._output_dir):\r\n 1110 raise FileNotFoundError(\r\n 1111 f\"Dataset {self.name}: could not find data in {self._output_dir}. Please make sure to call \"\r\n 1112 \"builder.download_and_prepare(), or use \"\r\n 1113 \"datasets.load_dataset() before trying to access the Dataset object.\"\r\n 1114 )\r\n\r\nNotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.\r\n```",
"This error stems from a breaking change in `fsspec`. It has been fixed in the latest `datasets` release (`2.14.6`). Updating the installation with `pip install -U datasets` should fix the issue.\r\n",
"> 此错误源于 中的重大更改。此问题已在最新版本 () 中修复。更新安装应该可以解决此问题。`fsspec``datasets``2.14.6``pip install -U datasets`\r\n\r\nthanks , 太好啦,刚好解决了我的问题,GPT都没解决了,终于被你搞定了",
"https://stackoverflow.com/questions/77433096/notimplementederror-loading-a-dataset-cached-in-a-localfilesystem-is-not-suppor/77433141#77433141",
"Fixed by:\r\n- https://github.com/huggingface/datasets/pull/6334\r\n\r\nThe fix was released in `datasets-2.14.6`.",
"this is fixed in 2.15.0, but broken again in 2.17.0. Can someone verify?",
"I'm on `2.17.1` and can confirm it's broken again. Downgrading to `2.16` helped.",
"> 2.14.6\r\n\r\ni update the version but the error still exist \r\n",
"The issue seems to persist in 2.18.0",
"same problem in 2.18.0",
"Which version of `fsspec` and OS are you using ?",
"> Which version of `fsspec` and OS are you using ?\r\n\r\n`fsspec-2023.10.0` and Windows 10, guess fsspec version too old..."
] | 1970-01-01T00:00:00.000001 | 1,710 | 1970-01-01T00:00:00.000001 | NONE | null | I was trying to load the wiki dataset, but i got this error
traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train')
File "/home/aelkordy/.conda/envs/prune_llm/lib/python3.9/site-packages/datasets/load.py", line 1804, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
File "/home/aelkordy/.conda/envs/prune_llm/lib/python3.9/site-packages/datasets/builder.py", line 1108, in as_dataset
raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6352/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6352/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6350/comments | https://api.github.com/repos/huggingface/datasets/issues/6350/events | https://github.com/huggingface/datasets/issues/6350 | 1,961,869,203 | I_kwDODunzps5077-T | 6,350 | Different objects are returned from calls that should be returning the same kind of object. | {
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/phalexo",
"id": 4603365,
"login": "phalexo",
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"repos_url": "https://api.github.com/users/phalexo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/phalexo",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"`load_dataset` returns a `DatasetDict` object unless `split` is defined, in which case it returns a `Dataset` (or a list of datasets if `split` is a list). We've discussed dropping `DatasetDict` from the API in https://github.com/huggingface/datasets/issues/5189 to always return the same type in `load_dataset` and support datasets without (explicit) splits. IIRC the main discussion point is deciding what to return when loading a dataset with multiple splits, but `split` is not specified. What would you expect as a return value in that scenario?",
"> `load_dataset` returns a `DatasetDict` object unless `split` is defined, in which case it returns a `Dataset` (or a list of datasets if `split` is a list). We've discussed dropping `DatasetDict` from the API in #5189 to always return the same type in `load_dataset` and support datasets without (explicit) splits. IIRC the main discussion point is deciding what to return when loading a dataset with multiple splits, but `split` is not specified. What would you expect as a return value in that scenario?\r\n\r\nWouldn't a dataset with multiple splits already have keys and their related data arrays?\r\n\r\nLets say the dataset has \"train\" : trainset, \"valid\": validset and \"test\": testset\r\n\r\nSo a dictionary can be returned,, i.e.\r\n\r\n{ \r\n\"train\": trainset,\r\n\"valid\": validset,\r\n\"test\": testset\r\n}\r\n\r\nif a split is provided split=['train[:80%]', 'valid[80%:90%]', 'test[90%:100%]']\r\n\r\nwould also return the same dictionary as above.\r\n\r\nsplit='train[:10%]' should return the same value as split=['train[:10%]']\r\n\r\n{\r\n\"train\": trainset\r\n}\r\n "
] | 1970-01-01T00:00:00.000001 | 1,698 | null | NONE | null | ### Describe the bug
1. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir, split='train[:1%]')
2. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir)
The only difference I would expect these calls to have is the size of the dataset.
But, while 2. returns a dictionary with "train" key in it, 1. returns a dataset WITHOUT any initial "train" keyword.
Both calls are to be used within exactly the same context. They should return identically structured datasets of different size.
### Steps to reproduce the bug
See above.
### Expected behavior
Expect both calls to return the same structured Dataset structure but with different number of elements, i.e. call 1. should have 1% of the data of the call 2.0
### Environment info
Ubuntu 20.04
gcc 9.x.x.
It is really irrelevant. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6350/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6350/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6349/comments | https://api.github.com/repos/huggingface/datasets/issues/6349/events | https://github.com/huggingface/datasets/issues/6349 | 1,961,435,673 | I_kwDODunzps506SIZ | 6,349 | Can't load ds = load_dataset("imdb") | {
"avatar_url": "https://avatars.githubusercontent.com/u/86415736?v=4",
"events_url": "https://api.github.com/users/vivianc2/events{/privacy}",
"followers_url": "https://api.github.com/users/vivianc2/followers",
"following_url": "https://api.github.com/users/vivianc2/following{/other_user}",
"gists_url": "https://api.github.com/users/vivianc2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vivianc2",
"id": 86415736,
"login": "vivianc2",
"node_id": "MDQ6VXNlcjg2NDE1NzM2",
"organizations_url": "https://api.github.com/users/vivianc2/orgs",
"received_events_url": "https://api.github.com/users/vivianc2/received_events",
"repos_url": "https://api.github.com/users/vivianc2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vivianc2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vivianc2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vivianc2",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I'm unable to reproduce this error. The server hosting the files may have been down temporarily, so try again.",
"getting the same error",
"I am getting the following error:\r\nEnv: Python3.10\r\ndatasets: 2.10.1\r\nLinux: Amazon Linux2\r\n\r\n`Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/datasets/load.py\", line 1218, in dataset_module_factory\r\n raise e1 from None\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/datasets/load.py\", line 1202, in dataset_module_factory\r\n ).get_module()\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/datasets/load.py\", line 767, in get_module\r\n else get_data_patterns_in_dataset_repository(hfh_dataset_info, self.data_dir)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/datasets/data_files.py\", line 675, in get_data_patterns_in_dataset_repository\r\n return _get_data_files_patterns(resolver)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/datasets/data_files.py\", line 236, in _get_data_files_patterns\r\n data_files = pattern_resolver(pattern)\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/datasets/data_files.py\", line 486, in _resolve_single_pattern_in_dataset_repository\r\n glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)]\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/fsspec/spec.py\", line 606, in glob\r\n pattern = glob_translate(path + (\"/\" if ends_with_sep else \"\"))\r\n File \"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/fsspec/utils.py\", line 734, in glob_translate\r\n raise ValueError(\r\nValueError: Invalid pattern: '**' can only be an entire path component`",
"Resolved by upgrading datasets version to 2.18.0"
] | 1970-01-01T00:00:00.000001 | 1,710 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I did `from datasets import load_dataset, load_metric` and then `ds = load_dataset("imdb")` and it gave me the error:
ExpectedMoreDownloadedFiles: {'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'}
I tried doing `ds = load_dataset("imdb",download_mode="force_redownload")` as well as reinstalling dataset. I still face this problem.
### Steps to reproduce the bug
1. from datasets import load_dataset, load_metric
2. ds = load_dataset("imdb")
### Expected behavior
It should load and give me this when I run `ds`
DatasetDict({
train: Dataset({
features: ['text', 'label'],
num_rows: 25000
})
test: Dataset({
features: ['text', 'label'],
num_rows: 25000
})
unsupervised: Dataset({
features: ['text', 'label'],
num_rows: 50000
})
})
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.4.0-164-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.16.2
- PyArrow version: 13.0.0
- Pandas version: 2.0.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6349/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6349/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6348/comments | https://api.github.com/repos/huggingface/datasets/issues/6348/events | https://github.com/huggingface/datasets/issues/6348 | 1,961,268,504 | I_kwDODunzps505pUY | 6,348 | Parquet stream-conversion fails to embed images/audio files from gated repos | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,698 | null | COLLABORATOR | null | it seems to be an issue with datasets not passing the token to embed_table_storage when generating a dataset
See https://github.com/huggingface/datasets-server/issues/2010 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6348/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6348/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6347/comments | https://api.github.com/repos/huggingface/datasets/issues/6347/events | https://github.com/huggingface/datasets/issues/6347 | 1,959,004,835 | I_kwDODunzps50xAqj | 6,347 | Incorrect example code in 'Create a dataset' docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/72076688?v=4",
"events_url": "https://api.github.com/users/rwood-97/events{/privacy}",
"followers_url": "https://api.github.com/users/rwood-97/followers",
"following_url": "https://api.github.com/users/rwood-97/following{/other_user}",
"gists_url": "https://api.github.com/users/rwood-97/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwood-97",
"id": 72076688,
"login": "rwood-97",
"node_id": "MDQ6VXNlcjcyMDc2Njg4",
"organizations_url": "https://api.github.com/users/rwood-97/orgs",
"received_events_url": "https://api.github.com/users/rwood-97/received_events",
"repos_url": "https://api.github.com/users/rwood-97/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwood-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwood-97/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwood-97",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This was fixed in https://github.com/huggingface/datasets/pull/6247. You can find the fix in the `main` version of the docs",
"Ah great, thanks :)"
] | 1970-01-01T00:00:00.000001 | 1,698 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
On [this](https://huggingface.co/docs/datasets/create_dataset) page, the example code for loading in images and audio is incorrect.
Currently, examples are:
``` python
from datasets import ImageFolder
dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon")
```
and
``` python
from datasets import AudioFolder
dataset = load_dataset("audiofolder", data_dir="/path/to/folder")
```
I'm pretty sure the imports are wrong and should be:
``` python
from datasets import load_dataset
dataset = load_dataset("audiofolder", data_dir="/path/to/folder")
```
I am happy to update this if this is right but just wanted to check before making any changes.
### Steps to reproduce the bug
Go to https://huggingface.co/docs/datasets/create_dataset
### Expected behavior
N/A
### Environment info
N/A | {
"avatar_url": "https://avatars.githubusercontent.com/u/72076688?v=4",
"events_url": "https://api.github.com/users/rwood-97/events{/privacy}",
"followers_url": "https://api.github.com/users/rwood-97/followers",
"following_url": "https://api.github.com/users/rwood-97/following{/other_user}",
"gists_url": "https://api.github.com/users/rwood-97/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwood-97",
"id": 72076688,
"login": "rwood-97",
"node_id": "MDQ6VXNlcjcyMDc2Njg4",
"organizations_url": "https://api.github.com/users/rwood-97/orgs",
"received_events_url": "https://api.github.com/users/rwood-97/received_events",
"repos_url": "https://api.github.com/users/rwood-97/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwood-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwood-97/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwood-97",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6347/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6347/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6345/comments | https://api.github.com/repos/huggingface/datasets/issues/6345/events | https://github.com/huggingface/datasets/issues/6345 | 1,957,707,870 | I_kwDODunzps50sEBe | 6,345 | support squad structure datasets using a YAML parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/138524319?v=4",
"events_url": "https://api.github.com/users/MajdTannous1/events{/privacy}",
"followers_url": "https://api.github.com/users/MajdTannous1/followers",
"following_url": "https://api.github.com/users/MajdTannous1/following{/other_user}",
"gists_url": "https://api.github.com/users/MajdTannous1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MajdTannous1",
"id": 138524319,
"login": "MajdTannous1",
"node_id": "U_kgDOCEG2nw",
"organizations_url": "https://api.github.com/users/MajdTannous1/orgs",
"received_events_url": "https://api.github.com/users/MajdTannous1/received_events",
"repos_url": "https://api.github.com/users/MajdTannous1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MajdTannous1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MajdTannous1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MajdTannous1",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,698 | null | NONE | null | ### Feature request
Since the squad structure is widely used, I think it could be beneficial to support it using a YAML parameter.
could you implement automatic data loading of squad-like data using squad JSON format, to read it from JSON files and view it in the correct squad structure.
The dataset structure should be like this:
https://huggingface.co/datasets/squad
Columns:id,title,context,question,answers
### Motivation
Dataset repo requires arbitrary Python code execution
### Your contribution
The dataset structure should be like this:
https://huggingface.co/datasets/squad
Columns:id,title,context,question,answers
train and dev sets in squad structure JSON files | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6345/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6345/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6333/comments | https://api.github.com/repos/huggingface/datasets/issues/6333/events | https://github.com/huggingface/datasets/issues/6333 | 1,956,714,423 | I_kwDODunzps50oRe3 | 6,333 | Support fsspec 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Hi @albertvillanova @lhoestq \r\n\r\nI believe the pull request that pins the fsspec version (https://github.com/huggingface/datasets/pull/6331) was merged by mistake. Another fix for the issue was merged on the same day an hour apart. See https://github.com/huggingface/datasets/pull/6334\r\n\r\nI'm now having an issue in my project where I can't use newer versions of fsspec.\r\n\r\nCan we remove the pin?\r\n\r\nHave a nice day! :)",
"Hi @tomscholz,\r\n\r\nThanks for pointing this out. I think you are right.\r\n\r\nI am doing some cross-checks and fixing it. ",
"Hi again, @tomscholz.\r\n\r\nAfter a more cautious investigation, I think the pin is OK because there are other reasons for it. Chronologically:\r\n- #6331 \r\n- #6334\r\n- #6336 \r\n- #6337 \r\n\r\nThe reason is that after version 2023.10.0, they changed again the behavior of their `glob` function. See: https://github.com/huggingface/datasets/pull/6337#issuecomment-1774930135\r\nWe are working on our side to support both previous and new glob behavior.\r\n\r\nNote:\r\n- First pin was < 2023.10.0\r\n- Last pin is <= 2023.10.0",
"Fixed by #6334 and #6336."
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | MEMBER | null | Once root issue is fixed, remove temporary pin of fsspec < 2023.10.0 introduced by:
- #6331
Related to issue:
- #6330
As @ZachNagengast suggested, the issue might be related to:
- https://github.com/fsspec/filesystem_spec/pull/1381 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6333/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6333/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6330/comments | https://api.github.com/repos/huggingface/datasets/issues/6330/events | https://github.com/huggingface/datasets/issues/6330 | 1,956,053,294 | I_kwDODunzps50lwEu | 6,330 | Latest fsspec==2023.10.0 issue with streaming datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gists_url": "https://api.github.com/users/ZachNagengast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZachNagengast",
"id": 1981179,
"login": "ZachNagengast",
"node_id": "MDQ6VXNlcjE5ODExNzk=",
"organizations_url": "https://api.github.com/users/ZachNagengast/orgs",
"received_events_url": "https://api.github.com/users/ZachNagengast/received_events",
"repos_url": "https://api.github.com/users/ZachNagengast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZachNagengast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZachNagengast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZachNagengast",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"I also encountered a similar error below.\r\nAppreciate the team could shed some light on this issue.\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n[/home/ubuntu/work/EveryDream2trainer/prepare_dataset.ipynb](https://vscode-remote+ssh-002dremote-002braspberry-002dg5-002e4x.vscode-resource.vscode-cdn.net/home/ubuntu/work/EveryDream2trainer/prepare_dataset.ipynb) Cell 1 line 4\r\n [1](vscode-notebook-cell://ssh-remote%2Braspberry-g5.4x/home/ubuntu/work/EveryDream2trainer/prepare_dataset.ipynb#W0sdnNjb2RlLXJlbW90ZQ%3D%3D?line=0) from datasets import load_dataset, load_dataset\r\n [3](vscode-notebook-cell://ssh-remote%2Braspberry-g5.4x/home/ubuntu/work/EveryDream2trainer/prepare_dataset.ipynb#W0sdnNjb2RlLXJlbW90ZQ%3D%3D?line=2) # ds = load_dataset(\"parquet\", data_dir=\"/home/ubuntu/work/EveryDream2trainer/datasets/monse_v1/data\")\r\n----> [4](vscode-notebook-cell://ssh-remote%2Braspberry-g5.4x/home/ubuntu/work/EveryDream2trainer/prepare_dataset.ipynb#W0sdnNjb2RlLXJlbW90ZQ%3D%3D?line=3) ds = load_dataset(\"Raspberry-ai/monse-v1\")\r\n\r\nFile [/opt/conda/envs/everydream/lib/python3.10/site-packages/datasets/load.py:1804](https://vscode-remote+ssh-002dremote-002braspberry-002dg5-002e4x.vscode-resource.vscode-cdn.net/opt/conda/envs/everydream/lib/python3.10/site-packages/datasets/load.py:1804), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1800 # Build dataset for splits\r\n 1801 keep_in_memory = (\r\n 1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1803 )\r\n-> 1804 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)\r\n 1805 # Rename and cast features to match task schema\r\n 1806 if task is not None:\r\n\r\nFile [/opt/conda/envs/everydream/lib/python3.10/site-packages/datasets/builder.py:1108](https://vscode-remote+ssh-002dremote-002braspberry-002dg5-002e4x.vscode-resource.vscode-cdn.net/opt/conda/envs/everydream/lib/python3.10/site-packages/datasets/builder.py:1108), in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)\r\n 1106 is_local = not is_remote_filesystem(self._fs)\r\n 1107 if not is_local:\r\n-> 1108 raise NotImplementedError(f\"Loading a dataset cached in a {type(self._fs).__name__} is not supported.\")\r\n 1109 if not os.path.exists(self._output_dir):\r\n 1110 raise FileNotFoundError(\r\n 1111 f\"Dataset {self.name}: could not find data in {self._output_dir}. Please make sure to call \"\r\n 1112 \"builder.download_and_prepare(), or use \"\r\n 1113 \"datasets.load_dataset() before trying to access the Dataset object.\"\r\n 1114 )\r\n\r\nNotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.\r\n```\r\n\r\nCode to reproduce the issue:\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"Raspberry-ai/monse-v1\")\r\n```\r\n\r\n\r\nDependencies:\r\n```\r\nPackage Version\r\n------------------------- ------------\r\nabsl-py 2.0.0\r\naccelerate 0.23.0\r\naiohttp 3.8.4\r\naiosignal 1.3.1\r\nantlr4-python3-runtime 4.9.3\r\nanyio 4.0.0\r\nappdirs 1.4.4\r\nargon2-cffi 23.1.0\r\nargon2-cffi-bindings 21.2.0\r\narrow 1.3.0\r\nasttokens 2.4.0\r\nasync-lru 2.0.4\r\nasync-timeout 4.0.3\r\nattrs 23.1.0\r\nBabel 2.13.0\r\nbackcall 0.2.0\r\nbeautifulsoup4 4.12.2\r\nbitsandbytes 0.41.1\r\nbleach 6.1.0\r\nbraceexpand 0.1.7\r\ncachetools 5.3.1\r\ncertifi 2023.7.22\r\ncffi 1.16.0\r\ncharset-normalizer 3.3.1\r\nclick 8.1.7\r\ncmake 3.27.7\r\ncolorama 0.4.6\r\ncomm 0.1.4\r\ncompel 1.1.6\r\ndatasets 2.11.0\r\ndebugpy 1.8.0\r\ndecorator 5.1.1\r\ndefusedxml 0.7.1\r\ndiffusers 0.18.0\r\ndill 0.3.6\r\ndocker-pycreds 0.4.0\r\ndowg 0.3.1\r\neinops 0.7.0\r\neinops-exts 0.0.4\r\nexceptiongroup 1.1.3\r\nexecuting 2.0.0\r\nfastjsonschema 2.18.1\r\nfilelock 3.12.4\r\nfqdn 1.5.1\r\nfrozenlist 1.4.0\r\nfsspec 2023.10.0\r\nftfy 6.1.1\r\ngitdb 4.0.11\r\nGitPython 3.1.40\r\ngoogle-auth 2.23.3\r\ngoogle-auth-oauthlib 1.1.0\r\ngrpcio 1.59.0\r\nhuggingface-hub 0.18.0\r\nidna 3.4\r\nimportlib-metadata 6.8.0\r\ninflection 0.5.1\r\nipykernel 6.25.2\r\nipython 8.16.1\r\nisoduration 20.11.0\r\njedi 0.19.1\r\nJinja2 3.1.2\r\njoblib 1.3.2\r\njson5 0.9.14\r\njsonpointer 2.4\r\njsonschema 4.19.1\r\njsonschema-specifications 2023.7.1\r\njupyter_client 8.4.0\r\njupyter_core 5.4.0\r\njupyter-events 0.8.0\r\njupyter-lsp 2.2.0\r\njupyter_server 2.8.0\r\njupyter_server_terminals 0.4.4\r\njupyterlab 4.0.7\r\njupyterlab-pygments 0.2.2\r\njupyterlab_server 2.25.0\r\nlightning-utilities 0.9.0\r\nlion-pytorch 0.1.2\r\nlit 17.0.3\r\nMarkdown 3.5\r\nMarkupSafe 2.1.3\r\nmatplotlib-inline 0.1.6\r\nmistune 3.0.2\r\nmore-itertools 10.1.0\r\nmpmath 1.3.0\r\nmultidict 6.0.4\r\nmultiprocess 0.70.14\r\nmypy-extensions 1.0.0\r\nnbclient 0.8.0\r\nnbconvert 7.9.2\r\nnbformat 5.9.2\r\nnest-asyncio 1.5.8\r\nnetworkx 3.2\r\nnltk 3.8.1\r\nnotebook_shim 0.2.3\r\nnumpy 1.23.5\r\noauthlib 3.2.2\r\nomegaconf 2.2.3\r\nopen-clip-torch 2.22.0\r\nopen-flamingo 2.0.0\r\noverrides 7.4.0\r\npackaging 23.2\r\npandas 2.1.1\r\npandocfilters 1.5.0\r\nparso 0.8.3\r\npathtools 0.1.2\r\npexpect 4.8.0\r\npickleshare 0.7.5\r\nPillow 10.1.0\r\npip 23.3.1\r\nplatformdirs 3.11.0\r\nprometheus-client 0.17.1\r\nprompt-toolkit 3.0.39\r\nprotobuf 3.20.1\r\npsutil 5.9.6\r\nptyprocess 0.7.0\r\npure-eval 0.2.2\r\npyarrow 13.0.0\r\npyasn1 0.5.0\r\npyasn1-modules 0.3.0\r\npycparser 2.21\r\npyDeprecate 0.3.2\r\nPygments 2.16.1\r\npynvml 11.4.1\r\npyparsing 3.1.1\r\npyre-extensions 0.0.29\r\npython-dateutil 2.8.2\r\npython-json-logger 2.0.7\r\npytorch-lightning 1.6.5\r\npytz 2023.3.post1\r\nPyYAML 6.0.1\r\npyzmq 25.1.1\r\nreferencing 0.30.2\r\nregex 2023.10.3\r\nrequests 2.31.0\r\nrequests-oauthlib 1.3.1\r\nresponses 0.18.0\r\nrfc3339-validator 0.1.4\r\nrfc3986-validator 0.1.1\r\nrpds-py 0.10.6\r\nrsa 4.9\r\nsafetensors 0.4.0\r\nscipy 1.11.3\r\nSend2Trash 1.8.2\r\nsentencepiece 0.1.98\r\nsentry-sdk 1.32.0\r\nsetproctitle 1.3.3\r\nsetuptools 68.2.2\r\nsix 1.16.0\r\nsmmap 5.0.1\r\nsniffio 1.3.0\r\nsoupsieve 2.5\r\nstack-data 0.6.3\r\nsympy 1.12\r\ntensorboard 2.15.0\r\ntensorboard-data-server 0.7.1\r\nterminado 0.17.1\r\ntimm 0.9.8\r\ntinycss2 1.2.1\r\ntokenizers 0.13.3\r\ntomli 2.0.1\r\ntorch 2.0.1+cu118\r\ntorchmetrics 1.2.0\r\ntorchvision 0.15.2+cu118\r\ntornado 6.3.3\r\ntqdm 4.66.1\r\ntraitlets 5.11.2\r\ntransformers 4.29.2\r\ntriton 2.0.0\r\ntypes-python-dateutil 2.8.19.14\r\ntyping_extensions 4.8.0\r\ntyping-inspect 0.9.0\r\ntzdata 2023.3\r\nuri-template 1.3.0\r\nurllib3 2.0.7\r\nwandb 0.15.12\r\nwcwidth 0.2.8\r\nwebcolors 1.13\r\nwebdataset 0.2.62\r\nwebencodings 0.5.1\r\nwebsocket-client 1.6.4\r\nWerkzeug 3.0.0\r\nwheel 0.41.2\r\nxformers 0.0.20\r\nxxhash 3.4.1\r\nyarl 1.9.2\r\nzipp 3.17.0\r\n```",
"@humpydonkey FWIW setting fsspec down to 2023.9.2 fixed the issue\r\n\r\n`pip install fsspec==2023.9.2`",
"got it, thanks @ZachNagengast ",
"Thanks for reporting and for the investigation, @ZachNagengast! :hugs: \r\n\r\nWe are investigating the root cause of the issue. In the meantime, we are going to pin fsspec < 2023.10.0. ",
"https://stackoverflow.com/questions/77433096/notimplementederror-loading-a-dataset-cached-in-a-localfilesystem-is-not-suppor/77433141#77433141",
"You can also update `datasets`:\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nIt will also update `fsspec` to use the right version",
"This seems to work fine in 2.19.0. Hopefully it will not break again",
"not working for 2.19.1 !"
] | 1970-01-01T00:00:00.000001 | 1,715 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
Loading a streaming dataset with this version of fsspec fails with the following error:
`NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.`
I suspect the issue is with this PR
https://github.com/fsspec/filesystem_spec/pull/1381
### Steps to reproduce the bug
1. Upgrade fsspec to version `2023.10.0`
2. Attempt to load a streaming dataset e.g. `load_dataset("laion/gpt4v-emotion-dataset", split="train", streaming=True)`
3. Observe the following exception:
```
File "/opt/hostedtoolcache/Python/3.11.6/x64/lib/python3.11/site-packages/datasets/load.py", line 2146, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.6/x64/lib/python3.11/site-packages/datasets/builder.py", line 1318, in as_streaming_dataset
raise NotImplementedError(
NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.
```
### Expected behavior
Should stream the dataset as normal.
### Environment info
datasets@main
fsspec==2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6330/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6330/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6329/comments | https://api.github.com/repos/huggingface/datasets/issues/6329/events | https://github.com/huggingface/datasets/issues/6329 | 1,955,858,020 | I_kwDODunzps50lAZk | 6,329 | شبکه های متن به گفتار ابتدا متن داده شده را به بازنمایی میانی | {
"avatar_url": "https://avatars.githubusercontent.com/u/147399213?v=4",
"events_url": "https://api.github.com/users/shabnam706/events{/privacy}",
"followers_url": "https://api.github.com/users/shabnam706/followers",
"following_url": "https://api.github.com/users/shabnam706/following{/other_user}",
"gists_url": "https://api.github.com/users/shabnam706/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shabnam706",
"id": 147399213,
"login": "shabnam706",
"node_id": "U_kgDOCMkiLQ",
"organizations_url": "https://api.github.com/users/shabnam706/orgs",
"received_events_url": "https://api.github.com/users/shabnam706/received_events",
"repos_url": "https://api.github.com/users/shabnam706/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shabnam706/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shabnam706/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shabnam706",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,698 | 1970-01-01T00:00:00.000001 | NONE | null | شبکه های متن به گفتار ابتدا متن داده شده را به بازنمایی میانی
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6329/timeline | null | completed | null | null | false | 0 |
Subsets and Splits