url (string, len 61) | repository_url (string, 1 class) | labels_url (string, len 75) | comments_url (string, len 70) | events_url (string, len 68) | html_url (string, len 51) | id (int64, 1.92B-2.7B) | node_id (string, len 18) | number (int64, 6.27k-7.3k) | title (string, len 2-150) | user (dict) | labels (list, len 0-2) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list, len 0-1) | milestone (null) | comments (sequence, len 0-23) | created_at (timestamp[ns]) | updated_at (int64, 1.7k-1.73k) | closed_at (timestamp[ns]) | author_association (string, 4 classes) | active_lock_reason (null) | body (string, len 3-47.9k, nullable) | closed_by (dict) | reactions (dict) | timeline_url (string, len 70) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (null) | pull_request (null) | is_pull_request (bool, 1 class) | time_to_close (float64, 0-0, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6702/comments | https://api.github.com/repos/huggingface/datasets/issues/6702/events | https://github.com/huggingface/datasets/issues/6702 | 2,161,938,484 | I_kwDODunzps6A3JA0 | 6,702 | Push samples to dataset on hub without having the dataset locally | {
"avatar_url": "https://avatars.githubusercontent.com/u/17854096?v=4",
"events_url": "https://api.github.com/users/jbdel/events{/privacy}",
"followers_url": "https://api.github.com/users/jbdel/followers",
"following_url": "https://api.github.com/users/jbdel/following{/other_user}",
"gists_url": "https://api.github.com/users/jbdel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbdel",
"id": 17854096,
"login": "jbdel",
"node_id": "MDQ6VXNlcjE3ODU0MDk2",
"organizations_url": "https://api.github.com/users/jbdel/orgs",
"received_events_url": "https://api.github.com/users/jbdel/received_events",
"repos_url": "https://api.github.com/users/jbdel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbdel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbdel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbdel",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi ! For now I would recommend creating a new Parquet file using `dataset_new.to_parquet()` and upload it to HF using `huggingface_hub` every time you get a new batch of data. You can name the Parquet files `0000.parquet`, `0001.parquet`, etc.\r\n\r\nThough maybe make sure to not upload one file per sample since that would be inefficient. You can buffer your data and upload when you have enough new samples for example",
"This is excellent, thanks!"
] | 1970-01-01T00:00:00.000001 | 1,709 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
Say I have the following code:
```
from datasets import Dataset
import pandas as pd
new_data = {
"column_1": ["value1", "value2"],
"column_2": ["value3", "value4"],
}
df_new = pd.DataFrame(new_data)
dataset_new = Dataset.from_pandas(df_new)
# add these samples to a remote dataset
```
It would be great to have a way to push dataset_new to a remote dataset that respects the same schema. This way one would not have to do the following:
```
from datasets import load_dataset, concatenate_datasets
dataset = load_dataset('username/dataset_name', token='your_hf_token_here')
updated_dataset = concatenate_datasets([dataset['train'], dataset_new])
updated_dataset.push_to_hub('username/dataset_name', token='your_hf_token_here')
```
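A minimal sketch of the approach suggested in the comments above: buffer new samples locally, write them as a numbered Parquet shard, and upload only that shard with `huggingface_hub` (the repo id, filename, and shard index are placeholders):
```python
# Sketch (assumes an existing dataset repo): write the new batch as a numbered
# Parquet shard and upload just that file instead of the whole dataset.
import pandas as pd
from datasets import Dataset
from huggingface_hub import HfApi

new_data = {
    "column_1": ["value1", "value2"],
    "column_2": ["value3", "value4"],
}
dataset_new = Dataset.from_pandas(pd.DataFrame(new_data))

shard_index = 0  # increment for every uploaded batch
local_path = f"{shard_index:04d}.parquet"
dataset_new.to_parquet(local_path)

HfApi().upload_file(
    path_or_fileobj=local_path,
    path_in_repo=f"data/{shard_index:04d}.parquet",
    repo_id="username/dataset_name",
    repo_type="dataset",
    token="your_hf_token_here",
)
```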
### Motivation
No need to download the dataset.
### Your contribution
Maybe this feature already exists; I didn't see it, though. I do not have the expertise to do this. | {
"avatar_url": "https://avatars.githubusercontent.com/u/17854096?v=4",
"events_url": "https://api.github.com/users/jbdel/events{/privacy}",
"followers_url": "https://api.github.com/users/jbdel/followers",
"following_url": "https://api.github.com/users/jbdel/following{/other_user}",
"gists_url": "https://api.github.com/users/jbdel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbdel",
"id": 17854096,
"login": "jbdel",
"node_id": "MDQ6VXNlcjE3ODU0MDk2",
"organizations_url": "https://api.github.com/users/jbdel/orgs",
"received_events_url": "https://api.github.com/users/jbdel/received_events",
"repos_url": "https://api.github.com/users/jbdel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbdel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbdel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbdel",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6702/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6702/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6700/comments | https://api.github.com/repos/huggingface/datasets/issues/6700/events | https://github.com/huggingface/datasets/issues/6700 | 2,158,871,038 | I_kwDODunzps6ArcH- | 6,700 | remove_columns is not in-place but the doc shows it is in-place | {
"avatar_url": "https://avatars.githubusercontent.com/u/32047804?v=4",
"events_url": "https://api.github.com/users/shelfofclub/events{/privacy}",
"followers_url": "https://api.github.com/users/shelfofclub/followers",
"following_url": "https://api.github.com/users/shelfofclub/following{/other_user}",
"gists_url": "https://api.github.com/users/shelfofclub/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shelfofclub",
"id": 32047804,
"login": "shelfofclub",
"node_id": "MDQ6VXNlcjMyMDQ3ODA0",
"organizations_url": "https://api.github.com/users/shelfofclub/orgs",
"received_events_url": "https://api.github.com/users/shelfofclub/received_events",
"repos_url": "https://api.github.com/users/shelfofclub/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shelfofclub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shelfofclub/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shelfofclub",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Good catch! I've opened a PR with a fix in the `transformers` repo.",
"@mariosasko Thanks!\r\n\r\nWill the doc of `datasets` be updated?\r\n\r\nI find some possible mistakes in doc about whether `remove_columns` is in-place.\r\n1. [You can also remove a column using map() with remove_columns but the present method is in-place (doesn’t copy the data to a new dataset) and is thus faster.](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.Dataset.remove_columns)\r\n2. [You can also remove a column using Dataset.map() with remove_columns but the present method is in-place (doesn’t copy the data to a new dataset) and is thus faster.](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)\r\n3. [🤗 Datasets also has a remove_columns() function which is faster because it doesn’t copy the data of the remaining columns.](https://huggingface.co/docs/datasets/v2.17.1/en/process#map)",
"I've linked a PR that will fix the usage in the `datasets` docs."
] | 1970-01-01T00:00:00.000001 | 1,712 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The doc of `datasets` v2.17.0/v2.17.1 shows that `remove_columns` is in-place. [link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)
In the text classification example of transformers v4.38.1, the columns are not removed.
https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421
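For reference, a minimal illustration (with made-up column names) of the reported behavior: `remove_columns` returns a new dataset rather than modifying the original in place, so the result has to be reassigned:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds_without_label = ds.remove_columns(["label"])

print(ds.column_names)                # ['text', 'label'] -- original is unchanged
print(ds_without_label.column_names)  # ['text']
```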
### Steps to reproduce the bug
https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421
### Expected behavior
Actually remove the columns.
### Environment info
1. datasets v2.17.0
2. transformers v4.38.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArthurZucker",
"id": 48595927,
"login": "ArthurZucker",
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArthurZucker",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6700/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6699/comments | https://api.github.com/repos/huggingface/datasets/issues/6699/events | https://github.com/huggingface/datasets/issues/6699 | 2,158,152,341 | I_kwDODunzps6AosqV | 6,699 | `Dataset` unexpected changed dict data and may cause error | {
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/scruel",
"id": 16933298,
"login": "scruel",
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"repos_url": "https://api.github.com/users/scruel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/scruel",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"If `test.jsonl` contains more lines like:\r\n```\r\n{\"id\": 0, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 1, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 2, \"indexs\": {\"-2\": [0, 10]}}\r\n...\r\n{\"id\": n, \"indexs\": {\"-9999\": [0, 10]}}\r\n```\r\n\r\n`Dataset.from_json` will just raise an error:\r\n```\r\nAn error occurred while generating the dataset\r\nTypeError: Couldn't cast array of type\r\nstruct<-5942: list<item: int64>, -5943: list<item: int64>, -5944: list<item: int64>, -5945: list<item: int64>, -5946: list<item: int64>, -5947: list<item: int64>, -5948: list<item: int64>, -5949: list<item: int64>: ...\r\nto\r\n{... '-5312': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), '-5313': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/runpy.py\", line 198, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/runpy.py\", line 88, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py\", line 39, in <module>\r\n cli.main()\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 430, in main\r\n run()\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 284, in run_file\r\n runpy.run_path(target, run_name=\"__main__\")\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 321, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 135, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 124, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/scruel/Code/Python/Working/llm-memory/data_reader.py\", line 120, in <module>\r\n reader = SnippetReader(jsonl_path, npy_path)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/Code/Python/Working/llm-memory/data_reader.py\", line 85, in __init__\r\n self._dataset = Dataset.from_json(jsonl_path, features=)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/arrow_dataset.py\", line 1130, in from_json\r\n ).read()\r\n ^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/io/json.py\", line 59, in read\r\n self.builder.download_and_prepare(\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File 
\"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1860, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 2016, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```",
"Hi! Our JSON parser expects all examples/rows to share the same set of columns (applies to nested columns, too), hence the error. \r\n\r\nTo read the `index` column, we would have to manually cast the input to PyArrow's `pa.map_` type, but this requires a more thorough investigation, as `pa.map_` has limited support in PyArrow."
] | 1970-01-01T00:00:00.000001 | 1,709 | null | NONE | null | ### Describe the bug
You will unexpectedly get keys with `None` values in the parsed JSON dict.
### Steps to reproduce the bug
```jsonl test.jsonl
{"id": 0, "indexs": {"-1": [0, 10]}}
{"id": 1, "indexs": {"-1": [0, 10]}}
```
```python
dataset = Dataset.from_json('test.jsonl')
print(dataset[0])
```
Result:
```
{'id': 0, 'indexs': {'-1': [...], '-2': None, '-3': None, '-4': None, '-5': None, '-6': None, '-7': None, '-8': None, '-9': None, ...}}
```
These keys with `None` values unexpectedly appear in the dict.
### Expected behavior
Result should be
```
{'id': 0, 'indexs': {'-1': [0, 10]}}
```
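One possible workaround (not from the thread; the file path and column name follow the example above): serialize the ragged `indexs` dict to a JSON string before loading, so Arrow stores a plain string column instead of a struct holding the union of all keys seen across rows:
```python
import json

# Rewrite the JSONL so "indexs" is a JSON-encoded string per row.
with open("test.jsonl") as src, open("test_fixed.jsonl", "w") as dst:
    for line in src:
        row = json.loads(line)
        row["indexs"] = json.dumps(row["indexs"])
        dst.write(json.dumps(row) + "\n")

# Then load with Dataset.from_json("test_fixed.jsonl") and call
# json.loads(example["indexs"]) inside .map() when the dict is needed.
```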
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6699/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6699/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6697/comments | https://api.github.com/repos/huggingface/datasets/issues/6697/events | https://github.com/huggingface/datasets/issues/6697 | 2,157,322,224 | I_kwDODunzps6Alh_w | 6,697 | Unable to Load Dataset in Kaggle | {
"avatar_url": "https://avatars.githubusercontent.com/u/97465624?v=4",
"events_url": "https://api.github.com/users/vrunm/events{/privacy}",
"followers_url": "https://api.github.com/users/vrunm/followers",
"following_url": "https://api.github.com/users/vrunm/following{/other_user}",
"gists_url": "https://api.github.com/users/vrunm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vrunm",
"id": 97465624,
"login": "vrunm",
"node_id": "U_kgDOBc81GA",
"organizations_url": "https://api.github.com/users/vrunm/orgs",
"received_events_url": "https://api.github.com/users/vrunm/received_events",
"repos_url": "https://api.github.com/users/vrunm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vrunm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrunm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vrunm",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"FWIW, I run `load_dataset(\"llm-blender/mix-instruct\")` and it ran successfully.\r\nCan you clear your cache and try again?\r\n\r\n\r\n### Environment Info\r\n\r\n- `datasets` version: 2.17.0\r\n- Platform: Linux-6.2.6-76060206-generic-x86_64-with-glibc2.35\r\n- Python version: 3.9.13\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 1.5.3\r\n- `fsspec` version: 2023.10.0",
"It is working on the Kaggle GPU instance but gives this same error when running on the CPU instance. Still to run it on Kaggle you require to install the latest versions of datasets and transformers.",
"This error means that `fsspec>=2023.12.0` is installed, which is incompatible with the current releases (the next `datasets` release will be the first to support it). In the meantime, downgrading `fsspec` (`pip install fsspec<=2023.12.0`) should fix the issue.",
"@mariosasko Thanks I got it to work with installing that version of fsspec."
] | 1970-01-01T00:00:00.000001 | 1,709 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Having installed the latest versions of transformers==4.38.1 and datasets==2.17.1, I am unable to load the dataset in a Kaggle notebook.
I get this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[8], line 3
1 from datasets import load_dataset
----> 3 dataset = load_dataset("llm-blender/mix-instruct")
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1664, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1661 ignore_verifications = ignore_verifications or save_infos
1663 # Create a dataset builder
-> 1664 builder_instance = load_dataset_builder(
1665 path=path,
1666 name=name,
1667 data_dir=data_dir,
1668 data_files=data_files,
1669 cache_dir=cache_dir,
1670 features=features,
1671 download_config=download_config,
1672 download_mode=download_mode,
1673 revision=revision,
1674 use_auth_token=use_auth_token,
1675 **config_kwargs,
1676 )
1678 # Return iterable dataset in case of streaming
1679 if streaming:
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1490, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1488 download_config = download_config.copy() if download_config else DownloadConfig()
1489 download_config.use_auth_token = use_auth_token
-> 1490 dataset_module = dataset_module_factory(
1491 path,
1492 revision=revision,
1493 download_config=download_config,
1494 download_mode=download_mode,
1495 data_dir=data_dir,
1496 data_files=data_files,
1497 )
1499 # Get dataset builder class from the processing script
1500 builder_cls = import_main_class(dataset_module.module_path)
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1242, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1237 if isinstance(e1, FileNotFoundError):
1238 raise FileNotFoundError(
1239 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1240 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1241 ) from None
-> 1242 raise e1 from None
1243 else:
1244 raise FileNotFoundError(
1245 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory."
1246 )
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1230, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1215 return HubDatasetModuleFactoryWithScript(
1216 path,
1217 revision=revision,
(...)
1220 dynamic_modules_path=dynamic_modules_path,
1221 ).get_module()
1222 else:
1223 return HubDatasetModuleFactoryWithoutScript(
1224 path,
1225 revision=revision,
1226 data_dir=data_dir,
1227 data_files=data_files,
1228 download_config=download_config,
1229 download_mode=download_mode,
-> 1230 ).get_module()
1231 except Exception as e1: # noqa: all the attempts failed, before raising the error we should check if the module is already cached.
1232 try:
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:846, in HubDatasetModuleFactoryWithoutScript.get_module(self)
836 token = self.download_config.use_auth_token
837 hfh_dataset_info = HfApi(config.HF_ENDPOINT).dataset_info(
838 self.name,
839 revision=self.revision,
840 token=token,
841 timeout=100.0,
842 )
843 patterns = (
844 sanitize_patterns(self.data_files)
845 if self.data_files is not None
--> 846 else get_patterns_in_dataset_repository(hfh_dataset_info)
847 )
848 data_files = DataFilesDict.from_hf_repo(
849 patterns,
850 dataset_info=hfh_dataset_info,
851 allowed_extensions=ALL_ALLOWED_EXTENSIONS,
852 )
853 infered_module_names = {
854 key: infer_module_for_data_files(data_files_list, use_auth_token=self.download_config.use_auth_token)
855 for key, data_files_list in data_files.items()
856 }
File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:471, in get_patterns_in_dataset_repository(dataset_info)
469 resolver = partial(_resolve_single_pattern_in_dataset_repository, dataset_info)
470 try:
--> 471 return _get_data_files_patterns(resolver)
472 except FileNotFoundError:
473 raise FileNotFoundError(
474 f"The dataset repository at '{dataset_info.id}' doesn't contain any data file."
475 ) from None
File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:99, in _get_data_files_patterns(pattern_resolver)
97 try:
98 for pattern in patterns:
---> 99 data_files = pattern_resolver(pattern)
100 if len(data_files) > 0:
101 non_empty_splits.append(split)
File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:303, in _resolve_single_pattern_in_dataset_repository(dataset_info, pattern, allowed_extensions)
301 data_files_ignore = FILES_TO_IGNORE
302 fs = HfFileSystem(repo_info=dataset_info)
--> 303 glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)]
304 matched_paths = [
305 filepath
306 for filepath in glob_iter
307 if filepath.name not in data_files_ignore and not filepath.name.startswith(".")
308 ]
309 if allowed_extensions is not None:
File /opt/conda/lib/python3.10/site-packages/fsspec/spec.py:606, in AbstractFileSystem.glob(self, path, maxdepth, **kwargs)
602 depth = None
604 allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
--> 606 pattern = glob_translate(path + ("/" if ends_with_sep else ""))
607 pattern = re.compile(pattern)
609 out = {
610 p: info
611 for p, info in sorted(allpaths.items())
(...)
618 )
619 }
File /opt/conda/lib/python3.10/site-packages/fsspec/utils.py:734, in glob_translate(pat)
732 continue
733 elif "**" in part:
--> 734 raise ValueError(
735 "Invalid pattern: '**' can only be an entire path component"
736 )
737 if part:
738 results.extend(_translate(part, f"{not_sep}*", not_sep))
ValueError: Invalid pattern: '**' can only be an entire path component
```
This error appears when loading the dataset.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("llm-blender/mix-instruct")
```
### Expected behavior
The dataset should load with the desired split.
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
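A sketch of the workaround from the comments above: pin `fsspec` to a version the current `datasets` release supports, then retry the load (run in a fresh Kaggle session so the pinned version is picked up):
```python
# In a notebook cell, pin fsspec first (versions follow the comments above):
#   !pip install -q "datasets==2.17.1" "fsspec<=2023.10.0"

from datasets import load_dataset

dataset = load_dataset("llm-blender/mix-instruct")
print(dataset)
```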
| {
"avatar_url": "https://avatars.githubusercontent.com/u/97465624?v=4",
"events_url": "https://api.github.com/users/vrunm/events{/privacy}",
"followers_url": "https://api.github.com/users/vrunm/followers",
"following_url": "https://api.github.com/users/vrunm/following{/other_user}",
"gists_url": "https://api.github.com/users/vrunm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vrunm",
"id": 97465624,
"login": "vrunm",
"node_id": "U_kgDOBc81GA",
"organizations_url": "https://api.github.com/users/vrunm/orgs",
"received_events_url": "https://api.github.com/users/vrunm/received_events",
"repos_url": "https://api.github.com/users/vrunm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vrunm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrunm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vrunm",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6697/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6695/comments | https://api.github.com/repos/huggingface/datasets/issues/6695/events | https://github.com/huggingface/datasets/issues/6695 | 2,154,075,509 | I_kwDODunzps6AZJV1 | 6,695 | Support JSON file with an array of strings | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 has been fixed, but how can we check if there are other datasets with the same error, in datasets-server's database? I don't know how to get the list of erroneous cache entries, since we only copied `Error code: JobManagerCrashedError`, but not the traceback in `details`... Do you remember the error message, or the underlying exception, we had?"
] | 1970-01-01T00:00:00.000001 | 1,709 | 1970-01-01T00:00:00.000001 | MEMBER | null | Support loading a dataset from a JSON file with an array of strings.
See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6695/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6695/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6691/comments | https://api.github.com/repos/huggingface/datasets/issues/6691/events | https://github.com/huggingface/datasets/issues/6691 | 2,152,134,041 | I_kwDODunzps6ARvWZ | 6,691 | load_dataset() does not support tsv | {
"avatar_url": "https://avatars.githubusercontent.com/u/26873178?v=4",
"events_url": "https://api.github.com/users/dipsivenkatesh/events{/privacy}",
"followers_url": "https://api.github.com/users/dipsivenkatesh/followers",
"following_url": "https://api.github.com/users/dipsivenkatesh/following{/other_user}",
"gists_url": "https://api.github.com/users/dipsivenkatesh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dipsivenkatesh",
"id": 26873178,
"login": "dipsivenkatesh",
"node_id": "MDQ6VXNlcjI2ODczMTc4",
"organizations_url": "https://api.github.com/users/dipsivenkatesh/orgs",
"received_events_url": "https://api.github.com/users/dipsivenkatesh/received_events",
"repos_url": "https://api.github.com/users/dipsivenkatesh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dipsivenkatesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dipsivenkatesh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dipsivenkatesh",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4",
"events_url": "https://api.github.com/users/harsh1504660/events{/privacy}",
"followers_url": "https://api.github.com/users/harsh1504660/followers",
"following_url": "https://api.github.com/users/harsh1504660/following{/other_user}",
"gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harsh1504660",
"id": 77767961,
"login": "harsh1504660",
"node_id": "MDQ6VXNlcjc3NzY3OTYx",
"organizations_url": "https://api.github.com/users/harsh1504660/orgs",
"received_events_url": "https://api.github.com/users/harsh1504660/received_events",
"repos_url": "https://api.github.com/users/harsh1504660/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harsh1504660",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4",
"events_url": "https://api.github.com/users/harsh1504660/events{/privacy}",
"followers_url": "https://api.github.com/users/harsh1504660/followers",
"following_url": "https://api.github.com/users/harsh1504660/following{/other_user}",
"gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harsh1504660",
"id": 77767961,
"login": "harsh1504660",
"node_id": "MDQ6VXNlcjc3NzY3OTYx",
"organizations_url": "https://api.github.com/users/harsh1504660/orgs",
"received_events_url": "https://api.github.com/users/harsh1504660/received_events",
"repos_url": "https://api.github.com/users/harsh1504660/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harsh1504660",
"user_view_type": "public"
}
] | null | [
"#self-assign",
"Hi @dipsivenkatesh,\r\n\r\nPlease note that this functionality is already implemented. Our CSV builder uses `pandas.read_csv` under the hood, and you can pass the parameter `delimiter=\"\\t\"` to read TSV files.\r\n\r\nSee the list of CSV config parameters in our docs: https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.packaged_modules.csv.CsvConfig"
] | 1970-01-01T00:00:00.000001 | 1,708 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
The load_dataset() function supports local file types like CSV and JSON, but not TSV (tab-separated values).
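As the comment above notes, the existing CSV builder already handles this: its keyword arguments are forwarded to `pandas.read_csv`, so a `delimiter` override is enough. A minimal sketch (file path is a placeholder):
```python
from datasets import load_dataset

# TSV is loaded through the CSV builder by overriding the delimiter.
dataset = load_dataset("csv", data_files="data/train.tsv", delimiter="\t")
```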
### Motivation
Can't easily load TSV files; you have to convert them to another format like CSV and then load.
### Your contribution
I can try raising a PR with a little help; I went through the code but didn't fully understand it. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6691/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6691/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6690/comments | https://api.github.com/repos/huggingface/datasets/issues/6690/events | https://github.com/huggingface/datasets/issues/6690 | 2,150,800,065 | I_kwDODunzps6AMprB | 6,690 | Add function to convert a script-dataset to Parquet | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,712 | 1970-01-01T00:00:00.000001 | MEMBER | null | Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet" | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6690/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6690/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6689/comments | https://api.github.com/repos/huggingface/datasets/issues/6689/events | https://github.com/huggingface/datasets/issues/6689 | 2,149,581,147 | I_kwDODunzps6AIAFb | 6,689 | .load_dataset() method defaults to zstandard | {
"avatar_url": "https://avatars.githubusercontent.com/u/87243032?v=4",
"events_url": "https://api.github.com/users/ElleLeonne/events{/privacy}",
"followers_url": "https://api.github.com/users/ElleLeonne/followers",
"following_url": "https://api.github.com/users/ElleLeonne/following{/other_user}",
"gists_url": "https://api.github.com/users/ElleLeonne/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ElleLeonne",
"id": 87243032,
"login": "ElleLeonne",
"node_id": "MDQ6VXNlcjg3MjQzMDMy",
"organizations_url": "https://api.github.com/users/ElleLeonne/orgs",
"received_events_url": "https://api.github.com/users/ElleLeonne/received_events",
"repos_url": "https://api.github.com/users/ElleLeonne/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ElleLeonne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ElleLeonne/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ElleLeonne",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The dataset is made of JSON files compressed using zstandard, as you can see here: https://huggingface.co/datasets/cerebras/SlimPajama-627B/tree/main/test/chunk1\r\n\r\nThat's why it asks for zstandard to be installed.\r\n\r\nThough I'm intrigued that you manage to load the dataset without zstandard installed. Maybe `pyarrow` that we use to load JSON data under the hood got support for zstandard at one point.",
"> The dataset is made of JSON files compressed using zstandard, as you can see here: https://huggingface.co/datasets/cerebras/SlimPajama-627B/tree/main/test/chunk1\r\n> \r\n> That's why it asks for zstandard to be installed.\r\n> \r\n> Though I'm intrigued that you manage to load the dataset without zstandard installed. Maybe `pyarrow` that we use to load JSON data under the hood got support for zstandard at one point.\r\n\r\nQuestion, then.\r\n\r\nWhen I loaded this dataset back in October, it downloaded all the files, and then loaded into memory just fine.\r\n\r\nNOW, it has to sit there and unpack all these zstd files (3.6TB worth). Further, when they're in my harddrive, they're regular json files. It's only when looking at the LFS, or when the loading script runs, that I get asked to install zstd.\r\n\r\nMy question is, **is this normal?** As far as I can tell, there's no reason the dataset or the loading methods should have changed between then and now. Was my old behavior flawed, and the new behavior correct?\r\n\r\nI mean, I got it working eventually, but it was pulling teeth, and it still doesn't load right, as I had to unpack each chunk separately, so there's no clean mapping between the chunks and the broader dataset.",
"The `ZstdExtractor` has been added 3 years ago and we haven't touched it since then. Same for the JSON loader.\r\n\r\n`zstandard` is required as soon as you try to load a file with the `.zstd` extension or if a file starts with the Zstandard magic number `b\"\\x28\\xb5\\x2f\\xfd\"` (used to recognize Zstandard files).\r\n\r\nNote that the extraction only has to happen once - if you reload the dataset it will be reloaded from your cache directly.\r\n\r\nNot sure what happened between October and now unfortunately",
"Understood, thank you for clarifying that for me.\r\n\r\nI'll look into how best to collate my stack of batches w/o creating duplicate arrow tables for each one."
] | 1970-01-01T00:00:00.000001 | 1,709 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets.
This is poor behavior: not only is zstandard not a dependency of the huggingface package (so dataset loading is interrupted while it asks you to install the package), but it also happens on datasets that are uploaded in JSON format, meaning the dataset loader attempts to convert the data to a zstandard-compatible format and THEN tries to unpack it.
My 4 TB drive runs out of room when using zstandard on SlimPajama. It loads fine in 1.5 TB when using JSON; however, I lack an understanding of the "magic numbers" system used to select the unpacking algorithm, so I can't push a change myself.
Commenting out this line in "/datasets/utils/extract.py" fixes the issue and causes SlimPajama to extract properly using a reasonable amount of storage; however, it completely disables zstandard, which is probably undesirable behavior. Someone with an understanding of the "magic numbers" system should probably take a pass over this issue.
```
class Extractor:
    # Put zip file to the last, b/c it is possible wrongly detected as zip (I guess it means: as tar or gzip)
    extractors: Dict[str, Type[BaseExtractor]] = {
        "tar": TarExtractor,
        "gzip": GzipExtractor,
        "zip": ZipExtractor,
        "xz": XzExtractor,
        # "zstd": ZstdExtractor,  # This line needs to go, in order for datasets to work w/o non-dependent packages
        "rar": RarExtractor,
        "bz2": Bzip2Extractor,
        "7z": SevenZipExtractor,  # <Added version="2.4.0"/>
        "lz4": Lz4Extractor,  # <Added version="2.4.0"/>
    }
```
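For reference, a standalone sketch of the magic-number sniffing mentioned in the comments above (Zstandard frames start with the 4 bytes `0x28 0xB5 0x2F 0xFD`), independent of the `datasets` internals:
```python
ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"  # Zstandard frame magic number

def looks_like_zstd(path: str) -> bool:
    # Extractors of this kind sniff the first bytes rather than trusting the file extension.
    with open(path, "rb") as f:
        return f.read(len(ZSTD_MAGIC)) == ZSTD_MAGIC
```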
### Steps to reproduce the bug
```
from datasets import load_dataset
load_dataset(path="cerebras/SlimPajama-627B")
```
This alone should trigger the error on any system that does not have zstandard pip installed.
### Expected behavior
Loading this repository (which is encoded in JSON format, not zstandard) should check whether zstandard is installed before defaulting to it. Additionally, zstandard extraction should not use more than 3x the space that other extraction mechanisms need.
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
- Python version: 3.12.0
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/87243032?v=4",
"events_url": "https://api.github.com/users/ElleLeonne/events{/privacy}",
"followers_url": "https://api.github.com/users/ElleLeonne/followers",
"following_url": "https://api.github.com/users/ElleLeonne/following{/other_user}",
"gists_url": "https://api.github.com/users/ElleLeonne/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ElleLeonne",
"id": 87243032,
"login": "ElleLeonne",
"node_id": "MDQ6VXNlcjg3MjQzMDMy",
"organizations_url": "https://api.github.com/users/ElleLeonne/orgs",
"received_events_url": "https://api.github.com/users/ElleLeonne/received_events",
"repos_url": "https://api.github.com/users/ElleLeonne/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ElleLeonne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ElleLeonne/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ElleLeonne",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6689/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6689/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6688/comments | https://api.github.com/repos/huggingface/datasets/issues/6688/events | https://github.com/huggingface/datasets/issues/6688 | 2,148,609,859 | I_kwDODunzps6AES9D | 6,688 | Tensor type (e.g. from `return_tensors`) ignored in map | {
"avatar_url": "https://avatars.githubusercontent.com/u/11166137?v=4",
"events_url": "https://api.github.com/users/srossi93/events{/privacy}",
"followers_url": "https://api.github.com/users/srossi93/followers",
"following_url": "https://api.github.com/users/srossi93/following{/other_user}",
"gists_url": "https://api.github.com/users/srossi93/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srossi93",
"id": 11166137,
"login": "srossi93",
"node_id": "MDQ6VXNlcjExMTY2MTM3",
"organizations_url": "https://api.github.com/users/srossi93/orgs",
"received_events_url": "https://api.github.com/users/srossi93/received_events",
"repos_url": "https://api.github.com/users/srossi93/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srossi93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srossi93/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srossi93",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi, this is expected behavior since all the tensors are converted to Arrow data (the storage type behind a Dataset).\r\n\r\nTo get pytorch tensors back, you can set the dataset format to \"torch\":\r\n\r\n```python\r\nds = ds.with_format(\"torch\")\r\n```",
"Thanks. Just one additional question. During the pipeline `<framework> -> arrow -> <framework>`, does `.with_format` zero-copies the tensors or is it a deep copy? And is this behavior framework-dependent?\r\n\r\nThanks again.",
"We do zero-copy Arrow <-> NumPy <-> PyTorch when the output dtype matches the original dtype, but for other frameworks it depends. For example JAX doesn't allow zero-copy NumPy -> JAX at all IIRC.\r\n\r\nCurrently tokenized data are formatted using a copy though, since tokens are stored as int32 and returned as int64 torch tensors."
] | 1970-01-01T00:00:00.000001 | 1,708 | null | NONE | null | ### Describe the bug
I don't know if it is a bug or expected behavior, but the tensor type seems to be ignored after applying map. For example, mapping a transformers tokenizer over the text always returns lists and ignores the `return_tensors` argument.
If this is expected behaviour (e.g., for caching/Arrow compatibility/etc.) it should be clearly documented. For example, the current documentation (see [here](https://huggingface.co/docs/datasets/v2.17.1/en/nlp_process#map)) clearly states to "set `return_tensors="np"` when you tokenize your text" to get NumPy arrays.
### Steps to reproduce the bug
```py
# %%%
import datasets
import numpy as np
import tensorflow as tf
import torch
from transformers import AutoTokenizer
# %%
ds = datasets.load_dataset("cnn_dailymail", "1.0.0", split="train[:1%]")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
#%%
for return_tensors in [None, "np", "pt", "tf", "jax"]:
    print(f"********** no map, return_tensors={return_tensors} **********")
    _ds = tokenizer(ds["article"], return_tensors=return_tensors, truncation=True, padding=True)
    print('Type <input_ids>:', type(_ds["input_ids"]))
# %%
for return_tensors in [None, "np", "pt", "tf", "jax"]:
    print(f"********** map, return_tensors={return_tensors} **********")
    _ds = ds.map(
        lambda examples: tokenizer(examples["article"], return_tensors=return_tensors, truncation=True, padding=True),
        batched=True,
        remove_columns=["article"],
    )
    print('Type <input_ids>:', type(_ds[0]["input_ids"]))
```
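For completeness, a minimal sketch of the workaround pointed out in the comments — the mapped values are stored as Arrow data and come back as lists, so a tensor format has to be re-applied after `map` (this reuses `ds` and `tokenizer` from the snippet above):
```py
# Sketch: after map() the stored values are plain Python lists;
# with_format("torch") makes indexing return torch tensors again.
_ds = ds.map(
    lambda examples: tokenizer(examples["article"], truncation=True, padding=True),
    batched=True,
    remove_columns=["article"],
).with_format("torch")
print(type(_ds[0]["input_ids"]))  # expected: <class 'torch.Tensor'>
```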
### Expected behavior
The output from the script above is shown below; I would expect the second half to match the first.
```
********** no map, return_tensors=None **********
Type <input_ids>: <class 'list'>
********** no map, return_tensors=np **********
Type <input_ids>: <class 'numpy.ndarray'>
********** no map, return_tensors=pt **********
Type <input_ids>: <class 'torch.Tensor'>
********** no map, return_tensors=tf **********
Type <input_ids>: <class 'tensorflow.python.framework.ops.EagerTensor'>
********** no map, return_tensors=jax **********
Type <input_ids>: <class 'jaxlib.xla_extension.ArrayImpl'>
********** map, return_tensors=None **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=np **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=pt **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=tf **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=jax **********
Type <input_ids>: <class 'list'>
```
### Environment info
- `datasets` version: 2.17.1
- Platform: Redacted (linux)
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6688/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6686/comments | https://api.github.com/repos/huggingface/datasets/issues/6686/events | https://github.com/huggingface/datasets/issues/6686 | 2,147,795,103 | I_kwDODunzps6ABMCf | 6,686 | Question: Is there any way for uploading a large image dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/37367987?v=4",
"events_url": "https://api.github.com/users/zhjohnchan/events{/privacy}",
"followers_url": "https://api.github.com/users/zhjohnchan/followers",
"following_url": "https://api.github.com/users/zhjohnchan/following{/other_user}",
"gists_url": "https://api.github.com/users/zhjohnchan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhjohnchan",
"id": 37367987,
"login": "zhjohnchan",
"node_id": "MDQ6VXNlcjM3MzY3OTg3",
"organizations_url": "https://api.github.com/users/zhjohnchan/orgs",
"received_events_url": "https://api.github.com/users/zhjohnchan/received_events",
"repos_url": "https://api.github.com/users/zhjohnchan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhjohnchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhjohnchan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhjohnchan",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"```\r\nimport pandas as pd\r\nfrom datasets import Dataset, Image\r\n\r\n# Read the CSV file\r\ndata = pd.read_csv(\"XXXX.csv\")\r\n\r\n# Create a Hugging Face Dataset\r\ndataset = Dataset.from_pandas(data)\r\ndataset = dataset.cast_column(\"file_name\", Image())\r\n\r\n# Upload to Hugging Face Hub (make sure authentication is set up)\r\ndataset.push_to_hub(\"XXXXX\"\")\r\n```\r\n\r\nstuck in \"Casting the dataset\r\n\r\n\"\r\n"
] | 1970-01-01T00:00:00.000001 | 1,714 | null | NONE | null | I am uploading an image dataset like this:
```
dataset = load_dataset(
"json",
data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_size="1GB")
```
where it takes a long time in the `Map` process. Do you think I can use multi-processing to map all the image data into memory first? For the `Map()` function, I can set `num_proc`, but for `push_to_hub` and `cast_column` I cannot find an equivalent option.
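For illustration, a hedged sketch of the one knob that clearly exists for parallelism here — `num_proc` on `map()` — used to front-load per-example work before the single-process `cast_column`/`push_to_hub` steps (the `preprocess` function is a hypothetical placeholder, not part of the original setup):
```python
from datasets import Image, Sequence

def preprocess(example):
    # hypothetical placeholder for per-example work (validation, resizing paths, ...)
    return example

dataset = dataset.map(preprocess, num_proc=8)  # map() accepts num_proc
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_size="1GB")
```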
Thanks in advance!
Best, | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6686/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6679/comments | https://api.github.com/repos/huggingface/datasets/issues/6679/events | https://github.com/huggingface/datasets/issues/6679 | 2,141,953,981 | I_kwDODunzps5_q5-9 | 6,679 | Node.js 16 GitHub Actions are deprecated | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,709 | 1970-01-01T00:00:00.000001 | MEMBER | null | `Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
We should update them to Node 20.
See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678
> Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: actions/checkout@v3, actions/setup-python@v4. For more information see: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6679/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6679/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6676/comments | https://api.github.com/repos/huggingface/datasets/issues/6676/events | https://github.com/huggingface/datasets/issues/6676 | 2,140,648,619 | I_kwDODunzps5_l7Sr | 6,676 | Can't Read List of JSON Files Properly | {
"avatar_url": "https://avatars.githubusercontent.com/u/20232088?v=4",
"events_url": "https://api.github.com/users/lordsoffallen/events{/privacy}",
"followers_url": "https://api.github.com/users/lordsoffallen/followers",
"following_url": "https://api.github.com/users/lordsoffallen/following{/other_user}",
"gists_url": "https://api.github.com/users/lordsoffallen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordsoffallen",
"id": 20232088,
"login": "lordsoffallen",
"node_id": "MDQ6VXNlcjIwMjMyMDg4",
"organizations_url": "https://api.github.com/users/lordsoffallen/orgs",
"received_events_url": "https://api.github.com/users/lordsoffallen/received_events",
"repos_url": "https://api.github.com/users/lordsoffallen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordsoffallen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordsoffallen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordsoffallen",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Found the issue, if there are other files in the directory, it gets caught into this `*` so essentially it should be `*.json`. Could we possibly to check for list of files to make sure the pattern matches json files and raise error if not?",
"I don't think we should filter for `*.json` as this might silently remove desired files for many users. And this could be a major breaking change for many organizations.\r\n\r\nYou could do the globbing yourself which would keep the code clean.\r\n\r\n```python\r\nfrom glob import glob\r\n\r\nDataset.from_json(glob('folder/*.json'))\r\n```",
"I think it should still be fine to log a warning message in case the folder contains different files? I also don't get why would this be breaking as in the end using `from_FILE_TYPE` should be able to read a specific file type only. Maybe some other use case I am not aware of but since globbing or this case not mentioned anywhere in the doc, I spent quite a bit of time trying to figure out where the issue was. Just making sure it's clear for users."
] | 1970-01-01T00:00:00.000001 | 1,709 | null | NONE | null | ### Describe the bug
Trying to read a bunch of JSON files into the Dataset class, but the default approach doesn't work. I don't get why it works when I read them one by one but not when I pass them all at once :man_shrugging:
The code fails with
```
ArrowInvalid: JSON parse error: Invalid value. in row 0
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
This doesn't work
```
from datasets import Dataset
# dir contains 100 json files.
Dataset.from_json("/PUT SOME PATH HERE/*")
```
This works:
```
from datasets import concatenate_datasets
ls_ds = []
for file in list_of_json_files:
    ls_ds.append(Dataset.from_json(file))
ds = concatenate_datasets(ls_ds)
```
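For reference, a hedged sketch combining the glob workaround from the comments with an alternative that is not from this thread — letting the `json` loader resolve the pattern via `data_files`:
```python
from glob import glob
from datasets import Dataset, load_dataset

# Option 1: expand the glob yourself so only .json files are passed
ds = Dataset.from_json(sorted(glob("/PUT SOME PATH HERE/*.json")))

# Option 2: let the json builder resolve the glob pattern in data_files
ds = load_dataset("json", data_files="/PUT SOME PATH HERE/*.json", split="train")
```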
### Expected behavior
I expect this to read the JSON files properly; as it stands, the error message is not clear.
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6676/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6676/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6675/comments | https://api.github.com/repos/huggingface/datasets/issues/6675/events | https://github.com/huggingface/datasets/issues/6675 | 2,139,640,381 | I_kwDODunzps5_iFI9 | 6,675 | Allow image model (color conversion) to be specified as part of datasets Image() decode | {
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"events_url": "https://api.github.com/users/rwightman/events{/privacy}",
"followers_url": "https://api.github.com/users/rwightman/followers",
"following_url": "https://api.github.com/users/rwightman/following{/other_user}",
"gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwightman",
"id": 5702664,
"login": "rwightman",
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"organizations_url": "https://api.github.com/users/rwightman/orgs",
"received_events_url": "https://api.github.com/users/rwightman/received_events",
"repos_url": "https://api.github.com/users/rwightman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwightman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwightman",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"It would be a great addition indeed :)\r\n\r\nThis can be implemented the same way we have `sampling_rate` for Audio(): we just add a new parameter to the Image() type and take this parameter into account in `Image.decode_example`\r\n\r\nEDIT: adding an example of how it can be used:\r\n\r\n```python\r\nds = ds.cast_column(\"image\", Image(mode=...))\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,710 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image, where convert is usually called in the dataset; for native torchvision (see https://pytorch.org/vision/main/generated/torchvision.io.decode_jpeg.html); and similarly in tensorflow.data pipelines, where decode_jpeg or https://www.tensorflow.org/api_docs/python/tf/io/decode_and_crop_jpeg have a channels arg that allows controlling the image mode in the decode step.
datasets currently requires this pattern (from [examples](https://huggingface.co/docs/datasets/main/en/image_process)):
```
from torchvision.transforms import Compose, ColorJitter, ToTensor
jitter = Compose(
    [
        ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.7),
        ToTensor(),
    ]
)

def transforms(examples):
    examples["pixel_values"] = [jitter(image.convert("RGB")) for image in examples["image"]]
    return examples
```
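For contrast, a hedged sketch of how the same pipeline could look if the decode step accepted a mode (the `Image(mode=...)` parameter is an assumption based on the comments and may not exist in older `datasets` versions):
```python
from datasets import Image

ds = ds.cast_column("image", Image(mode="RGB"))  # assumed decode-mode parameter

def transforms(examples):
    # no per-image .convert("RGB") needed in the transform stack anymore
    examples["pixel_values"] = [jitter(image) for image in examples["image"]]
    return examples
```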
### Motivation
It would be nice to be able to handle `image.convert("RGB")` (or other modes) in the decode step, before applying torchvision transforms. This would reduce code differences when handling pipelines built on torchvision, webdataset, or HF datasets, and avoid passing an image-mode argument in two different stages of the pipeline...
### Your contribution
Can do a PR with guidance on how mode should be passed / set on the dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6675/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6675/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6674/comments | https://api.github.com/repos/huggingface/datasets/issues/6674/events | https://github.com/huggingface/datasets/issues/6674 | 2,139,595,576 | I_kwDODunzps5_h6M4 | 6,674 | Depprcated Overview.ipynb Link to new Quickstart Notebook invalid | {
"avatar_url": "https://avatars.githubusercontent.com/u/55932554?v=4",
"events_url": "https://api.github.com/users/Codeblockz/events{/privacy}",
"followers_url": "https://api.github.com/users/Codeblockz/followers",
"following_url": "https://api.github.com/users/Codeblockz/following{/other_user}",
"gists_url": "https://api.github.com/users/Codeblockz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Codeblockz",
"id": 55932554,
"login": "Codeblockz",
"node_id": "MDQ6VXNlcjU1OTMyNTU0",
"organizations_url": "https://api.github.com/users/Codeblockz/orgs",
"received_events_url": "https://api.github.com/users/Codeblockz/received_events",
"repos_url": "https://api.github.com/users/Codeblockz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Codeblockz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Codeblockz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Codeblockz",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Good catch! Feel free to open a PR to fix the link."
] | 1970-01-01T00:00:00.000001 | 1,708 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
For the deprecated notebook found [here](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb), the link to the new notebook is broken.
### Steps to reproduce the bug
Click the [Quickstart notebook](https://github.com/huggingface/notebooks/blob/main/datasets_doc/quickstart.ipynb) link in the notebook.
### Expected behavior
I believe it is supposed to link [here](https://github.com/huggingface/notebooks/blob/main/datasets_doc/en/quickstart.ipynb), as mentioned in the README.
### Environment info
Colab | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6674/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6674/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6673/comments | https://api.github.com/repos/huggingface/datasets/issues/6673/events | https://github.com/huggingface/datasets/issues/6673 | 2,139,522,827 | I_kwDODunzps5_hocL | 6,673 | IterableDataset `set_epoch` is ignored when DataLoader `persistent_workers=True` | {
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"events_url": "https://api.github.com/users/rwightman/events{/privacy}",
"followers_url": "https://api.github.com/users/rwightman/followers",
"following_url": "https://api.github.com/users/rwightman/following{/other_user}",
"gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwightman",
"id": 5702664,
"login": "rwightman",
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"organizations_url": "https://api.github.com/users/rwightman/orgs",
"received_events_url": "https://api.github.com/users/rwightman/received_events",
"repos_url": "https://api.github.com/users/rwightman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwightman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwightman",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When persistent workers are enabled, the epoch that's set via the IterableDataset instance held by the training process is ignored by the workers as they are disconnected across processes.
PyTorch samplers for non-iterable datasets have a mechanism to sync this; datasets.IterableDataset does not.
In my own use of IterableDatasets I usually track the epoch count, which crosses process boundaries, in a `multiprocessing.Value`.
### Steps to reproduce the bug
Use a streaming dataset (Iterable) w/ the recommended pattern below and `persistent_workers=True` in the torch DataLoader.
```
for epoch in range(epochs):
    shuffled_dataset.set_epoch(epoch)
    for example in shuffled_dataset:
        ...
```
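A hedged sketch of a workaround consistent with the report — keep workers non-persistent and rebuild the `DataLoader` each epoch, so freshly spawned workers capture the epoch set just before iteration (`epochs`, the batch size, and the worker count are illustrative):
```python
from torch.utils.data import DataLoader

for epoch in range(epochs):
    shuffled_dataset.set_epoch(epoch)
    # Recreating the loader with persistent_workers=False respawns workers,
    # so they see the epoch set above instead of a stale value.
    dataloader = DataLoader(shuffled_dataset, batch_size=32, num_workers=4,
                            persistent_workers=False)
    for batch in dataloader:
        ...
```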
### Expected behavior
When the canonical bit of code above is used with `num_workers > 0` and `persistent_workers=True`, the epoch set via `set_epoch()` is propagated to the IterableDataset instances in the worker processes
### Environment info
N/A | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6673/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6673/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6671/comments | https://api.github.com/repos/huggingface/datasets/issues/6671/events | https://github.com/huggingface/datasets/issues/6671 | 2,138,727,870 | I_kwDODunzps5_emW- | 6,671 | CSV builder raises deprecation warning on verbose parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,708 | 1970-01-01T00:00:00.000001 | MEMBER | null | CSV builder raises a deprecation warning on `verbose` parameter:
```
FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version.
```
See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6671/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6671/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6670/comments | https://api.github.com/repos/huggingface/datasets/issues/6670/events | https://github.com/huggingface/datasets/issues/6670 | 2,138,372,958 | I_kwDODunzps5_dPte | 6,670 | ValueError | {
"avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4",
"events_url": "https://api.github.com/users/prashanth19bolukonda/events{/privacy}",
"followers_url": "https://api.github.com/users/prashanth19bolukonda/followers",
"following_url": "https://api.github.com/users/prashanth19bolukonda/following{/other_user}",
"gists_url": "https://api.github.com/users/prashanth19bolukonda/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prashanth19bolukonda",
"id": 112316000,
"login": "prashanth19bolukonda",
"node_id": "U_kgDOBrHOYA",
"organizations_url": "https://api.github.com/users/prashanth19bolukonda/orgs",
"received_events_url": "https://api.github.com/users/prashanth19bolukonda/received_events",
"repos_url": "https://api.github.com/users/prashanth19bolukonda/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prashanth19bolukonda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prashanth19bolukonda/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prashanth19bolukonda",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi @prashanth19bolukonda,\r\n\r\nYou have to restart the notebook runtime session after the installation of `datasets`.\r\n\r\nDuplicate of:\r\n- #5923",
"Thank you soo much\r\n\r\nOn Fri, Feb 16, 2024 at 8:14 PM Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> Closed #6670 <https://github.com/huggingface/datasets/issues/6670> as\r\n> completed.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6670#event-11829788289>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A2Y44YDQOBUFUWMR4C5O3QTYT5WDJAVCNFSM6AAAAABDL24S5SVHI2DSMVQWIX3LMV45UABCJFZXG5LFIV3GK3TUJZXXI2LGNFRWC5DJN5XDWMJRHAZDSNZYHAZDQOI>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | 1970-01-01T00:00:00.000001 | 1,708 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
ValueError Traceback (most recent call last)
[<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>()
9 import numpy as np
10 import matplotlib.pyplot as plt
---> 11 from datasets import DatasetDict, Dataset
12 from transformers import AutoTokenizer, AutoModelForSequenceClassification
13 from transformers import Trainer, TrainingArguments
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
16 __version__ = "2.17.0"
17
---> 18 from .arrow_dataset import Dataset
19 from .arrow_reader import ReadInstruction
20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
65
66 from . import config
---> 67 from .arrow_reader import ArrowReader
68 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
69 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
27
28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
30 from tqdm.contrib.concurrent import thread_map
31
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module>
18 # flake8: noqa
19
---> 20 from .core import *
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module>
34 import pyarrow as pa
35 import pyarrow.lib as lib
---> 36 import pyarrow._parquet as _parquet
37
38 from pyarrow._parquet import (ParquetReader, Statistics, # noqa
/usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet()
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Expected behavior
Resolve the binary incompatibility
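A hedged sketch of the resolution pointed to in the comments — reinstall so the `datasets` and `pyarrow` binaries match, then restart the runtime before importing again (exact versions are not prescribed here):
```python
# In a fresh Colab cell (assumption: the usual notebook pip magic is available):
#   !pip install -U datasets pyarrow
# then Runtime -> Restart runtime, and only afterwards:
import pyarrow, datasets
print(pyarrow.__version__, datasets.__version__)  # sanity check after the restart
```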
### Environment info
Google Colab notebook | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6670/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6670/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6669/comments | https://api.github.com/repos/huggingface/datasets/issues/6669/events | https://github.com/huggingface/datasets/issues/6669 | 2,138,322,662 | I_kwDODunzps5_dDbm | 6,669 | attribute error when writing trainer.train() | {
"avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4",
"events_url": "https://api.github.com/users/prashanth19bolukonda/events{/privacy}",
"followers_url": "https://api.github.com/users/prashanth19bolukonda/followers",
"following_url": "https://api.github.com/users/prashanth19bolukonda/following{/other_user}",
"gists_url": "https://api.github.com/users/prashanth19bolukonda/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prashanth19bolukonda",
"id": 112316000,
"login": "prashanth19bolukonda",
"node_id": "U_kgDOBrHOYA",
"organizations_url": "https://api.github.com/users/prashanth19bolukonda/orgs",
"received_events_url": "https://api.github.com/users/prashanth19bolukonda/received_events",
"repos_url": "https://api.github.com/users/prashanth19bolukonda/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prashanth19bolukonda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prashanth19bolukonda/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prashanth19bolukonda",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! Kaggle notebooks use an outdated version of `datasets`, so you should update the `datasets` installation (with `!pip install -U datasets`) to avoid the error.",
"Thank you for your response\r\n\r\nOn Thu, Feb 29, 2024 at 10:55 PM Mario Šaško ***@***.***>\r\nwrote:\r\n\r\n> Closed #6669 <https://github.com/huggingface/datasets/issues/6669> as\r\n> completed.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6669#event-11969246964>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A2Y44YG2RRVMYONNKPLBVE3YV5SAPAVCNFSM6AAAAABDLZ3BTSVHI2DSMVQWIX3LMV45UABCJFZXG5LFIV3GK3TUJZXXI2LGNFRWC5DJN5XDWMJRHE3DSMRUGY4TMNA>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | 1970-01-01T00:00:00.000001 | 1,709 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
AttributeError Traceback (most recent call last)
Cell In[39], line 2
1 # Start the training process
----> 2 trainer.train()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1537 hf_hub_utils.enable_progress_bars()
1538 else:
-> 1539 return inner_training_loop(
1540 args=args,
1541 resume_from_checkpoint=resume_from_checkpoint,
1542 trial=trial,
1543 ignore_keys_for_eval=ignore_keys_for_eval,
1544 )
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1836, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1833 rng_to_sync = True
1835 step = -1
-> 1836 for step, inputs in enumerate(epoch_iterator):
1837 total_batched_samples += 1
1839 if self.args.include_num_input_tokens_seen:
File /opt/conda/lib/python3.10/site-packages/accelerate/data_loader.py:451, in DataLoaderShard.__iter__(self)
449 # We iterate one batch ahead to check when we are at the end
450 try:
--> 451 current_batch = next(dataloader_iter)
452 except StopIteration:
453 yield
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self)
627 if self._sampler_iter is None:
628 # TODO([https://github.com/pytorch/pytorch/issues/76750)](https://github.com/pytorch/pytorch/issues/76750)%3C/span%3E)
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
631 self._num_yielded += 1
632 if self._dataset_kind == _DatasetKind.Iterable and \
633 self._IterableDataset_len_called is not None and \
634 self._num_yielded > self._IterableDataset_len_called:
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:674, in _SingleProcessDataLoaderIter._next_data(self)
672 def _next_data(self):
673 index = self._next_index() # may raise StopIteration
--> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
675 if self._pin_memory:
676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
53 data = self.dataset[possibly_batched_index]
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in <listcomp>(.0)
49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
53 data = self.dataset[possibly_batched_index]
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1764, in Dataset.__getitem__(self, key)
1762 def __getitem__(self, key): # noqa: F811
1763 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 1764 return self._getitem(
1765 key,
1766 )
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1749, in Dataset._getitem(self, key, decoded, **kwargs)
1747 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
1748 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 1749 formatted_output = format_table(
1750 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1751 )
1752 return formatted_output
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:540, in format_table(table, key, formatter, format_columns, output_all_columns)
538 else:
539 pa_table_to_format = pa_table.drop(col for col in pa_table.column_names if col not in format_columns)
--> 540 formatted_output = formatter(pa_table_to_format, query_type=query_type)
541 if output_all_columns:
542 if isinstance(formatted_output, MutableMapping):
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:57, in TorchFormatter.format_row(self, pa_table)
56 def format_row(self, pa_table: pa.Table) -> dict:
---> 57 row = self.numpy_arrow_extractor().extract_row(pa_table)
58 return self.recursive_tensorize(row)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:154, in NumpyArrowExtractor.extract_row(self, pa_table)
153 def extract_row(self, pa_table: pa.Table) -> dict:
--> 154 return _unnest(self.extract_batch(pa_table))
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in NumpyArrowExtractor.extract_batch(self, pa_table)
159 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in <dictcomp>(.0)
159 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:196, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
195 if len(array) > 0:
--> 196 if any(
197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
198 or (isinstance(x, float) and np.isnan(x))
199 for x in array
200 ):
201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
202 return np.array(array, copy=False, **self.np_array_kwargs)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:197, in <genexpr>(.0)
194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
195 if len(array) > 0:
196 if any(
--> 197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
198 or (isinstance(x, float) and np.isnan(x))
199 for x in array
200 ):
201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
202 return np.array(array, copy=False, **self.np_array_kwargs)
File /opt/conda/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr)
319 warnings.warn(
320 f"In the future `np.{attr}` will be defined as the "
321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
323 if attr in __former_attrs__:
--> 324 raise AttributeError(__former_attrs__[attr])
326 if attr == 'testing':
327 import numpy.testing as testing
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
Please help me to resolve the above error
### Steps to reproduce the bug
Please resolve the issue of the deprecated `np.object` by replacing it with the builtin `object` in NumPy.
### Expected behavior
`np.object` should be replaced with the builtin `object`.
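For illustration, a hedged sketch of what NumPy's message asks for — the removed `np.object` alias is just the builtin `object` — together with the fix from the comments (updating `datasets` on Kaggle):
```python
import numpy as np

# np.object was removed in NumPy 1.24; the builtin object works everywhere:
ragged = np.array([[1, 2], [3]], dtype=object)
print(ragged.dtype)  # object

# Per the comments, the actual fix in the notebook is upgrading the library:
#   !pip install -U datasets   (then restart the kernel)
```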
### Environment info
kaggle notebook | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6669/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6669/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6668/comments | https://api.github.com/repos/huggingface/datasets/issues/6668/events | https://github.com/huggingface/datasets/issues/6668 | 2,137,859,935 | I_kwDODunzps5_bSdf | 6,668 | Chapter 6 - Issue Loading `cnn_dailymail` dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/34660389?v=4",
"events_url": "https://api.github.com/users/hariravichandran/events{/privacy}",
"followers_url": "https://api.github.com/users/hariravichandran/followers",
"following_url": "https://api.github.com/users/hariravichandran/following{/other_user}",
"gists_url": "https://api.github.com/users/hariravichandran/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hariravichandran",
"id": 34660389,
"login": "hariravichandran",
"node_id": "MDQ6VXNlcjM0NjYwMzg5",
"organizations_url": "https://api.github.com/users/hariravichandran/orgs",
"received_events_url": "https://api.github.com/users/hariravichandran/received_events",
"repos_url": "https://api.github.com/users/hariravichandran/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hariravichandran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hariravichandran/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hariravichandran",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,708 | null | NONE | null | ### Describe the bug
So I am getting this bug when I try to run cell 4 of the Chapter 6 notebook code:
`dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")`
Error Message:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[4], line 4
1 #hide_output
2 from datasets import load_dataset
----> 4 dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")
7 # dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0", trust_remote_code=True)
8 print(f"Features: {dataset['train'].column_names}")
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2583 # Build dataset for splits
2584 keep_in_memory = (
2585 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2586 )
-> 2587 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
2588 # Rename and cast features to match task schema
2589 if task is not None:
2590 # To avoid issuing the same warning twice
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1244, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1241 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS)
1243 # Create a dataset for each of the given splits
-> 1244 datasets = map_nested(
1245 partial(
1246 self._build_single_dataset,
1247 run_post_process=run_post_process,
1248 verification_mode=verification_mode,
1249 in_memory=in_memory,
1250 ),
1251 split,
1252 map_tuple=True,
1253 disable_tqdm=True,
1254 )
1255 if isinstance(datasets, dict):
1256 datasets = DatasetDict(datasets)
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:477, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)
466 mapped = [
467 map_nested(
468 function=function,
(...)
474 for obj in iterable
475 ]
476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:
--> 477 mapped = [
478 _single_map_nested((function, obj, types, None, True, None))
479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
480 ]
481 else:
482 with warnings.catch_warnings():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:478, in <listcomp>(.0)
466 mapped = [
467 map_nested(
468 function=function,
(...)
474 for obj in iterable
475 ]
476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:
477 mapped = [
--> 478 _single_map_nested((function, obj, types, None, True, None))
479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
480 ]
481 else:
482 with warnings.catch_warnings():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:370, in _single_map_nested(args)
368 # Singleton first to spare some computation
369 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 370 return function(data_struct)
372 # Reduce logging to keep things readable in multiprocessing with tqdm
373 if rank is not None and logging.get_verbosity() < logging.WARNING:
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1274, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory)
1271 split = Split(split)
1273 # Build base dataset
-> 1274 ds = self._as_dataset(
1275 split=split,
1276 in_memory=in_memory,
1277 )
1278 if run_post_process:
1279 for resource_file_name in self._post_processing_resources(split).values():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1348, in DatasetBuilder._as_dataset(self, split, in_memory)
1346 if self._check_legacy_cache():
1347 dataset_name = self.name
-> 1348 dataset_kwargs = ArrowReader(cache_dir, self.info).read(
1349 name=dataset_name,
1350 instructions=split,
1351 split_infos=self.info.splits.values(),
1352 in_memory=in_memory,
1353 )
1354 fingerprint = self._get_dataset_fingerprint(split)
1355 return Dataset(fingerprint=fingerprint, **dataset_kwargs)
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\arrow_reader.py:254, in BaseReader.read(self, name, instructions, split_infos, in_memory)
252 if not files:
253 msg = f'Instruction "{instructions}" corresponds to no data!'
--> 254 raise ValueError(msg)
255 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
**ValueError: Instruction "validation" corresponds to no data!**
````
Looks like the data is not being loaded. Any advice would be appreciated. Thanks!
### Steps to reproduce the bug
Run all cells of Chapter 6 notebook.
### Expected behavior
Data should load correctly without any errors.
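A hedged workaround sketch rather than a confirmed fix: the "corresponds to no data" error may point at an incomplete cached copy, so forcing a fresh download is worth trying. The call below mirrors the notebook's arguments and only adds the standard `download_mode` parameter.
```python
# assumption: re-downloading rebuilds the missing validation split in the cache
from datasets import load_dataset

dataset = load_dataset(
    "ccdv/cnn_dailymail",
    version="3.0.0",
    download_mode="force_redownload",
)
print(f"Features: {dataset['train'].column_names}")
```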
### Environment info
- `datasets` version: 2.17.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.18
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6668/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6668/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6667/comments | https://api.github.com/repos/huggingface/datasets/issues/6667/events | https://github.com/huggingface/datasets/issues/6667 | 2,137,769,552 | I_kwDODunzps5_a8ZQ | 6,667 | Default config for squad is incorrect | {
"avatar_url": "https://avatars.githubusercontent.com/u/22651617?v=4",
"events_url": "https://api.github.com/users/kiddyboots216/events{/privacy}",
"followers_url": "https://api.github.com/users/kiddyboots216/followers",
"following_url": "https://api.github.com/users/kiddyboots216/following{/other_user}",
"gists_url": "https://api.github.com/users/kiddyboots216/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kiddyboots216",
"id": 22651617,
"login": "kiddyboots216",
"node_id": "MDQ6VXNlcjIyNjUxNjE3",
"organizations_url": "https://api.github.com/users/kiddyboots216/orgs",
"received_events_url": "https://api.github.com/users/kiddyboots216/received_events",
"repos_url": "https://api.github.com/users/kiddyboots216/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kiddyboots216/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiddyboots216/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kiddyboots216",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"you can try: pip install datasets==2.16.1"
] | 1970-01-01T00:00:00.000001 | 1,708 | null | NONE | null | ### Describe the bug
If you download SQuAD, it downloads the "plain_text" config, but the builder's config_id still says "default". If you then enable offline mode, the cache lookup is done under "default" and fails with:
ValueError: Couldn't find cache for squad for config 'default'
Available configs in the cache: ['plain_text']
### Steps to reproduce the bug
1. export HF_DATASETS_OFFLINE=0
2. load_dataset("squad")
3. export HF_DATASETS_OFFLINE=1
4. load_dataset("squad")
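The steps above as a single sketch (each half must run in its own Python process, because `HF_DATASETS_OFFLINE` is read when `datasets` is imported). Explicitly requesting the cached config, e.g. `load_dataset("squad", "plain_text")`, may work around it, but the config-name mismatch itself still looks like the real bug.
```python
# sketch of the repro; run the two halves as separate processes
import os

# process 1: online, populates the cache (saved under config "plain_text")
os.environ["HF_DATASETS_OFFLINE"] = "0"
from datasets import load_dataset
load_dataset("squad")

# process 2: offline, the lookup is done under config_id "default" and fails
# os.environ["HF_DATASETS_OFFLINE"] = "1"
# from datasets import load_dataset
# load_dataset("squad")  # ValueError: Couldn't find cache for squad for config 'default'
```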
### Expected behavior
We should change the config_name I guess?
### Environment info
linux, latest version of datasets | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6667/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6667/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6663/comments | https://api.github.com/repos/huggingface/datasets/issues/6663/events | https://github.com/huggingface/datasets/issues/6663 | 2,135,480,811 | I_kwDODunzps5_SNnr | 6,663 | `write_examples_on_file` and `write_batch` are broken in `ArrowWriter` | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.",
"> Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.\r\n\r\nI feel that'd be good, but it'd be great to release a hotfix ASAP (a revert is a fast thing to do) so people can continue using this library and then focus on still applying the improvement.",
"Fixed by #6664 "
] | 1970-01-01T00:00:00.000001 | 1,708 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636: the order of the columns and the schema is no longer kept in sync, so these functions fail unless the batch's column order happens to match the schema.
### Steps to reproduce the bug
Try to do `write_batch` with anything that has many columns whose order differs from the schema, and it's likely to break.
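A minimal sketch of the kind of call that can hit the mismatch. The column names, values and output path are made up; the point is only that the batch's key order differs from the schema order.
```python
# hypothetical repro sketch: batch keys deliberately not in schema order
from datasets import Features, Value
from datasets.arrow_writer import ArrowWriter

features = Features({"a": Value("int64"), "b": Value("string"), "c": Value("float64")})
writer = ArrowWriter(features=features, path="tmp.arrow")
writer.write_batch({"c": [0.5, 1.5], "a": [1, 2], "b": ["x", "y"]})  # expected to raise the miscast error on 2.17.0
writer.finalize()
```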
### Expected behavior
I expect these functions to work, instead of it trying to cast a column to its incorrect type.
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-5.15.0-1040-aws-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6663/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6663/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6661/comments | https://api.github.com/repos/huggingface/datasets/issues/6661/events | https://github.com/huggingface/datasets/issues/6661 | 2,132,296,267 | I_kwDODunzps5_GEJL | 6,661 | Import error on Google Colab | {
"avatar_url": "https://avatars.githubusercontent.com/u/16103566?v=4",
"events_url": "https://api.github.com/users/kithogue/events{/privacy}",
"followers_url": "https://api.github.com/users/kithogue/followers",
"following_url": "https://api.github.com/users/kithogue/following{/other_user}",
"gists_url": "https://api.github.com/users/kithogue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kithogue",
"id": 16103566,
"login": "kithogue",
"node_id": "MDQ6VXNlcjE2MTAzNTY2",
"organizations_url": "https://api.github.com/users/kithogue/orgs",
"received_events_url": "https://api.github.com/users/kithogue/received_events",
"repos_url": "https://api.github.com/users/kithogue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kithogue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kithogue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kithogue",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! This can happen if an incompatible `pyarrow` version (`pyarrow<12.0.0`) has been imported before the `datasets` installation and the Colab session hasn't been restarted afterward. To avoid the error, go to \"Runtime -> Restart session\" after `!pip install -U datasets` and before `import datasets`, or insert the `import os; os.kill(os.getpid(), 9)` cell between `!pip install -U datasets` and `import datasets` to do the same programmatically.",
"One possible cause might be the one pointed out by @mariosasko above, and you get the following warning on Colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\n\r\nOn the other hand, if the old version of `pyarrow` is not previously imported (before the installation of `datasets`), the reported issue here is not reproducible: `datasets` can be installed, imported and used on Colab.",
"Duplicate of:\r\n- #5923",
"Google Colab now pre-installs PyArrow 14.0.2, making this issue unlikely to happen. So, I'm unpinning it."
] | 1970-01-01T00:00:00.000001 | 1,708 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
`datasets` cannot be imported on Google Colab; the import throws the following error:
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
1. `! pip install -U datasets`
2. `import datasets`
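A sketch of the workaround suggested in the comments, written as Colab cells: upgrade, kill the runtime so the stale `pyarrow` import is dropped, then import `datasets` in a fresh session.
```python
# cell 1: upgrade the package
import os
os.system("pip install -U datasets")

# cell 2: restart the runtime programmatically (state is lost on purpose)
os.kill(os.getpid(), 9)

# cell 3 (new session):
# import datasets
```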
### Expected behavior
Should be possible to use the library
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6661/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6661/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6657/comments | https://api.github.com/repos/huggingface/datasets/issues/6657/events | https://github.com/huggingface/datasets/issues/6657 | 2,129,147,085 | I_kwDODunzps5-6DTN | 6,657 | Release not pushed to conda channel | {
"avatar_url": "https://avatars.githubusercontent.com/u/7138162?v=4",
"events_url": "https://api.github.com/users/atulsaurav/events{/privacy}",
"followers_url": "https://api.github.com/users/atulsaurav/followers",
"following_url": "https://api.github.com/users/atulsaurav/following{/other_user}",
"gists_url": "https://api.github.com/users/atulsaurav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/atulsaurav",
"id": 7138162,
"login": "atulsaurav",
"node_id": "MDQ6VXNlcjcxMzgxNjI=",
"organizations_url": "https://api.github.com/users/atulsaurav/orgs",
"received_events_url": "https://api.github.com/users/atulsaurav/received_events",
"repos_url": "https://api.github.com/users/atulsaurav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/atulsaurav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atulsaurav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/atulsaurav",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Thanks for reporting, @atulsaurav.\r\n\r\nWe are investigating the issue. ",
"I can't fix this issue because I do not appear as a team member of the huggingface datasets project: https://anaconda.org/huggingface/datasets\r\n\r\n@lhoestq could you please add `datasets` team members to the corresponding Anaconda project?\r\n\r\nOnce this done, I could recreate and update the Anaconda token, as mentioned above it seems the current one has expired.",
"I think @LysandreJik has access ?",
"FYI it failed for 2.18.0 too: https://github.com/huggingface/datasets/actions/runs/8117132330/job/22188677936",
"We updated the token and I re-ran the conda releases :)"
] | 1970-01-01T00:00:00.000001 | 1,709 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The GitHub Actions step that publishes release 2.17.0 to the conda channel failed due to an expired token. Could someone please update the Anaconda token and rerun the failed action? @albertvillanova ?

### Steps to reproduce the bug
Please see this actions [link](https://github.com/huggingface/datasets/actions/runs/7842473662)
### Expected behavior
The action runs successfully and the latest release is pushed to HuggingFace conda channel
### Environment info
Not applicable. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6657/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6657/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6656/comments | https://api.github.com/repos/huggingface/datasets/issues/6656/events | https://github.com/huggingface/datasets/issues/6656 | 2,127,338,377 | I_kwDODunzps5-zJuJ | 6,656 | Error when loading a big local json file | {
"avatar_url": "https://avatars.githubusercontent.com/u/10062216?v=4",
"events_url": "https://api.github.com/users/Riccorl/events{/privacy}",
"followers_url": "https://api.github.com/users/Riccorl/followers",
"following_url": "https://api.github.com/users/Riccorl/following{/other_user}",
"gists_url": "https://api.github.com/users/Riccorl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Riccorl",
"id": 10062216,
"login": "Riccorl",
"node_id": "MDQ6VXNlcjEwMDYyMjE2",
"organizations_url": "https://api.github.com/users/Riccorl/orgs",
"received_events_url": "https://api.github.com/users/Riccorl/received_events",
"repos_url": "https://api.github.com/users/Riccorl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Riccorl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Riccorl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Riccorl",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I get similar when dealing with a large jsonl file (6k lines), \r\n\r\n> TypeError: Couldn't cast array of type timestamp[us] to null\r\n\r\nYet when I split it into 1k lines, files, load_dataset works fine!\r\n\r\nhttps://github.com/huggingface/course/issues/692\r\n\r\n"
] | 1970-01-01T00:00:00.000001 | 1,710 | null | NONE | null | ### Describe the bug
When trying to load big json files from a local directory, `load_dataset` throws the following error
```
Traceback (most recent call last):
File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single
writer.write_table(table)
File "miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 573, in write_table
pa_table = pa_table.combine_chunks()
File "pyarrow/table.pxi", line 3638, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
```
### Steps to reproduce the bug
1. Download a big file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-train.json.gz`
2. Load it like `data = load_dataset("json", data_files=["nq-train.json"], split="train")`
```python
from datasets import load_dataset
data = load_dataset("json", data_files=["nq-train.json"], split="train")
```
A similarly formatted but smaller file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-dev.json.gz`, is loaded without issues
```python
from datasets import load_dataset
data = load_dataset("json", data_files=["nq-dev.json"], split="train")
```
### Expected behavior
It should load normally
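A hedged workaround sketch, assuming the file is a single JSON array that fits in RAM as Python objects (the path is the one from the steps above); it bypasses the packaged `json` loader entirely, though it may still hit memory limits on very large files.
```python
# only viable if the parsed records fit in memory
import json
from datasets import Dataset

with open("nq-train.json") as f:
    records = json.load(f)
data = Dataset.from_list(records)
```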
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6656/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6655/comments | https://api.github.com/repos/huggingface/datasets/issues/6655/events | https://github.com/huggingface/datasets/issues/6655 | 2,127,020,042 | I_kwDODunzps5-x8AK | 6,655 | Cannot load the dataset go_emotions | {
"avatar_url": "https://avatars.githubusercontent.com/u/688324?v=4",
"events_url": "https://api.github.com/users/arame/events{/privacy}",
"followers_url": "https://api.github.com/users/arame/followers",
"following_url": "https://api.github.com/users/arame/following{/other_user}",
"gists_url": "https://api.github.com/users/arame/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arame",
"id": 688324,
"login": "arame",
"node_id": "MDQ6VXNlcjY4ODMyNA==",
"organizations_url": "https://api.github.com/users/arame/orgs",
"received_events_url": "https://api.github.com/users/arame/received_events",
"repos_url": "https://api.github.com/users/arame/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arame/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arame/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arame",
"user_view_type": "public"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Thanks for reporting, @arame.\r\n\r\nI guess you have an old version of `transformers` (that submodule is present in `transformers` since version 3.0.1, since nearly 4 years ago). If you update it, the error should disappear:\r\n```shell\r\npip install -U transformers\r\n```\r\n\r\nOn the other hand, I am wondering: does it make sense to use `transformers` in this case, even if we don't need it to load the `go_emotions` dataset (already converted to Parquet files)?\r\n- Maybe @mariosasko can give some insight, as he included these code lines:\r\n - #6454\r\n\r\nhttps://github.com/huggingface/datasets/blob/9751fb14594d354e952f0ebdfaf31cb203b011e7/src/datasets/utils/_dill.py#L60-L63\r\n",
"The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n\r\nHowever, the logic does not account for `transformers<3`, so we should add a version check to fix that.",
"> The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n> \r\n> However, the logic does not account for `transformers<3`, so we should add a version check to fix that.\r\n\r\nThank you for that Mario. Would this fix solve the problem and do you have any idea when it will be done? \r\nI tried the pip install suggested by Albert and it made no difference.",
"I tried running the code today and the problem appears to be fixed."
] | 1970-01-01T00:00:00.000001 | 1,707 | null | NONE | null | ### Describe the bug
When I run the following code I get an exception:
`go_emotions = load_dataset("go_emotions")`
> AttributeError Traceback (most recent call last)
Cell In[6], [line 1](vscode-notebook-cell:?execution_count=6&line=1)
----> [1](vscode-notebook-cell:?execution_count=6&line=1) go_emotions = load_dataset("go_emotions")
[2](vscode-notebook-cell:?execution_count=6&line=2) data = go_emotions.data
File [c:\Users\hijik\anaconda3\Lib\site-packages\datasets\load.py:2523](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2523), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
[2518](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2518) verification_mode = VerificationMode(
[2519](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2519) (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
[2520](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2520) )
[2522](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2522) # Create a dataset builder
-> [2523](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2523) builder_instance = load_dataset_builder(
[2524](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2524) path=path,
[2525](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2525) name=name,
[2526](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2526) data_dir=data_dir,
[2527](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2527) data_files=data_files,
[2528](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2528) cache_dir=cache_dir,
[2529](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2529) features=features,
[2530](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2530) download_config=download_config,
[2531](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2531) download_mode=download_mode,
[2532](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2532) revision=revision,
[2533](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2533) token=token,
[2534](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2534) storage_options=storage_options,
[2535](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2535) trust_remote_code=trust_remote_code,
[2536](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2536) _require_default_config_name=name is None,
...
---> [63](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:63) if issubclass(obj_type, transformers.PreTrainedTokenizerBase):
[64](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:64) pklregister(obj_type)(_save_transformersPreTrainedTokenizerBase)
[66](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:66) # Unwrap `torch.compile`-ed functions
AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'
### Steps to reproduce the bug
```
from datasets import load_dataset
go_emotions = load_dataset("go_emotions")
```
### Expected behavior
Should simply load the variable with the data from the file
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.16.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.3
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6655/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6655/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6654/comments | https://api.github.com/repos/huggingface/datasets/issues/6654/events | https://github.com/huggingface/datasets/issues/6654 | 2,126,939,358 | I_kwDODunzps5-xoTe | 6,654 | Batched dataset map throws exception that cannot cast fixed length array to Sequence | {
"avatar_url": "https://avatars.githubusercontent.com/u/1029671?v=4",
"events_url": "https://api.github.com/users/keesjandevries/events{/privacy}",
"followers_url": "https://api.github.com/users/keesjandevries/followers",
"following_url": "https://api.github.com/users/keesjandevries/following{/other_user}",
"gists_url": "https://api.github.com/users/keesjandevries/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/keesjandevries",
"id": 1029671,
"login": "keesjandevries",
"node_id": "MDQ6VXNlcjEwMjk2NzE=",
"organizations_url": "https://api.github.com/users/keesjandevries/orgs",
"received_events_url": "https://api.github.com/users/keesjandevries/received_events",
"repos_url": "https://api.github.com/users/keesjandevries/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/keesjandevries/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keesjandevries/subscriptions",
"type": "User",
"url": "https://api.github.com/users/keesjandevries",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! This issue has been fixed by https://github.com/huggingface/datasets/pull/6283\r\n\r\nCan you try again with the new release 2.17.0 ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\n",
"Amazing! It's indeed fixed now. Thanks!"
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 2093, failing to correctly process sequence lengths.
### Steps to reproduce the bug
Create virtual environment and activate
```
virtualenv venv
source venv/bin/activate
```
Then install the datasets package (I'm using the latest version)
```
pip install datasets==2.16.1
```
Then run
```python
# bug.py
from datasets import Dataset
from datasets.features import Features, Sequence, Value
data = {
"num": [[1, 2], [3, 4]],
}
features = Features({'num': Sequence(feature=Value(dtype='int32'), length=2)})
dataset = Dataset.from_dict(data, features=features)
dataset.map(lambda x: x, batched=True, batch_size=1)
```
### Expected behavior
I get the following stack trace
```
Map: 50%|█████ | 1/2 [00:00<00:00, 423.92 examples/s]
Traceback (most recent call last):
File "/PATH/TO/BUG_PORT/bug.py", line 9, in <module>
dataset.map(lambda x: x, batched=True, batch_size=1)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
writer.write_batch(batch)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 551, in write_batch
array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[2]
to
Sequence(feature=Value(dtype='int32', id=None), length=2, id=None)
```
After some debugging, I found that the if-statement that is actually failing is line 2093 in `datasets/table.py`
```python
# datasets/table.py
...
2093 if feature.length * len(array) == len(array_values):
2094 return pa.FixedSizeListArray.from_arrays(_c(array_values, feature.feature), feature.length)
...
```
### Environment info
Platform: MacOS
Datasets version: datasets==2.16.1
Python version: 3.9.6 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1029671?v=4",
"events_url": "https://api.github.com/users/keesjandevries/events{/privacy}",
"followers_url": "https://api.github.com/users/keesjandevries/followers",
"following_url": "https://api.github.com/users/keesjandevries/following{/other_user}",
"gists_url": "https://api.github.com/users/keesjandevries/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/keesjandevries",
"id": 1029671,
"login": "keesjandevries",
"node_id": "MDQ6VXNlcjEwMjk2NzE=",
"organizations_url": "https://api.github.com/users/keesjandevries/orgs",
"received_events_url": "https://api.github.com/users/keesjandevries/received_events",
"repos_url": "https://api.github.com/users/keesjandevries/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/keesjandevries/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keesjandevries/subscriptions",
"type": "User",
"url": "https://api.github.com/users/keesjandevries",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6654/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6654/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6651/comments | https://api.github.com/repos/huggingface/datasets/issues/6651/events | https://github.com/huggingface/datasets/issues/6651 | 2,126,649,626 | I_kwDODunzps5-whka | 6,651 | Slice splits support for datasets.load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/37439882?v=4",
"events_url": "https://api.github.com/users/mhorlacher/events{/privacy}",
"followers_url": "https://api.github.com/users/mhorlacher/followers",
"following_url": "https://api.github.com/users/mhorlacher/following{/other_user}",
"gists_url": "https://api.github.com/users/mhorlacher/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mhorlacher",
"id": 37439882,
"login": "mhorlacher",
"node_id": "MDQ6VXNlcjM3NDM5ODgy",
"organizations_url": "https://api.github.com/users/mhorlacher/orgs",
"received_events_url": "https://api.github.com/users/mhorlacher/received_events",
"repos_url": "https://api.github.com/users/mhorlacher/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mhorlacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhorlacher/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mhorlacher",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,718 | null | NONE | null | ### Feature request
Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset`.
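For reference, a sketch of what this would look like; the second call is the requested, currently non-existent API, and the dataset name and path are placeholders.
```python
from datasets import load_dataset, load_from_disk

# already supported today: slice syntax in load_dataset
head = load_dataset("squad", split="train[:100]")

# requested (does not exist yet): the same syntax for a dataset saved locally
# head = load_from_disk("path/to/saved_dataset", split="train[:100]")
```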
### Motivation
Slice splits are convenient in a number of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogenize the APIs of `load_from_disk` and `load_dataset`.
### Your contribution
Sure, if the devs think the feature request is sensible. | null | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6651/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6651/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6650/comments | https://api.github.com/repos/huggingface/datasets/issues/6650/events | https://github.com/huggingface/datasets/issues/6650 | 2,125,680,991 | I_kwDODunzps5-s1Ff | 6,650 | AttributeError: 'InMemoryTable' object has no attribute '_batches' | {
"avatar_url": "https://avatars.githubusercontent.com/u/13874772?v=4",
"events_url": "https://api.github.com/users/matsuobasho/events{/privacy}",
"followers_url": "https://api.github.com/users/matsuobasho/followers",
"following_url": "https://api.github.com/users/matsuobasho/following{/other_user}",
"gists_url": "https://api.github.com/users/matsuobasho/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/matsuobasho",
"id": 13874772,
"login": "matsuobasho",
"node_id": "MDQ6VXNlcjEzODc0Nzcy",
"organizations_url": "https://api.github.com/users/matsuobasho/orgs",
"received_events_url": "https://api.github.com/users/matsuobasho/received_events",
"repos_url": "https://api.github.com/users/matsuobasho/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/matsuobasho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matsuobasho/subscriptions",
"type": "User",
"url": "https://api.github.com/users/matsuobasho",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi! Does running the following code also return the same error on your machine? \r\n\r\n```python\r\nimport copy\r\nimport pyarrow as pa\r\nfrom datasets.table import InMemoryTable\r\n\r\ncopy.deepcopy(InMemoryTable(pa.table({\"a\": [1, 2, 3], \"b\": [\"foo\", \"bar\", \"foobar\"]})))\r\n```",
"No, it doesn't, it runs fine. But what's really strange is that the error just went away after I reran the data prep script for conversion from csv to a datasets object. I realize that's not very helpful since the problem isn't reproducible. ",
"Feel free to close the issue then :)."
] | 1970-01-01T00:00:00.000001 | 1,708 | null | NONE | null | ### Describe the bug
```
Traceback (most recent call last):
File "finetune.py", line 103, in <module>
main(args)
File "finetune.py", line 45, in main
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 868, in map
{
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 869, in <dictcomp>
k: dataset.map(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3432, in _map_single
arrow_formatted_shard = shard.with_format("arrow")
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2667, in with_format
dataset = copy.deepcopy(self)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 153, in deepcopy
y = copier(memo)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/table.py", line 176, in __deepcopy__
memo[id(self._batches)] = list(self._batches)
AttributeError: 'InMemoryTable' object has no attribute '_batches'
```
### Steps to reproduce the bug
I'm running an MLOps flow using AzureML.
The error appears when I run the following function in my training script:
```python
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
seq_length),
batched=True,
batch_size=batch_size,
remove_columns=['col1', 'col2'])
```
```python
def tokenize_function(tok, seq_length, example):
# Pad so that each batch has the same sequence length
inp = tok(example['col1'], padding=True, truncation=True)
outp = tok(example['col2'], padding="max_length", max_length=seq_length)
res = {
'input_ids': inp['input_ids'],
'attention_mask': inp['attention_mask'],
'decoder_input_ids': outp['input_ids'],
'labels': outp['input_ids'],
'decoder_attention_mask': outp['attention_mask']
}
return res
```
### Expected behavior
Processing proceeds without errors. I ran this same workflow 2 weeks ago without a problem. I recreated the environment since then but it doesn't appear that datasets versions have changed since Dec. '23.
### Environment info
datasets 2.16.1
transformers 4.35.2
pyarrow 15.0.0
pyarrow-hotfix 0.6
torch 2.0.1
I'm not using the latest transformers version because there was an error due to a conflict with Azure mlflow when I tried the last time. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6650/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6650/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6645/comments | https://api.github.com/repos/huggingface/datasets/issues/6645/events | https://github.com/huggingface/datasets/issues/6645 | 2,122,956,818 | I_kwDODunzps5-icAS | 6,645 | Support fsspec 2024.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"I'd be very grateful. This upper bound banished me straight into dependency hell today. :("
] | 1970-01-01T00:00:00.000001 | 1,709 | 1970-01-01T00:00:00.000001 | MEMBER | null | Support fsspec 2024.2.
First, we should address:
- #6644 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 8,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6645/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6645/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6644/comments | https://api.github.com/repos/huggingface/datasets/issues/6644/events | https://github.com/huggingface/datasets/issues/6644 | 2,122,955,282 | I_kwDODunzps5-iboS | 6,644 | Support fsspec 2023.12 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"The pinned fsspec version range dependency conflict has been affecting several of our users in https://github.com/iterative/dvc. I've opened an initial PR that I think should resolve the glob behavior changes with using datasets + the latest fsspec release.\r\n\r\nPlease let us know if there's any other fsspec related behavior in datasets that needs to be updated to get 2024.2 supported, we'd like to get this conflict resolved as quickly as possible and we're willing to contribute any additional work that's required here.\r\n\r\ncc @dberenbaum"
] | 1970-01-01T00:00:00.000001 | 1,709 | 1970-01-01T00:00:00.000001 | MEMBER | null | Support fsspec 2023.12 by handling previous and new glob behavior. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6644/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6644/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6643 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6643/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6643/comments | https://api.github.com/repos/huggingface/datasets/issues/6643/events | https://github.com/huggingface/datasets/issues/6643 | 2,121,239,039 | I_kwDODunzps5-b4n_ | 6,643 | Faiss GPU index cannot be serialised when passed to trainer | {
"avatar_url": "https://avatars.githubusercontent.com/u/56388976?v=4",
"events_url": "https://api.github.com/users/rubenweitzman/events{/privacy}",
"followers_url": "https://api.github.com/users/rubenweitzman/followers",
"following_url": "https://api.github.com/users/rubenweitzman/following{/other_user}",
"gists_url": "https://api.github.com/users/rubenweitzman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rubenweitzman",
"id": 56388976,
"login": "rubenweitzman",
"node_id": "MDQ6VXNlcjU2Mzg4OTc2",
"organizations_url": "https://api.github.com/users/rubenweitzman/orgs",
"received_events_url": "https://api.github.com/users/rubenweitzman/received_events",
"repos_url": "https://api.github.com/users/rubenweitzman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rubenweitzman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rubenweitzman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rubenweitzman",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! make sure your query embeddings are numpy arrays, not torch tensors ;)",
"Hi Quentin, not sure how that solves the problem number 1. I am trying to pass on a dataset with a faiss gpu for training to the standard trainer but getting this serialisation error. What is a workaround this? I do not want to remove the faiss index, as I would want to use it to create batches of retrieved samples from the dataset. \r\nThanks in advance for your help!",
"Issue number one seems to be an issue with FAISS indexes not being compatible with copy.deepcopy.\r\n\r\nMaybe you try to not remove the columns, e.g. by passing `remove_unused_columns=False`"
] | 1970-01-01T00:00:00.000001 | 1,707 | null | NONE | null | ### Describe the bug
I am working on a retrieval project and have encountered two issues in the Hugging Face Faiss integration:
1. I am trying to pass a dataset with a Faiss index to the Hugging Face `Trainer`. The code works for a CPU Faiss index, but not for a GPU one, which fails with the following error (a possible workaround is sketched after item 2 below):
```
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1543, in train
return inner_training_loop(
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in _inner_training_loop
train_dataloader = self.get_train_dataloader()
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 831, in get_train_dataloader
train_dataset = self._remove_unused_columns(train_dataset, description="training")
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 725, in _remove_unused_columns
return dataset.remove_columns(ignored_columns)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/fingerprint.py", line 481, in wrapper
out = func(dataset, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2146, in remove_columns
dataset = copy.deepcopy(self)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 161, in deepcopy
rv = reductor(4)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 556, in index_getstate
return {"this": serialize_index(self).tobytes()}
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 1607, in serialize_index
write_index(index, writer)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/swigfaiss.py", line 9843, in write_index
return _swigfaiss.write_index(*args)
RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /project/faiss/faiss/impl/index_write.cpp:590: don't know how to serialize this type of index
```
The index was created with the add_faiss_index method
```
train_dataset.add_faiss_index(
column='embeddings',
index_name='embeddings',
string_factory=faiss_index_string,
train_size=config.faiss_train_size,
device=0, # Use -1 for CPU, or specify GPU device ID
faiss_verbose=True
)
```
2. Although Faiss is designed to support searching on the GPU ([https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU)), I am getting an error when trying to use the Hugging Face code to do the search on the GPU. This seems to be caused by this line https://github.com/huggingface/datasets/blob/f9975f636542df7f95c27065ea93147440d690b7/src/datasets/search.py#L376 producing the error
```
total_scores, total_examples = self.dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 773, in get_nearest_examples_batch
total_scores, total_indices = self.search_batch(index_name, queries, k, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 727, in search_batch
return self._indexes[index_name].search_batch(queries, k, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 376, in search_batch
if not queries.flags.c_contiguous:
AttributeError: 'Tensor' object has no attribute 'flags'
```
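For what it's worth, a minimal workaround sketch for problem 2, based on the suggestion in the comments to pass the queries as numpy arrays rather than torch tensors (`embeddings` and `train_dataset` refer to the reproduction snippet below; `k=4` is illustrative):
```python
import numpy as np

# The datasets search code checks numpy-specific attributes (e.g. .flags),
# so convert the torch query embeddings to a contiguous 2D numpy array first.
query_embeddings = np.ascontiguousarray(
    embeddings.detach().cpu().numpy().astype(np.float32)
)

scores, retrieved = train_dataset.get_nearest_examples_batch(
    "embeddings", query_embeddings, k=4
)
```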
### Steps to reproduce the bug
```
train_dataset.add_faiss_index(
column='embeddings',
index_name='embeddings',
string_factory=faiss_index_string,
train_size=config.faiss_train_size,
device=0, # Use -1 for CPU, or specify GPU device ID
faiss_verbose=True
)
Trainer(
model=model,
args=args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=data_collator,
tokenizer=tokenizer
)
train_dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k)
```
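As a side note, a hedged sketch of the workaround suggested in the comments for problem 1: keep the columns so the `Trainer` never calls `Dataset.remove_columns`, the call that deep-copies the dataset and tries to serialise the GPU index in the traceback above (`model`, `train_dataset`, etc. are the objects from the snippet above; `output_dir` is illustrative):
```python
from transformers import Trainer, TrainingArguments

# Assumption (not from the original report): `args` is a standard TrainingArguments
# instance. remove_unused_columns=False stops the Trainer from calling
# Dataset.remove_columns(), which is where the deepcopy / serialisation fails.
args = TrainingArguments(
    output_dir="out",
    remove_unused_columns=False,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
    tokenizer=tokenizer,
)
```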
### Expected behavior
I would expect the Faiss index code to be GPU compatible.
### Environment info
huggingface Version: 2.16.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6643/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6643/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6642/comments | https://api.github.com/repos/huggingface/datasets/issues/6642/events | https://github.com/huggingface/datasets/issues/6642 | 2,119,085,766 | I_kwDODunzps5-Tq7G | 6,642 | Differently dataset object saved than it is loaded. | {
"avatar_url": "https://avatars.githubusercontent.com/u/31218150?v=4",
"events_url": "https://api.github.com/users/MFajcik/events{/privacy}",
"followers_url": "https://api.github.com/users/MFajcik/followers",
"following_url": "https://api.github.com/users/MFajcik/following{/other_user}",
"gists_url": "https://api.github.com/users/MFajcik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MFajcik",
"id": 31218150,
"login": "MFajcik",
"node_id": "MDQ6VXNlcjMxMjE4MTUw",
"organizations_url": "https://api.github.com/users/MFajcik/orgs",
"received_events_url": "https://api.github.com/users/MFajcik/received_events",
"repos_url": "https://api.github.com/users/MFajcik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MFajcik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFajcik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MFajcik",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I see now, that I have to use `load_from_disk`, in order to load dataset properly, not `load_dataset`. Why is this behavior split? Why do we need both, `load_dataset` and `load_from_disk`?\r\n\r\nUnless answered, I believe this might be helpful for other hf datasets newbies.\r\n\r\nAnyway, made a `load_dataset` compatible dataset in a following way. I created a directory, and just copied jsonl there as `train.jsonl/test.jsonl`.\r\n```python\r\noutput_folder = os.path.join(args.output_folder, f\"{task_meta_type}_{task_type}\")\r\nos.makedirs(output_folder, exist_ok=True)\r\nfile = f\"{task_meta_type}_{task_type}_train.jsonl\"\r\nshutil.copy(os.path.join(input_folder, file),\r\n os.path.join(output_folder, \"train.jsonl\"))\r\n# now test\r\nfile = f\"{task_meta_type}_{task_type}_test.jsonl\"\r\nshutil.copy(os.path.join(input_folder, file),\r\n os.path.join(output_folder, \"test.jsonl\"))\r\n```\r\n",
"Hi @MFajcik, \r\n\r\nYou can find information about save_to_disk/load_from_disk in our docs:\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/process#save\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.save_to_disk\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.load_from_disk"
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The dataset object that is loaded has a different size than the one that was saved.
### Steps to reproduce the bug
Hi, I save the dataset in the following way:
```
dataset = load_dataset("json",
data_files={
"train": os.path.join(input_folder, f"{task_meta_type}_{task_type}_train.jsonl"),
"test": os.path.join(input_folder, f"{task_meta_type}_{task_type}_test.jsonl")})
print(os.path.join(output_folder, f"{task_meta_type}_{task_type}"))
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
dataset.save_to_disk(os.path.join(output_folder, f"{task_meta_type}_{task_type}"))
```
this yields output
```
.data/hf_dataset/propaganda_zanr
Length of train dataset: 7642
Length of test dataset: 1000
```
Everything looks fine.
Then I load the dataset
```python
from datasets import load_dataset
dataset_path = ".data/hf_dataset/propaganda_zanr"
dataset = load_dataset(dataset_path)
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
```
this prints
```
Generating train split: 1 examples [00:00, 72.10 examples/s]
Generating test split: 1 examples [00:00, 100.69 examples/s]
Length of train dataset: 1
Length of test dataset: 1
```
I don't understand :(
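As pointed out in the comments, a dataset written with `save_to_disk` should be read back with `load_from_disk` rather than `load_dataset`. A minimal sketch using the path from the snippet above:
```python
from datasets import load_from_disk

dataset = load_from_disk(".data/hf_dataset/propaganda_zanr")
print(f"Length of train dataset: {len(dataset['train'])}")  # expected: 7642
print(f"Length of test dataset: {len(dataset['test'])}")    # expected: 1000
```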
### Expected behavior
The same object is loaded as the one that was saved.
### Environment info
datasets==2.16.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6642/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6642/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6641/comments | https://api.github.com/repos/huggingface/datasets/issues/6641/events | https://github.com/huggingface/datasets/issues/6641 | 2,116,963,132 | I_kwDODunzps5-Lks8 | 6,641 | unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte | {
"avatar_url": "https://avatars.githubusercontent.com/u/109789057?v=4",
"events_url": "https://api.github.com/users/Hughhuh/events{/privacy}",
"followers_url": "https://api.github.com/users/Hughhuh/followers",
"following_url": "https://api.github.com/users/Hughhuh/following{/other_user}",
"gists_url": "https://api.github.com/users/Hughhuh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hughhuh",
"id": 109789057,
"login": "Hughhuh",
"node_id": "U_kgDOBos_gQ",
"organizations_url": "https://api.github.com/users/Hughhuh/orgs",
"received_events_url": "https://api.github.com/users/Hughhuh/received_events",
"repos_url": "https://api.github.com/users/Hughhuh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hughhuh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hughhuh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hughhuh",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Hi @Hughhuh. \r\n\r\nI have formatted the issue because it was not easily readable. Additionally, the environment info is incomplete: it seems you did not run the proposed CLI command `datasets-cli env` and essential information is missing: version of `datasets`, version of `pyarrow`,...\r\n\r\nWith the information you provided, it seems an issue with the specific \"samsum\" dataset. I'm transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/samsum/discussions/5"
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
### Steps to reproduce the bug
```
import sys
sys.getdefaultencoding()
'utf-8'
from datasets import load_dataset
dataset = load_dataset('json', "samsum")  # the call shown in the traceback below
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
Resolving data files: 100%
159/159 [00:00<00:00, 9909.28it/s]
Using custom data configuration samsum-0b1209637541c9e6
Downloading and preparing dataset json/samsum to C:/Users/Administrator/.cache/huggingface/datasets/json/samsum-0b1209637541c9e6/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%
3/3 [00:00<00:00, 119.99it/s]
Extracting data files: 100%
3/3 [00:00<00:00, 9.54it/s]
Generating train split:
88392/0 [00:15<00:00, 86848.17 examples/s]
Generating test split:
0/0 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:132, in Json._generate_tables(self, files)
131 try:
--> 132 pa_table = paj.read_json(
133 io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
134 )
135 break
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\_json.pyx:290, in pyarrow._json.read_json()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: JSON parse error: Invalid value. in row 0
During handling of the above exception, another exception occurred:
UnicodeDecodeError Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1819, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1818 _time = time.time()
-> 1819 for _, table in generator:
1820 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:153, in Json._generate_tables(self, files)
152 with open(file, encoding="utf-8") as f:
--> 153 dataset = json.load(f)
154 except json.JSONDecodeError:
File ~\AppData\Local\Programs\Python\Python310\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
276 """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
277 a JSON document) to a Python object.
278
(...)
291 kwarg; otherwise ``JSONDecoder`` is used.
292 """
--> 293 return loads(fp.read(),
294 cls=cls, object_hook=object_hook,
295 parse_float=parse_float, parse_int=parse_int,
296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File ~\AppData\Local\Programs\Python\Python310\lib\codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[81], line 5
1 from datasets import load_dataset
3 # Load dataset from the hub
4 #dataset = load_dataset("json",data_files="C:/Users/Administrator/Desktop/samsum/samsum/data/corpus/train.json",field="data")
----> 5 dataset = load_dataset('json',"samsum")
6 #dataset = load_dataset("samsum")
7 print(f"Train dataset size: {len(dataset['train'])}")
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py:1758, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1755 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1757 # Download and prepare data
-> 1758 builder_instance.download_and_prepare(
1759 download_config=download_config,
1760 download_mode=download_mode,
1761 ignore_verifications=ignore_verifications,
1762 try_from_hf_gcs=try_from_hf_gcs,
1763 num_proc=num_proc,
1764 )
1766 # Build dataset for splits
1767 keep_in_memory = (
1768 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1769 )
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:860, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
858 if num_proc is not None:
859 prepare_split_kwargs["num_proc"] = num_proc
--> 860 self._download_and_prepare(
861 dl_manager=dl_manager,
862 verify_infos=verify_infos,
863 **prepare_split_kwargs,
864 **download_and_prepare_kwargs,
865 )
866 # Sync info
867 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:953, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
949 split_dict.add(split_generator.split_info)
951 try:
952 # Prepare split will record examples associated to the split
--> 953 self._prepare_split(split_generator, **prepare_split_kwargs)
954 except OSError as e:
955 raise OSError(
956 "Cannot find data file. "
957 + (self.manual_download_instructions or "")
958 + "\nOriginal error:\n"
959 + str(e)
960 ) from None
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1708, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1706 gen_kwargs = split_generator.gen_kwargs
1707 job_id = 0
-> 1708 for job_id, done, content in self._prepare_split_single(
1709 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1710 ):
1711 if done:
1712 result = content
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1851, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1849 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1850 e = e.__context__
-> 1851 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1853 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
The dataset should load successfully; instead, it can't be loaded.
### Environment info
dataset: samsum
system: win10
gpu:m40 24G | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6641/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6641/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6640/comments | https://api.github.com/repos/huggingface/datasets/issues/6640/events | https://github.com/huggingface/datasets/issues/6640 | 2,115,864,531 | I_kwDODunzps5-HYfT | 6,640 | Sign Language Support | {
"avatar_url": "https://avatars.githubusercontent.com/u/6684795?v=4",
"events_url": "https://api.github.com/users/Merterm/events{/privacy}",
"followers_url": "https://api.github.com/users/Merterm/followers",
"following_url": "https://api.github.com/users/Merterm/following{/other_user}",
"gists_url": "https://api.github.com/users/Merterm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Merterm",
"id": 6684795,
"login": "Merterm",
"node_id": "MDQ6VXNlcjY2ODQ3OTU=",
"organizations_url": "https://api.github.com/users/Merterm/orgs",
"received_events_url": "https://api.github.com/users/Merterm/received_events",
"repos_url": "https://api.github.com/users/Merterm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Merterm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Merterm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Merterm",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,706 | null | NONE | null | ### Feature request
Currently, there are only a few Sign Language labels. I would like to propose adding, as new labels, all the Signed Languages described in this ISO standard: https://www.evertype.com/standards/iso639/sign-language.html
### Motivation
Datasets currently only have labels for a few signed languages, but there are many more signed languages in the world. Furthermore, some signed languages with a lot of online data cannot be found for this reason. For instance, there is no German Sign Language label on Hugging Face Datasets, even though many readily available German Sign Language datasets exist and are used very frequently in Sign Language Processing papers and models.
### Your contribution
I can submit a PR for this as well, adding the ISO codes and languages to the labels in datasets. | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6640/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6640/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6638/comments | https://api.github.com/repos/huggingface/datasets/issues/6638/events | https://github.com/huggingface/datasets/issues/6638 | 2,113,329,257 | I_kwDODunzps599thp | 6,638 | Cannot download wmt16 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/81709031?v=4",
"events_url": "https://api.github.com/users/vidyasiv/events{/privacy}",
"followers_url": "https://api.github.com/users/vidyasiv/followers",
"following_url": "https://api.github.com/users/vidyasiv/following{/other_user}",
"gists_url": "https://api.github.com/users/vidyasiv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vidyasiv",
"id": 81709031,
"login": "vidyasiv",
"node_id": "MDQ6VXNlcjgxNzA5MDMx",
"organizations_url": "https://api.github.com/users/vidyasiv/orgs",
"received_events_url": "https://api.github.com/users/vidyasiv/received_events",
"repos_url": "https://api.github.com/users/vidyasiv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vidyasiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vidyasiv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vidyasiv",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Looks like it works with latest datasets repository\r\n```\r\n- `datasets` version: 2.16.2.dev0\r\n- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 2.0.1\r\n- `fsspec` version: 2023.10.0\r\n```\r\n\r\nCould you explain which is the minimum version that fixes this?\r\nEdit: Looks like that's 2.16.0, will close out issue"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
As of this morning (PST), 2/1/2024, the wmt16 dataset appears to be missing from OPUS; could you suggest an alternative?
```
Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last):
File "test.py", line 2, in <module>
raw_datasets = load_dataset("wmt16","ro-en",split="train")
File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 2153, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1717, in _download_and_prepare
super()._download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1027, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227/wmt_utils.py", line 754, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 565, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 428, in download
downloaded_path_or_paths = map_nested(
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 464, in map_nested
mapped = [
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 465, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 384, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 384, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 367, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 454, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py", line 182, in cached_path
output_path = get_from_cache(
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py", line 596, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
```
### Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("wmt16","ro-en",split="train")
```
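For reference, a sketch of the resolution noted in the comments (the assumption being that upgrading to `datasets` >= 2.16.0 resolves the OPUS download):
```python
# Assumes the environment has been upgraded first, e.g.:
#   pip install -U "datasets>=2.16.0"
import datasets

print(datasets.__version__)  # should be >= 2.16.0 per the comments on this issue

from datasets import load_dataset

raw_datasets = load_dataset("wmt16", "ro-en", split="train")
```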
### Expected behavior
Expect the dataset to be downloaded, or at least a clean exit with an error explaining that the dataset is missing and a suggestion for next steps.
### Environment info
- `datasets` version: 2.14.7
- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.17.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.1
| {
"avatar_url": "https://avatars.githubusercontent.com/u/81709031?v=4",
"events_url": "https://api.github.com/users/vidyasiv/events{/privacy}",
"followers_url": "https://api.github.com/users/vidyasiv/followers",
"following_url": "https://api.github.com/users/vidyasiv/following{/other_user}",
"gists_url": "https://api.github.com/users/vidyasiv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vidyasiv",
"id": 81709031,
"login": "vidyasiv",
"node_id": "MDQ6VXNlcjgxNzA5MDMx",
"organizations_url": "https://api.github.com/users/vidyasiv/orgs",
"received_events_url": "https://api.github.com/users/vidyasiv/received_events",
"repos_url": "https://api.github.com/users/vidyasiv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vidyasiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vidyasiv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vidyasiv",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6638/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6638/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6637/comments | https://api.github.com/repos/huggingface/datasets/issues/6637/events | https://github.com/huggingface/datasets/issues/6637 | 2,113,025,975 | I_kwDODunzps598je3 | 6,637 | 'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/22883190?v=4",
"events_url": "https://api.github.com/users/tobycrisford/events{/privacy}",
"followers_url": "https://api.github.com/users/tobycrisford/followers",
"following_url": "https://api.github.com/users/tobycrisford/following{/other_user}",
"gists_url": "https://api.github.com/users/tobycrisford/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tobycrisford",
"id": 22883190,
"login": "tobycrisford",
"node_id": "MDQ6VXNlcjIyODgzMTkw",
"organizations_url": "https://api.github.com/users/tobycrisford/orgs",
"received_events_url": "https://api.github.com/users/tobycrisford/received_events",
"repos_url": "https://api.github.com/users/tobycrisford/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tobycrisford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tobycrisford/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tobycrisford",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The \"torch\" formatting is usually fast because we do zero-copy conversion from the Arrow data on your disk to Torch tensors. However IterableDataset shuffling seems to do data copies that slow down the pipeline, and it shuffles python objects instead of Arrow data.\r\n\r\nTo fix this we need to implement `BufferShuffledExamplesIterable.iter_arrow()` (same as regular `BufferShuffledExamplesIterable.__iter__()` but yields Arrow tables)\r\n\r\nhttps://github.com/huggingface/datasets/blob/b7d854b7fd3e9a330e21b76ee8421d4a7ebb4a7a/src/datasets/iterable_dataset.py#L968-L974\r\n"
] | 1970-01-01T00:00:00.000001 | 1,707 | null | NONE | null | ### Describe the bug
If you:
1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset
2. Set the output format to torch tensors with .with_format('torch')
Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch formatting.
### Steps to reproduce the bug
```python
import datasets
import torch
from tqdm import tqdm
rand_a = torch.randn(3,224,224)
rand_b = torch.randn(3,224,224)
a = torch.stack([rand_a] * 1000)
b = torch.stack([rand_b] * 1000)
features = datasets.Features({"tensor": datasets.Array3D(shape=(3,224,224), dtype="float32")})
ds_a = datasets.Dataset.from_dict({"tensor": a}, features=features).to_iterable_dataset()
ds_b = datasets.Dataset.from_dict({"tensor": b}, features=features).to_iterable_dataset()
# Iterating through either dataset with torch formatting is really fast (2000it/s on my machine)
for example in tqdm(ds_a.with_format('torch')):
pass
# Iterating through either dataset shuffled is also pretty fast (100it/s on my machine)
for example in tqdm(ds_a.shuffle()):
pass
# Iterating through this interleaved dataset is pretty fast (200it/s on my machine)
ds_fast = datasets.interleave_datasets([ds_a, ds_b])
for example in tqdm(ds_fast):
pass
# Iterating through either dataset with torch formatting *after shuffling* is really slow... (<2it/s on my machine)
for example in tqdm(ds_a.shuffle().with_format('torch')):
pass
# Iterating through this torch formatted interleaved dataset is also really slow (<2it/s on my machine)...
ds_slow = datasets.interleave_datasets([ds_a, ds_b]).with_format('torch')
for example in tqdm(ds_slow):
pass
# Even doing this is way faster!! (70it/s on my machine)
for example in tqdm(ds_fast):
test = torch.tensor(example['tensor'])
```
### Expected behavior
Applying torch formatting to the interleaved dataset shouldn't increase the time taken to iterate through the dataset by very much, since even explicitly converting every example is over 70x faster than calling .with_format('torch').
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.38
- Python version: 3.11.6
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 3,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6637/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6624/comments | https://api.github.com/repos/huggingface/datasets/issues/6624/events | https://github.com/huggingface/datasets/issues/6624 | 2,103,950,718 | I_kwDODunzps59Z71- | 6,624 | How to download the laion-coco dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15981416?v=4",
"events_url": "https://api.github.com/users/vanpersie32/events{/privacy}",
"followers_url": "https://api.github.com/users/vanpersie32/followers",
"following_url": "https://api.github.com/users/vanpersie32/following{/other_user}",
"gists_url": "https://api.github.com/users/vanpersie32/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vanpersie32",
"id": 15981416,
"login": "vanpersie32",
"node_id": "MDQ6VXNlcjE1OTgxNDE2",
"organizations_url": "https://api.github.com/users/vanpersie32/orgs",
"received_events_url": "https://api.github.com/users/vanpersie32/received_events",
"repos_url": "https://api.github.com/users/vanpersie32/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vanpersie32/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vanpersie32/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vanpersie32",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it."
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | NONE | null | The laion coco dataset is not available now. How to download it
https://huggingface.co/datasets/laion/laion-coco | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6624/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6623/comments | https://api.github.com/repos/huggingface/datasets/issues/6623/events | https://github.com/huggingface/datasets/issues/6623 | 2,103,870,123 | I_kwDODunzps59ZoKr | 6,623 | streaming datasets doesn't work properly with multi-node | {
"avatar_url": "https://avatars.githubusercontent.com/u/30778939?v=4",
"events_url": "https://api.github.com/users/rohitgr7/events{/privacy}",
"followers_url": "https://api.github.com/users/rohitgr7/followers",
"following_url": "https://api.github.com/users/rohitgr7/following{/other_user}",
"gists_url": "https://api.github.com/users/rohitgr7/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rohitgr7",
"id": 30778939,
"login": "rohitgr7",
"node_id": "MDQ6VXNlcjMwNzc4OTM5",
"organizations_url": "https://api.github.com/users/rohitgr7/orgs",
"received_events_url": "https://api.github.com/users/rohitgr7/received_events",
"repos_url": "https://api.github.com/users/rohitgr7/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rohitgr7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohitgr7/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rohitgr7",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"@mariosasko, @lhoestq, @albertvillanova\r\nhey guys! can anyone help? or can you guys suggest who can help with this?",
"Hi ! \r\n\r\n1. When the dataset is running of of examples, the last batches received by the GPU can be incomplete or empty/missing. We haven't implemented yet a way to ignore the last batch. It might require the datasets to provide the number of examples per shard though, so that we can know when to stop.\r\n2. Samplers are not compatible with IterableDatasets in pytorch\r\n3. if `dataset.n_shards % world_size != 0` then all the nodes will read/stream the full dataset in order (possibly reading/streaming the same data multiple times), BUT will only yield one example out of `world_size` so that each example goes to one exactly one GPU.\r\n4. no, sharding should be down up-front and can take some time depending on the dataset size and format",
"> if dataset.n_shards % world_size != 0 then all the nodes will read/stream the full dataset in order (possibly reading/streaming the same data multiple times), BUT will only yield one example out of world_size so that each example goes to one exactly one GPU.\r\n\r\nconsidering there's just 1 shard and 2 worker nodes, do you mean each worker node will load the whole dataset but still receive half of that shard while streaming?",
"Yes both nodes will stream from the 1 shard, but each node will skip half of the examples. This way in total each example is seen once and exactly once during you distributed training.\r\n\r\nThough it terms of I/O, the dataset is effectively read/streamed twice.",
"what if the number of samples in that shard % num_nodes != 0? it will break/get stuck? or is the data repeated in that case for gradient sync?",
"In the case one at least one of the nodes will get an empty/incomplete batch. The data is not repeated in that case. If the training loop doesn't take this into account it can lead to unexpected behaviors indeed.\r\n\r\nIn the future we'd like to add a feature that would allow the nodes to ignore the last batch, this way all the nodes would only have full batches.",
"> In the case one at least one of the noes will get an empty/incomplete batch. The data is not repeated in that case. If the training loop doesn't take this into account it can lead to unexpected behaviors indeed.\r\n> \r\n> In the future we'd like to add a feature that would allow the nodes to ignore the last batch, this way all the nodes would only have full batches.\r\n\r\nIs there any method to modify one dataset's n_shard? modify the number of files is ok? one file == one shard?",
"> modify the number of files is ok? one file == one shard?\r\n\r\nYep, one file == one shard :)",
"Hi @lhoestq, do you have any advice on how to implement a fix for the case dataset.n_shards % world_size != 0 while such a fix is not supported in the library?\r\n\r\nIt seems essential for performing validation in a ddp setting\r\n\r\nSimply limiting the number of files is a bit brittle as it relies on world size being consistent to ensure different runs see the same data\r\n\r\nHow should a user either ignore the last batch or handle the empty batch?\r\n\r\nIs the issue of overhanging batches also relevant for map-style datasets?",
"> How should a user either ignore the last batch or handle the empty batch?\r\n\r\nCheck the batch size in the training loop and use all_reduce (or any communication method) to make sure all the nodes got their data before passing them to the model. If some data are missing you can decide to stop the training loop or repeat examples until all the nodes have exhausted their data.\r\n\r\nCc @andrewkho in case you know a way to make the DataLoader stop or add extra samples automatically in case of distributed + unevenly divisible iterable dataset\r\n\r\n> Is the issue of overhanging batches also relevant for map-style datasets?\r\n\r\nThe DistributedSampler drops the last data by default to make the dataset evenly divisible.",
"@lhoestq Unfortunately for IterableDataset there isn't a way to do this in general without introducing communciation between ranks, or having all the ranks read all the data before starting to figure out when to stop (which is pretty impractical). My recommendation for these situations where you don't know the total number of samples apriori is to, configure the iterable dataset to yield a fixed number of samples before raising StopIteration, and if necessary, repeat/reshuffle samples to hit that number ",
"A heads up that we're planning to land something new in torchdata by end-of-year to help with these scenarios, we'll update this thread when we hvae some code landed ",
"I made a quick example with communication between ranks to stop once all the data from all the ranks are exhausted (and repeating data if necessary to end up with a number of samples evenly divisible)\r\n\r\n```python\r\nimport torch\r\nimport torch.distributed as dist\r\nfrom datasets import Dataset\r\nfrom datasets.distributed import split_dataset_by_node\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\n# simulate a streaming dataset\r\nnum_shards = 1 # change here if you want to simulate a dataset made of many files/shards\r\nds = Dataset.from_dict({\"x\": [1, 2, 3, 4, 5]}).to_iterable_dataset(num_shards=num_shards)\r\n\r\n# split the dataset for distributed training\r\ndist.init_process_group()\r\nrank, world_size = dist.get_rank(), dist.get_world_size()\r\nds = split_dataset_by_node(ds, rank=rank,world_size=world_size)\r\ndl = DataLoader(ds)\r\n\r\nexhausted = torch.zeros(world_size, dtype=torch.bool)\r\n\r\n# IMPORTANT: Loop over the local dataset until the data from each rank has been exhausted\r\n\r\ndef loop():\r\n while True:\r\n yield from dl\r\n yield \"end\"\r\n\r\nfor x in loop():\r\n if x == \"end\":\r\n exhausted[rank] = True\r\n continue\r\n # stop once the data from all the ranks are exhausted\r\n dist.all_reduce(exhausted)\r\n if torch.all(exhausted):\r\n break\r\n # do your forward pass + loss here\r\n # model.forward(...)\r\n print(x)\r\n```\r\n\r\non my laptop I run `torchrun --nnodes=1 --nproc-per-node=2 main.py` and I get\r\n\r\n```\r\n{'x': tensor([2])}\r\n{'x': tensor([1])}\r\n{'x': tensor([3])}\r\n{'x': tensor([4])}\r\n{'x': tensor([5])}\r\n{'x': tensor([2])}\r\n```\r\n\r\nwe indeed end up with 6 samples, `{'x': tensor([2])}` was repeated to get 6 examples in total which is divisible by the world size 2.\r\n\r\nI also tried with more ranks and with `num_workers` in DataLoader and it works as expected (don't forget to add `if __name__ == '__main__':` if necessary for DataLoader multiprocessing)\r\n\r\nEDIT: replaced `cycle(chain(dl, [\"end\"]))` by `loop()` after comment https://github.com/huggingface/datasets/issues/6623#issuecomment-2401063649 by @ragavsachdeva",
"great thanks for the example, will give it a try!",
"@lhoestq in the case where dataset.n_shards is divisible by world_size, is it important that each shard contains exactly the same number of samples? what happens if this isn't the case (in what circumstances will this cause a timeout)?",
"If your data are not evenly divisible (dataset.n_shards divisibility by world_size just changes the logic to distribute the data) you'll need some logic to make the GPUs happy at the end of training. E.g. with my [example above](https://github.com/huggingface/datasets/issues/6623#issuecomment-2379458138) to stop once all the data from all the ranks are exhausted (and repeating data if necessary to end up with a number of samples evenly divisible)\r\n\r\nThough if dataset.n_shards is divisible by world_size and each shard contains the same amount of data then your data IS evenly divisible so you are all good",
"Ok makes sense, thanks for the explanation. I guess even if the shards all contain the same amount of data you still have an issue if you do any filtering (https://github.com/huggingface/datasets/issues/6719)\r\n\r\nWhat do you think of dataset.repeat(n).take(samples_per_epoch) as a simple way of handling this kind of situation? (c.f. issue I just opened #7192 ).\r\n\r\n",
"yes it makes sense indeed",
"> I made a quick example with communication between ranks to stop once all the data from all the ranks are exhausted (and repeating data if necessary to end up with a number of samples evenly divisible)\r\n> \r\n> ```python\r\n> from itertools import cycle, chain\r\n> ...\r\n> # IMPORTANT: Loop over the local dataset until the data from each rank has been exhausted\r\n> for x in cycle(chain(dl, [\"end\"])):\r\n> if x == \"end\":\r\n> exhausted[rank] = True\r\n> continue\r\n> # stop once the data from all the ranks are exhausted\r\n> dist.all_reduce(exhausted)\r\n> if torch.all(exhausted):\r\n> break\r\n> # do your forward pass + loss here\r\n> # model.forward(...)\r\n> print(x)\r\n> ```\r\n\r\nJust incase someone copy pastes this in their code (like I did), please be aware of https://github.com/pytorch/pytorch/issues/23900 and use https://github.com/pytorch/pytorch/issues/23900#issuecomment-518858050.",
"Thanks for noticing @ragavsachdeva ! I edited my code to fix the issue",
"I have a node with 8 cards and training files splited into 56 sub files, so my n_shards= 56 / 8 = 7; my initial num_workers = 32, and it report that n_shards = 7 < num_workers, so 25 wokers are stoped, as a result, my training can use only 7 cpu cores at all. should I set my num_wokers less then 7 to get more cpu cores worked?",
"In your case each rank has a DataLoader with 7 running workers (and 25 stopped workers) so actually in total there are 8*7=56 DataLoader workers running (one per shard).\r\n\r\nIf you want to use more CPU for the DataLoader you can shard your dataset in more files than 56. E.g. if you want each rank to run 32 DataLoader workers you need 8*32=256 files.",
"Thank you for the help, I will have a try"
] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Feature request
Let’s say I have a dataset of 5 samples with values [1, 2, 3, 4, 5], 2 GPUs (for DDP), and a batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already split, I don’t have to use `DistributedSampler` (they don't work with iterable datasets anyway), right?
But in this case I noticed the following:
First iteration:
first GPU will get → [1, 2]
second GPU will get → [3, 4]
Second iteration:
first GPU will get → [5]
second GPU will get → Nothing
which actually creates an issue, since with `DistributedSampler` the samples are repeated internally to ensure that none of the GPUs is missing data at any iteration for gradient sync.
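For reference, here is a minimal sketch that reproduces the uneven last iteration described above (toy values from this example; the exact assignment of samples to ranks may differ, and the script is assumed to be launched with `torchrun --nproc-per-node=2`):

```python
import torch.distributed as dist
from datasets import Dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader

# Toy streaming dataset with 5 samples, as in the scenario above.
ds = Dataset.from_dict({"x": [1, 2, 3, 4, 5]}).to_iterable_dataset(num_shards=1)

dist.init_process_group()
rank, world_size = dist.get_rank(), dist.get_world_size()

# With 1 shard and 2 ranks, each rank keeps 1 example out of world_size.
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)
dl = DataLoader(ds, batch_size=2)

for step, batch in enumerate(dl):
    # One rank ends up with 3 samples and the other with 2, so the last
    # iteration is uneven: one rank gets a smaller batch or nothing at all.
    print(f"rank={rank} step={step} batch={batch}")
```
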
So my questions are:
1. Since the splitting happens beforehand, how can I make sure each GPU gets a batch at every iteration to avoid gradient sync issues?
2. Do we need to use `DistributedSampler`? If yes, how?
3. in the docstrings of `split_dataset_by_node`, this is mentioned: *"If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.n_shards % world_size == 0`), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples."* Can you explain the last part here?
4. If `dataset.n_shards % world_size != 0`, is it possible to shard the streaming dataset on the fly to avoid the case where data is missing?
### Motivation
Streaming datasets should work with DDP: big LLMs require a lot of data, DDP/multi-node training is typically used for such models, and streaming can actually help solve the data side of that.
### Your contribution
Yes, I can help by submitting the PR once we reach a mutual understanding of how it should behave.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6623/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6623/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6622 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6622/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6622/comments | https://api.github.com/repos/huggingface/datasets/issues/6622/events | https://github.com/huggingface/datasets/issues/6622 | 2,103,780,697 | I_kwDODunzps59ZSVZ | 6,622 | multi-GPU map does not work | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This should now be fixed by https://github.com/huggingface/datasets/pull/6550 and updated with https://github.com/huggingface/datasets/pull/6646\r\n\r\nFeel free to re-open if you're still having issues :)"
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Here is the code for single-GPU processing: https://pastebin.com/bfmEeK2y
Here is the code for multi-GPU processing: https://pastebin.com/gQ7i5AQy
Here is a video showing that the multi-GPU mapping does not work as expected (there are so many things wrong here that it's better to watch the 3-minute video than to explain them here):
https://youtu.be/RNbdPkSppc4
### Steps to reproduce the bug
-
### Expected behavior
-
### Environment info
x2 RTX A4000 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6622/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6622/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6621 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6621/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6621/comments | https://api.github.com/repos/huggingface/datasets/issues/6621/events | https://github.com/huggingface/datasets/issues/6621 | 2,103,675,294 | I_kwDODunzps59Y4me | 6,621 | deleted | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6621/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6621/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6620 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6620/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6620/comments | https://api.github.com/repos/huggingface/datasets/issues/6620/events | https://github.com/huggingface/datasets/issues/6620 | 2,103,110,536 | I_kwDODunzps59WuuI | 6,620 | wiki_dpr.py error (ID mismatch between lines {id} and vector {vec_id} | {
"avatar_url": "https://avatars.githubusercontent.com/u/101498700?v=4",
"events_url": "https://api.github.com/users/kiehls90/events{/privacy}",
"followers_url": "https://api.github.com/users/kiehls90/followers",
"following_url": "https://api.github.com/users/kiehls90/following{/other_user}",
"gists_url": "https://api.github.com/users/kiehls90/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kiehls90",
"id": 101498700,
"login": "kiehls90",
"node_id": "U_kgDOBgy_TA",
"organizations_url": "https://api.github.com/users/kiehls90/orgs",
"received_events_url": "https://api.github.com/users/kiehls90/received_events",
"repos_url": "https://api.github.com/users/kiehls90/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kiehls90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiehls90/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kiehls90",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Thanks for reporting, @kiehls90.\r\n\r\nAs this seems an issue with the specific \"wiki_dpr\" dataset, I am transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/wiki_dpr/discussions/13"
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I'm trying to run a rag example, and the dataset is wiki_dpr.
The wiki_dpr download and extraction completed successfully.
However, at the "generating train split" stage, an error from wiki_dpr.py keeps popping up.
Specifically, in `_generate_examples`:
1. The following error occurs in the line **id, text, title = line.strip().split("\t")**
ValueError: not enough values to unpack (expected 3, got 2)
-> This part handles exceptions so that even if an error occurs, it passes.
2. **ID mismatch between lines {id} and vector {vec_id}**
This error seems to occur at the line `assert int(id) == int(vec_id)`.
After I handled the exception from the split error, generating the train split progressed to about 80%, but an ID mismatch error occurred at roughly the 16200000th vector id.
Debugging is even more difficult because it takes a long time to download and split wiki_dpr. I need help. Thank you in advance!
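For illustration, here is a minimal standalone simulation of the failure mode described above (the lines and ids below are made up, not actual wiki_dpr data): silently skipping a malformed TSV line makes the passage ids drift out of sync with the vector ids.

```python
# Hypothetical passages file: one line is malformed (only 2 tab-separated fields).
lines = ["1\tsome text\tTitle A", "2\tmalformed line", "3\tmore text\tTitle C"]
vec_ids = [1, 2, 3]  # ids the embedding vectors are stored under, in order

parsed_ids = []
for line in lines:
    try:
        id_, text, title = line.strip().split("\t")
    except ValueError:
        continue  # skipping keeps the loop going, but shifts every later pairing
    parsed_ids.append(id_)

for id_, vec_id in zip(parsed_ids, vec_ids):
    # Fails at the second pair: passage id 3 is matched against vector id 2.
    assert int(id_) == int(vec_id), f"ID mismatch between lines {id_} and vector {vec_id}"
```
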
### Steps to reproduce the bug
Occurs in the generating train split step when running the rag example in the transformers repository.
Specifically, it is an error in wiki_dpr.py.
### Expected behavior
.
### Environment info
python 3.8 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6620/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6620/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6618/comments | https://api.github.com/repos/huggingface/datasets/issues/6618/events | https://github.com/huggingface/datasets/issues/6618 | 2,101,868,198 | I_kwDODunzps59R_am | 6,618 | While importing load_dataset from datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/77973415?v=4",
"events_url": "https://api.github.com/users/suprith-hub/events{/privacy}",
"followers_url": "https://api.github.com/users/suprith-hub/followers",
"following_url": "https://api.github.com/users/suprith-hub/following{/other_user}",
"gists_url": "https://api.github.com/users/suprith-hub/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/suprith-hub",
"id": 77973415,
"login": "suprith-hub",
"node_id": "MDQ6VXNlcjc3OTczNDE1",
"organizations_url": "https://api.github.com/users/suprith-hub/orgs",
"received_events_url": "https://api.github.com/users/suprith-hub/received_events",
"repos_url": "https://api.github.com/users/suprith-hub/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/suprith-hub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suprith-hub/subscriptions",
"type": "User",
"url": "https://api.github.com/users/suprith-hub",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! Can you please share the error's stack trace so we can see where it comes from?",
"We cannot reproduce the issue and we do not have enough information: environment info (need to run `datasets-cli env`), stack trace,...\r\n\r\nI am closing the issue. Feel free to reopen it (with additional information) if the problem persists.",
"Yeah 👍\r\n\r\nOn Tue, 6 Feb 2024 at 2:56 PM, Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> We cannot reproduce the issue and we do not have enough information:\r\n> environment info (need to run datasets-cli env), stack trace,...\r\n>\r\n> I am closing the issue. Feel free to reopen it (with additional\r\n> information) if the problem persists.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6618#issuecomment-1929102334>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ASS4PJ3XOIIWISPY3VX3QRTYSHZK5AVCNFSM6AAAAABCL3BT4SVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTSMRZGEYDEMZTGQ>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Please downgrade the version of urllib3 if you have the same issue:\r\n\r\n!pip install urllib3==1.25.11",
"> Please downgrade the version of urllib3 if you have the same issue:\r\n> \r\n> !pip install urllib3==1.25.11\r\n\r\nThis worked for me. Thanks.\r\n\r\nI use python 3.11 and datasets==2.20.0. Downgrading urllib3 to 1.25.11 worked in my case."
] | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' this is the error i received
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
No errors
### Environment info
python 3.11.5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6618/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6618/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6615 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6615/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6615/comments | https://api.github.com/repos/huggingface/datasets/issues/6615/events | https://github.com/huggingface/datasets/issues/6615 | 2,098,951,409 | I_kwDODunzps59G3Tx | 6,615 | ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/22179777?v=4",
"events_url": "https://api.github.com/users/ftkeys/events{/privacy}",
"followers_url": "https://api.github.com/users/ftkeys/followers",
"following_url": "https://api.github.com/users/ftkeys/following{/other_user}",
"gists_url": "https://api.github.com/users/ftkeys/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ftkeys",
"id": 22179777,
"login": "ftkeys",
"node_id": "MDQ6VXNlcjIyMTc5Nzc3",
"organizations_url": "https://api.github.com/users/ftkeys/orgs",
"received_events_url": "https://api.github.com/users/ftkeys/received_events",
"repos_url": "https://api.github.com/users/ftkeys/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ftkeys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ftkeys/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ftkeys",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Sorry I posted in the wrong repo, please delete.. thanks!"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/22179777?v=4",
"events_url": "https://api.github.com/users/ftkeys/events{/privacy}",
"followers_url": "https://api.github.com/users/ftkeys/followers",
"following_url": "https://api.github.com/users/ftkeys/following{/other_user}",
"gists_url": "https://api.github.com/users/ftkeys/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ftkeys",
"id": 22179777,
"login": "ftkeys",
"node_id": "MDQ6VXNlcjIyMTc5Nzc3",
"organizations_url": "https://api.github.com/users/ftkeys/orgs",
"received_events_url": "https://api.github.com/users/ftkeys/received_events",
"repos_url": "https://api.github.com/users/ftkeys/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ftkeys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ftkeys/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ftkeys",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6615/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6615/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6614 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6614/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6614/comments | https://api.github.com/repos/huggingface/datasets/issues/6614/events | https://github.com/huggingface/datasets/issues/6614 | 2,098,884,520 | I_kwDODunzps59Gm-o | 6,614 | `datasets/downloads` cleanup tool | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,706 | null | CONTRIBUTOR | null | ### Feature request
Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files
e.g. I discovered millions of files under the `datasets/downloads` cache, and I had to do:
```
sudo find /data/huggingface/datasets/downloads -type f -mtime +3 -exec rm {} \+
sudo find /data/huggingface/datasets/downloads -type d -empty -delete
```
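For reference, a Python equivalent of the same cleanup, in case shell `find` is not available (the path and the "older than 3 days" threshold are taken from the commands above; adjust them to your setup):

```python
import time
from pathlib import Path

downloads = Path("/data/huggingface/datasets/downloads")  # adjust to your HF cache location
cutoff = time.time() - 3 * 24 * 3600  # same "older than 3 days" threshold as above

# delete stale files
for p in downloads.rglob("*"):
    if p.is_file() and p.stat().st_mtime < cutoff:
        p.unlink()

# remove now-empty directories, deepest paths first
for d in sorted((d for d in downloads.rglob("*") if d.is_dir()), reverse=True):
    if not any(d.iterdir()):
        d.rmdir()
```
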
Could the cleanup be integrated into `huggingface-cli`, or a different tool be provided, to keep the folders tidy and avoid consuming inodes and space?
e.g. there were tens of thousands of `.lock` files - I don't know why they never get removed - lock files should be temporary for the duration of the operation requiring the lock and not remain after the operation has finished, IMHO.
Also I think one should be able to nuke `datasets/downloads` without hurting the cache, but there are some datasets that rely on files extracted under this dir - or at least they did in the past - which is very difficult to manage since one has no idea what is safe to delete and what is not.
Thank you
@Wauplin (requested to be tagged) | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6614/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6614/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6612/comments | https://api.github.com/repos/huggingface/datasets/issues/6612/events | https://github.com/huggingface/datasets/issues/6612 | 2,098,078,210 | I_kwDODunzps59DiIC | 6,612 | cnn_dailymail repeats itself | {
"avatar_url": "https://avatars.githubusercontent.com/u/8274752?v=4",
"events_url": "https://api.github.com/users/KeremZaman/events{/privacy}",
"followers_url": "https://api.github.com/users/KeremZaman/followers",
"following_url": "https://api.github.com/users/KeremZaman/following{/other_user}",
"gists_url": "https://api.github.com/users/KeremZaman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KeremZaman",
"id": 8274752,
"login": "KeremZaman",
"node_id": "MDQ6VXNlcjgyNzQ3NTI=",
"organizations_url": "https://api.github.com/users/KeremZaman/orgs",
"received_events_url": "https://api.github.com/users/KeremZaman/received_events",
"repos_url": "https://api.github.com/users/KeremZaman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KeremZaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KeremZaman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KeremZaman",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! We recently updated `cnn_dailymail` and now `datasets>=2.14` is needed to load it.\r\n\r\nYou can update `datasets` with\r\n\r\n```\r\npip install -U datasets\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be.
Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check the length of the train split, it says 861339.
Also I checked data:
```
>>> ds['train']['highlights'][0]
"Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday . Young actor says he has no plans to fritter his cash away . Radcliffe's earnings from first five Potter films have been held in trust fund ."````
>>> ds['train']['highlights'][0]
"Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday . Young actor says he has no plans to fritter his cash away . Radcliffe's earnings from first five Potter films have been held in trust fund ."````
>>> ds['train']['highlights'][287113]
"Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday .\nYoung actor says he has no plans to fritter his cash away .\nRadcliffe's earnings from first five Potter films have been held in trust fund ."````
>>> ds['train']['highlights'][574226]
"Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday .\nYoung actor says he has no plans to fritter his cash away .\nRadcliffe's earnings from first five Potter films have been held in trust fund ."
```
The dataset seems to have been updated 6 days ago to convert it to Parquet. Probably there is some issue with backward compatibility.
### Steps to reproduce the bug
1.
```
from datasets import load_dataset
ds = load_dataset('cnn_dailymail', '3.0.0')
len(ds['train'])
```
### Expected behavior
It should not repeat itself.
### Environment info
datasets==2.13.2
Python==3.7.13 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6612/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6612/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6611/comments | https://api.github.com/repos/huggingface/datasets/issues/6611/events | https://github.com/huggingface/datasets/issues/6611 | 2,096,004,858 | I_kwDODunzps587n76 | 6,611 | `load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError` | {
"avatar_url": "https://avatars.githubusercontent.com/u/15320635?v=4",
"events_url": "https://api.github.com/users/zotroneneis/events{/privacy}",
"followers_url": "https://api.github.com/users/zotroneneis/followers",
"following_url": "https://api.github.com/users/zotroneneis/following{/other_user}",
"gists_url": "https://api.github.com/users/zotroneneis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zotroneneis",
"id": 15320635,
"login": "zotroneneis",
"node_id": "MDQ6VXNlcjE1MzIwNjM1",
"organizations_url": "https://api.github.com/users/zotroneneis/orgs",
"received_events_url": "https://api.github.com/users/zotroneneis/received_events",
"repos_url": "https://api.github.com/users/zotroneneis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zotroneneis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zotroneneis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zotroneneis",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,706 | null | NONE | null | ### Describe the bug
When loading a large dataset (>1000GB) from S3 I run into the following error:
```
Traceback (most recent call last):
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper
return await func(*args, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/aiobotocore/client.py", line 383, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the GetObject operation: The difference between the request time and the current time is too large.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/alp/phoneme-classification.monorepo/aws_sagemaker/data_processing/inspect_final_dataset.py", line 13, in <module>
dataset = load_from_disk("s3://speech-recognition-processed-data/whisper/de/train_data/", storage_options=storage_options)
File "/home/alp/.local/lib/python3.10/site-packages/datasets/load.py", line 1902, in load_from_disk
return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/home/alp/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1686, in load_from_disk
fs.download(src_dataset_path, dest_dataset_path.as_posix(), recursive=True)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1480, in download
return self.get(rpath, lpath, recursive=recursive, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 121, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 106, in sync
raise return_result
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 61, in _runner
result[0] = await coro
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 604, in _get
return await _run_coros_in_chunks(
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 257, in _run_coros_in_chunks
await asyncio.gather(*chunk, return_exceptions=return_exceptions),
File "/usr/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
return await fut
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1193, in _get_file
body, content_length = await _open_file(range=0)
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1184, in _open_file
resp = await self._call_s3(
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 348, in _call_s3
return await _error_wrapper(
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 140, in _error_wrapper
raise err
PermissionError: The difference between the request time and the current time is too large.
```
The usual cause of this error is that the time on my local machine is out of sync with the current time. However, this is not the case here. I checked the time and even reset it, with no success. See resources here:
- https://stackoverflow.com/questions/4770635/s3-error-the-difference-between-the-request-time-and-the-current-time-is-too-la
- https://stackoverflow.com/questions/25964491/aws-s3-upload-fails-requesttimetooskewed
The error does not appear when loading a smaller dataset (e.g. our test set) from the same s3 path.
### Steps to reproduce the bug
1. Create large dataset
2. Try loading it from s3 using:
```
dataset = load_from_disk("s3://...", storage_options=storage_options)
```
### Expected behavior
Load dataset without running into this error.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.3
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6611/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6610/comments | https://api.github.com/repos/huggingface/datasets/issues/6610/events | https://github.com/huggingface/datasets/issues/6610 | 2,095,643,711 | I_kwDODunzps586Pw_ | 6,610 | cast_column to Sequence(subfeatures_dict) has err | {
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neiblegy",
"id": 16574677,
"login": "neiblegy",
"node_id": "MDQ6VXNlcjE2NTc0Njc3",
"organizations_url": "https://api.github.com/users/neiblegy/orgs",
"received_events_url": "https://api.github.com/users/neiblegy/received_events",
"repos_url": "https://api.github.com/users/neiblegy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neiblegy",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:\r\n```python\r\nais_dataset = ais_dataset.cast_column(\"my_labeled_bbox\", {\"bbox\": Sequence(Value(dtype=\"int64\")), \"label\": ClassLabel(names=[\"cat\", \"dog\"])})\r\n```",
"> Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:\r\n> \r\n> ```python\r\n> ais_dataset = ais_dataset.cast_column(\"my_labeled_bbox\", {\"bbox\": Sequence(Value(dtype=\"int64\")), \"label\": ClassLabel(names=[\"cat\", \"dog\"])})\r\n> ```\r\n\r\nthanks"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = ais_dataset["train"]
def add_class(example):
example["my_labeled_bbox"] = {"bbox": [100,100,200,200], "label": "cat"}
return example
ais_dataset = ais_dataset.map(add_class, batched=False, num_proc=32)
ais_dataset = ais_dataset.cast_column("my_labeled_bbox", Sequence(
{
"bbox": Sequence(Value(dtype="int64")),
"label": ClassLabel(names=["cat", "dog"])
}))
print(ais_dataset[0])
```
However, executing this code results in an error:
```
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
int64
to
Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)
```
Upon examining the source code in datasets/table.py at line 2035:
```
if isinstance(feature, Sequence) and isinstance(feature.feature, dict):
feature = {
name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items()
}
```
I noticed that if `subfeature` is itself a `Sequence`, this code produces `Sequence(Sequence(...), ...)` and `Sequence(ClassLabel(...), ...)`, which appears to be the source of the error.
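For reference, a cast that avoids this double wrapping passes a plain dict of sub-features instead of `Sequence({...})`, matching the per-example dict produced by `add_class` above. A minimal sketch, continuing the demo code:

```python
from datasets.features import Sequence, Value, ClassLabel

# Cast to a dict of features rather than Sequence({...}), so the sub-features
# are not expanded into Sequence(Sequence(...)) / Sequence(ClassLabel(...)).
ais_dataset = ais_dataset.cast_column(
    "my_labeled_bbox",
    {
        "bbox": Sequence(Value(dtype="int64")),
        "label": ClassLabel(names=["cat", "dog"]),
    },
)
```
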
### Steps to reproduce the bug
run my demo code
### Expected behavior
no exception
### Environment info
python 3.9
datasets: 2.16.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neiblegy",
"id": 16574677,
"login": "neiblegy",
"node_id": "MDQ6VXNlcjE2NTc0Njc3",
"organizations_url": "https://api.github.com/users/neiblegy/orgs",
"received_events_url": "https://api.github.com/users/neiblegy/received_events",
"repos_url": "https://api.github.com/users/neiblegy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neiblegy",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6610/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6610/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6609/comments | https://api.github.com/repos/huggingface/datasets/issues/6609/events | https://github.com/huggingface/datasets/issues/6609 | 2,095,085,650 | I_kwDODunzps584HhS | 6,609 | Wrong path for cache directory in offline mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/42117435?v=4",
"events_url": "https://api.github.com/users/je-santos/events{/privacy}",
"followers_url": "https://api.github.com/users/je-santos/followers",
"following_url": "https://api.github.com/users/je-santos/following{/other_user}",
"gists_url": "https://api.github.com/users/je-santos/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/je-santos",
"id": 42117435,
"login": "je-santos",
"node_id": "MDQ6VXNlcjQyMTE3NDM1",
"organizations_url": "https://api.github.com/users/je-santos/orgs",
"received_events_url": "https://api.github.com/users/je-santos/received_events",
"repos_url": "https://api.github.com/users/je-santos/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/je-santos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/je-santos/subscriptions",
"type": "User",
"url": "https://api.github.com/users/je-santos",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null | [
"+1",
"same error in 2.16.1",
"@kongjiellx any luck with the issue?",
"I opened https://github.com/huggingface/datasets/pull/6632 to fix this issue. Once it's merged we'll do a new release of `datasets`",
"Thanks @lhoestq !"
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Dear huggingfacers,
I'm trying to use a subset of the-stack dataset. When I run the command the first time
```
dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )
```
It downloads the files and caches them normally.
Nevertheless, my compute nodes are not online (`HF_DATASETS_OFFLINE=1`). Whenever I try to run the command again, the library passes the wrong cache path:
`Cache directory for the-stack doesn't exist at /Users/user/.cache/huggingface/datasets/bigcode___the-stack/default-data_dir=data%2Ffortran-data_dir=data%2Ffortran`
when the right path is:
`'/Users/user/.cache/huggingface/datasets/bigcode___the-stack/default-data_dir=data\%2Ffortran`
Not sure why those redundancies are included in the path. If I try adding the correct path through the cache_dir argument, it throws an error:
ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'bigcode/the-stack': Offline mode is enabled.
Your help with this issue is greatly appreciated. Thanks a lot for the great work.
### Steps to reproduce the bug
1:
`dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )`
2:
`HF_DATASETS_OFFLINE=1`
3:
`dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )`
### Expected behavior
being able to use the cached data
### Environment info
several different systems | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6609/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6609/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6605/comments | https://api.github.com/repos/huggingface/datasets/issues/6605/events | https://github.com/huggingface/datasets/issues/6605 | 2,090,188,376 | I_kwDODunzps58lb5Y | 6,605 | ELI5 no longer available, but referenced in example code | {
"avatar_url": "https://avatars.githubusercontent.com/u/81480344?v=4",
"events_url": "https://api.github.com/users/drdsgvo/events{/privacy}",
"followers_url": "https://api.github.com/users/drdsgvo/followers",
"following_url": "https://api.github.com/users/drdsgvo/following{/other_user}",
"gists_url": "https://api.github.com/users/drdsgvo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/drdsgvo",
"id": 81480344,
"login": "drdsgvo",
"node_id": "MDQ6VXNlcjgxNDgwMzQ0",
"organizations_url": "https://api.github.com/users/drdsgvo/orgs",
"received_events_url": "https://api.github.com/users/drdsgvo/received_events",
"repos_url": "https://api.github.com/users/drdsgvo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/drdsgvo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drdsgvo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/drdsgvo",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Addressed in https://github.com/huggingface/transformers/pull/28715."
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | Here, an example code is given:
https://huggingface.co/docs/transformers/tasks/language_modeling
This code + article references the ELI5 dataset.
ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5
"Defunct: Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data.
Reddit recently [changed the terms of access](https://www.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/) to its API, making the source data for this dataset unavailable.
"
Please change the example code to use a different dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6605/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6605/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6604/comments | https://api.github.com/repos/huggingface/datasets/issues/6604/events | https://github.com/huggingface/datasets/issues/6604 | 2,089,713,945 | I_kwDODunzps58joEZ | 6,604 | Transform fingerprint collisions due to setting fixed random seed | {
"avatar_url": "https://avatars.githubusercontent.com/u/6687910?v=4",
"events_url": "https://api.github.com/users/normster/events{/privacy}",
"followers_url": "https://api.github.com/users/normster/followers",
"following_url": "https://api.github.com/users/normster/following{/other_user}",
"gists_url": "https://api.github.com/users/normster/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/normster",
"id": 6687910,
"login": "normster",
"node_id": "MDQ6VXNlcjY2ODc5MTA=",
"organizations_url": "https://api.github.com/users/normster/orgs",
"received_events_url": "https://api.github.com/users/normster/received_events",
"repos_url": "https://api.github.com/users/normster/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/normster/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/normster/subscriptions",
"type": "User",
"url": "https://api.github.com/users/normster",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I've opened a PR with a fix.",
"I don't think the PR fixes the root cause, since it still relies on the `random` library which will often have its seed fixed. I think the builtin `uuid.uuid4()` is a better choice: https://docs.python.org/3/library/uuid.html"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random seed, which is common practice: https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/sft/config_full.yaml#L45.
This results in fingerprint collisions, which lead to silently loading incorrect cache files corresponding to completely different datasets.
### Steps to reproduce the bug
n/a
### Expected behavior
Use `uuid` v4 instead of `random.getrandbits()`
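As a minimal sketch of why the choice matters (this is not the actual `datasets` fingerprinting code, just an illustration), `random.getrandbits()` repeats once the global seed is fixed, while `uuid.uuid4()` draws from `os.urandom` and does not:
```
import random
import uuid

def suffix_from_random() -> str:
    # Mirrors the problematic pattern: random bits from the (possibly seeded) global RNG.
    return format(random.getrandbits(64), "x")

def suffix_from_uuid() -> str:
    # uuid4 is based on os.urandom, so a fixed `random` seed has no effect on it.
    return uuid.uuid4().hex

random.seed(42)
a = suffix_from_random()
random.seed(42)  # e.g. a second run of a training script that fixes the seed
b = suffix_from_random()
print(a == b)  # True -> two unrelated transforms can end up with the same fingerprint

random.seed(42)
c = suffix_from_uuid()
random.seed(42)
d = suffix_from_uuid()
print(c == d)  # False -> no collision even with a fixed seed
```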
### Environment info
`datasets` main branch | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6604/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6604/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6603 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6603/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6603/comments | https://api.github.com/repos/huggingface/datasets/issues/6603/events | https://github.com/huggingface/datasets/issues/6603 | 2,089,230,766 | I_kwDODunzps58hyGu | 6,603 | datasets map `cache_file_name` does not work | {
"avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4",
"events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/ChenchaoZhao/followers",
"following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ChenchaoZhao",
"id": 35147961,
"login": "ChenchaoZhao",
"node_id": "MDQ6VXNlcjM1MTQ3OTYx",
"organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs",
"received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events",
"repos_url": "https://api.github.com/users/ChenchaoZhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ChenchaoZhao",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Unfortunately, I'm unable to reproduce this error. Can you share the reproducer?",
"```\r\nds = datasets.Dataset.from_dict(dict(a=[i for i in range(100)]))\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-fn\") # this worked\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-folder/filename\") # this failed\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-folder/\") # this failed\r\n\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/tmp/whatever-folder/tmp1_izxvoo'\r\n```\r\n\r\nIt will fail if the filename parents do not exists. If we have `os.makedirs(\"/tmp/whatever-folder\")`, then it worked.\r\n\r\nMaybe add the `mkdir -p` into the map function?"
] | 1970-01-01T00:00:00.000001 | 1,706 | null | NONE | null | ### Describe the bug
In the documentation, the `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but passing one doesn't work.
### Steps to reproduce the bug
1. pick a dataset
2. write a map function
3. do `ds.map(..., cache_file_name='some_filename')`
4. it crashes
### Expected behavior
Either it should generate a new cache file at the filename you specified, or it should clearly tell you that the path you specified (e.g. its parent directory) does not exist.
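For reference, a minimal sketch of the workaround noted in the comments (creating the cache file's parent directory before calling `map`); the paths and columns here are illustrative only:
```
import os
import datasets

ds = datasets.Dataset.from_dict({"a": list(range(100))})

cache_file = "/tmp/whatever-folder/mapped.arrow"
os.makedirs(os.path.dirname(cache_file), exist_ok=True)  # avoids the FileNotFoundError

ds = ds.map(lambda item: {"b": item["a"] * 2}, cache_file_name=cache_file)
```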
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.12.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6603/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6603/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6602 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6602/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6602/comments | https://api.github.com/repos/huggingface/datasets/issues/6602/events | https://github.com/huggingface/datasets/issues/6602 | 2,089,217,483 | I_kwDODunzps58hu3L | 6,602 | Index error when data is large | {
"avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4",
"events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/ChenchaoZhao/followers",
"following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ChenchaoZhao",
"id": 35147961,
"login": "ChenchaoZhao",
"node_id": "MDQ6VXNlcjM1MTQ3OTYx",
"organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs",
"received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events",
"repos_url": "https://api.github.com/users/ChenchaoZhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ChenchaoZhao",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,705 | null | NONE | null | ### Describe the bug
At the `save_to_disk` step, `max_shard_size` defaults to `500MB`. However, one row of the dataset might be larger than `500MB`, in which case saving throws an index error. Without looking at the source code, the bug appears to be due to a wrong calculation of the number of shards, which I think is
`total_size / min(max_shard_size, row_size)` which should be `total_size / max(max_shard_size, row_size)`
The fix is setting a larger `max_shard_size`
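A sketch of the reported workaround, with illustrative sizes (whether the small-shard case actually raises the `IndexError` depends on your row size and `datasets` version):
```
import numpy as np
from datasets import Dataset

# 10 rows of roughly 8MB each (one large dense tensor per row).
ds = Dataset.from_dict({"tensor": [np.zeros(1_000_000, dtype=np.float64) for _ in range(10)]})

# ds.save_to_disk("out_small_shards", max_shard_size="1MB")  # shard size < row size -> IndexError
ds.save_to_disk("out", max_shard_size="500MB")               # shard size >= row size -> works
```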
### Steps to reproduce the bug
1. create a dataset with large dense tensors per row
2. set a small `max_shard_size` say 1MB
3. `save_to_disk`
### Expected behavior
```
raise IndexError(f"Index {index} out of range for dataset of size {size}.")
IndexError: Index 10 out of range for dataset of size 10.
```
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.12.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6602/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6602/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6600/comments | https://api.github.com/repos/huggingface/datasets/issues/6600/events | https://github.com/huggingface/datasets/issues/6600 | 2,088,446,385 | I_kwDODunzps58eymx | 6,600 | Loading CSV exported dataset has unexpected format | {
"avatar_url": "https://avatars.githubusercontent.com/u/59572247?v=4",
"events_url": "https://api.github.com/users/OrianeN/events{/privacy}",
"followers_url": "https://api.github.com/users/OrianeN/followers",
"following_url": "https://api.github.com/users/OrianeN/following{/other_user}",
"gists_url": "https://api.github.com/users/OrianeN/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/OrianeN",
"id": 59572247,
"login": "OrianeN",
"node_id": "MDQ6VXNlcjU5NTcyMjQ3",
"organizations_url": "https://api.github.com/users/OrianeN/orgs",
"received_events_url": "https://api.github.com/users/OrianeN/received_events",
"repos_url": "https://api.github.com/users/OrianeN/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/OrianeN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OrianeN/subscriptions",
"type": "User",
"url": "https://api.github.com/users/OrianeN",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi! Parquet is the only format that supports complex/nested features such as `Translation`. So, this should work:\r\n```python\r\ntest_dataset = load_dataset(\"opus100\", name=\"en-fr\", split=\"test\")\r\n\r\n# Save with .to_parquet()\r\ntest_parquet_path = \"try_testset_save.parquet\"\r\ntest_dataset.to_parquet(test_parquet_path)\r\n\r\n# Load dataset from the Parquet\r\nloaded_dataset = load_dataset(\"parquet\", data_files=test_parquet_path)\r\nprint(test_dataset_fromfile[0][\"translation\"])\r\nprint(test_dataset_fromfile[0][\"translation\"][\"en\"])\r\n```",
"Indeed this works great, thank you !"
] | 1970-01-01T00:00:00.000001 | 1,706 | null | NONE | null | ### Describe the bug
I wanted to be able to save an HF dataset of translations and load it again in another script, but I'm a bit confused by the documentation and the result I've got, so I'm opening this issue to ask whether this behavior is expected.
### Steps to reproduce the bug
The documentation I've mainly consulted is https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/loading_methods#datasets.load_dataset and https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset (where I've found `.to_csv()`)
```python
# Load a dataset of translations
test_dataset = load_dataset("opus100", name="en-fr", split="test")
# Save with .to_csv()
test_csv_path = "try_testset_save.csv"
test_dataset.to_csv(test_csv_path)
# Load dataset from the CSV
loaded_dataset = load_dataset("csv", data_files=test_csv_path)
print(test_dataset_fromfile[0]["translation"])
print(test_dataset_fromfile[0]["translation"]["en"])
```
```
Creating CSV from Arrow format: 100%
2/2 [00:00<00:00, 47.99ba/s]
Downloading data files: 100%
1/1 [00:00<00:00, 65.33it/s]
Extracting data files: 100%
1/1 [00:00<00:00, 42.10it/s]
Generating train split:
2000/0 [00:00<00:00, 47486.09 examples/s]
{'en': "She wasn't going to vaccinate her kid against polio, no way.", 'fr': 'Elle ne vaccinerait pas son enfant contre la polio. Pas question.'}
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[29], line 11
9 loaded_dataset = load_dataset("csv", data_files=test_csv_path)
10 print(test_dataset_fromfile[0]["translation"])
---> 11 print(test_dataset_fromfile[0]["translation"]["en"])
TypeError: string indices must be integers, not 'str'
```
### Expected behavior
Each translation was saved as a stringified dict like `"{'en': ""She wasn't going to vaccinate her kid against polio, no way."", 'fr': 'Elle ne vaccinerait pas son enfant contre la polio. Pas question.'}"`, whereas I would have expected 2 columns (the 1st with English segments and the 2nd with French segments). I was also expecting `load_dataset` to infer the feature type automatically, as I haven't seen anything about it in the documentation.
Do you have an example of how to effectively save and load datasets of translations?
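For reference, a sketch of one possible way to keep CSV usable by flattening the nested `translation` feature into two plain string columns before exporting (the comments suggest Parquet as the more direct fix; the file name here is illustrative):
```
from datasets import load_dataset

test_dataset = load_dataset("opus100", name="en-fr", split="test")

# Flatten {"translation": {"en": ..., "fr": ...}} into two string columns.
flat = test_dataset.map(
    lambda ex: {"en": ex["translation"]["en"], "fr": ex["translation"]["fr"]},
    remove_columns=["translation"],
)
flat.to_csv("try_testset_save_flat.csv")

reloaded = load_dataset("csv", data_files="try_testset_save_flat.csv")
print(reloaded["train"][0]["en"])
```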
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.11.5
- `huggingface_hub` version: 0.16.4
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6600/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6599 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6599/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6599/comments | https://api.github.com/repos/huggingface/datasets/issues/6599/events | https://github.com/huggingface/datasets/issues/6599 | 2,086,684,664 | I_kwDODunzps58YEf4 | 6,599 | Easy way to segment into 30s snippets given an m4a file and a vtt file | {
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RonanKMcGovern",
"id": 78278410,
"login": "RonanKMcGovern",
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RonanKMcGovern",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi! Non-generic data processing is out of this library's scope, so it's downstream libraries/users' responsibility to implement such logic.",
"That's fair. Thanks"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
Uploading datasets is straightforward thanks to the ability to push Audio to the Hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if that is not possible already).
### Motivation
It's easy to create a vtt file from an audio file. If there could be auto-segmenting, this would make the creation of datasets much faster.
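A rough sketch of the kind of segmentation meant here, assuming `pydub` and `webvtt-py` as helper libraries (neither is part of `datasets`, and the file names are made up); the VTT cue timestamps drive the audio slicing, and the resulting clips could then be merged greedily into ~30s snippets:
```
# pip install pydub webvtt-py  (assumed helpers; pydub needs ffmpeg for m4a)
from pydub import AudioSegment
import webvtt

audio = AudioSegment.from_file("talk.m4a", format="m4a")

def to_ms(ts: str) -> int:
    # "HH:MM:SS.mmm" -> milliseconds
    h, m, s = ts.split(":")
    return int((int(h) * 3600 + int(m) * 60 + float(s)) * 1000)

segments = []
for cue in webvtt.read("talk.vtt"):
    clip = audio[to_ms(cue.start):to_ms(cue.end)]  # slice the audio for this caption
    segments.append({"text": cue.text, "audio": clip})
```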
### Your contribution
I have made a custom script to do this but it's not all that clean - uses librosa and pydub. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6599/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6599/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6598/comments | https://api.github.com/repos/huggingface/datasets/issues/6598/events | https://github.com/huggingface/datasets/issues/6598 | 2,084,236,605 | I_kwDODunzps58Ou09 | 6,598 | Unexpected keyword argument 'hf' when downloading CSV dataset from S3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5592111?v=4",
"events_url": "https://api.github.com/users/dguenms/events{/privacy}",
"followers_url": "https://api.github.com/users/dguenms/followers",
"following_url": "https://api.github.com/users/dguenms/following{/other_user}",
"gists_url": "https://api.github.com/users/dguenms/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dguenms",
"id": 5592111,
"login": "dguenms",
"node_id": "MDQ6VXNlcjU1OTIxMTE=",
"organizations_url": "https://api.github.com/users/dguenms/orgs",
"received_events_url": "https://api.github.com/users/dguenms/received_events",
"repos_url": "https://api.github.com/users/dguenms/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dguenms/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dguenms/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dguenms",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I am facing similar issue while reading a csv file from s3. Wondering if somebody has found a workaround. ",
"same thing happened to other formats like parquet",
"I am facing similar issue while reading a parquet file from s3.\r\ni try with every version between 2.14 to 2.16.1 but it dosen't work ",
"Re-define the DownloadConfig might work:\r\n\r\n```\r\nclass ReviseDownloadConfig(DownloadConfig):\r\n def __post_init__(self, use_auth_token):\r\n if use_auth_token != \"deprecated\":\r\n warnings.warn(\r\n \"'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\\n\"\r\n f\"You can remove this warning by passing 'token={use_auth_token}' instead.\",\r\n FutureWarning,\r\n )\r\n self.token = use_auth_token\r\n\r\n def copy(self):\r\n return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})\r\n\r\ndownloadconfig = ReviseDownloadConfig()\r\n```\r\n",
"> Re-define the DownloadConfig might work:\r\n> \r\n> ```\r\n> class ReviseDownloadConfig(DownloadConfig):\r\n> def __post_init__(self, use_auth_token):\r\n> if use_auth_token != \"deprecated\":\r\n> warnings.warn(\r\n> \"'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\\n\"\r\n> f\"You can remove this warning by passing 'token={use_auth_token}' instead.\",\r\n> FutureWarning,\r\n> )\r\n> self.token = use_auth_token\r\n> ```\r\nThis seemed to work for me.\r\n",
"use pandas and then convert to `Dataset`",
"I am currently facing the same issue while using a custom loading script with files located in a remote S3 instance. I was using the `download_custom` functionality but now it is deprecated mentioning that I should use the native S3 loading, which is not working. \r\n\r\nAs stated before, the library forces the existence of a `hf` key in the `storage_options` variable, which is **not** accepted by `s3fs` : \r\n\r\n```python\r\n.../site-packages/s3fs/core.py\", line 516, in set_session\r\n self.session = aiobotocore.session.AioSession(**self.kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'hf'.\r\n````\r\n\r\nMeanwhile, if my `storage_options` var stays like:\r\n```python\r\n{'key': '...',\r\n 'secret': '...',\r\n 'client_kwargs': {'endpoint_url': '...'}}\r\n```\r\nit works alright. "
] | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I receive this error message when using `load_dataset` with the "csv" builder and `data_files="s3://..."`:
```
TypeError: Session.__init__() got an unexpected keyword argument 'hf'
```
I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-with-unexpected-keyword-argument-error-in
Full stacktrace:
```
.../site-packages/datasets/load.py:2549: in load_dataset
builder_instance.download_and_prepare(
.../site-packages/datasets/builder.py:1005: in download_and_prepare
self._download_and_prepare(
.../site-packages/datasets/builder.py:1078: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
.../site-packages/datasets/packaged_modules/csv/csv.py:147: in _split_generators
data_files = dl_manager.download_and_extract(self.config.data_files)
.../site-packages/datasets/download/download_manager.py:562: in download_and_extract
return self.extract(self.download(url_or_urls))
.../site-packages/datasets/download/download_manager.py:426: in download
downloaded_path_or_paths = map_nested(
.../site-packages/datasets/utils/py_utils.py:466: in map_nested
mapped = [
.../site-packages/datasets/utils/py_utils.py:467: in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
.../site-packages/datasets/utils/py_utils.py:387: in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
.../site-packages/datasets/utils/py_utils.py:387: in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
.../site-packages/datasets/utils/py_utils.py:370: in _single_map_nested
return function(data_struct)
.../site-packages/datasets/download/download_manager.py:451: in _download
out = cached_path(url_or_filename, download_config=download_config)
.../site-packages/datasets/utils/file_utils.py:188: in cached_path
output_path = get_from_cache(
...1/site-packages/datasets/utils/file_utils.py:511: in get_from_cache
response = fsspec_head(url, storage_options=storage_options)
.../site-packages/datasets/utils/file_utils.py:316: in fsspec_head
fs, _, paths = fsspec.get_fs_token_paths(url, storage_options=storage_options)
.../site-packages/fsspec/core.py:622: in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
.../site-packages/fsspec/registry.py:290: in filesystem
return cls(**storage_options)
.../site-packages/fsspec/spec.py:79: in __call__
obj = super().__call__(*args, **kwargs)
.../site-packages/s3fs/core.py:187: in __init__
self.s3 = self.connect()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <s3fs.core.S3FileSystem object at 0x1500a1310>, refresh = True
def connect(self, refresh=True):
"""
Establish S3 connection object.
Parameters
----------
refresh : bool
Whether to create new session/client, even if a previous one with
the same parameters already exists. If False (default), an
existing one will be used if possible
"""
if refresh is False:
# back compat: we store whole FS instance now
return self.s3
anon, key, secret, kwargs, ckwargs, token, ssl = (
self.anon, self.key, self.secret, self.kwargs,
self.client_kwargs, self.token, self.use_ssl)
if not self.passed_in_session:
> self.session = botocore.session.Session(**self.kwargs)
E TypeError: Session.__init__() got an unexpected keyword argument 'hf'
```
### Steps to reproduce the bug
1. Assuming a valid CSV file located at `s3://bucket/data.csv`
2. Run the below code:
```
storage_options = {
"key": "...",
"secret": "...",
"client_kwargs": {
"endpoint_url": "...",
}
}
load_dataset("csv", data_files="s3://bucket/data.csv", storage_options=storage_options)
```
Encountered in version `2.16.1` but also reproduced in `2.16.0` and `2.15.0`.
Note: I encountered this in a unit test using a `moto` mock for S3; however, since the error occurs before the session is instantiated, the mock should not be the issue.
### Expected behavior
No exception is raised, the boto3 session is created successfully, and the CSV file is downloaded successfully and returned as a dataset.
===
After some research I found that `DownloadConfig` has a `__post_init__` method that always forces this value to be set in its `storage_options`, even though in the case of an S3 location the storage options get passed on to the S3 `Session`, which does not expect this parameter. I assume this parameter is only needed when reading from the Hugging Face Hub and should not be set in this context.
Unfortunately there is nothing the user can do to work around it. Even if you manually do something like:
```
download_config = DownloadConfig()
del download_config.storage_options["hf"]
load_dataset("csv", data_files="s3://bucket/data.csv", download_config=download_config)
```
the library will still reinsert this parameter when `download_config = self.download_config.copy()` in line 418 of `download_manager.py` (`DownloadManager.download`).
Therefore `load_dataset` currently cannot be used to read a dataset in CSV format from an S3 location.
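For reference, a sketch of the interim workaround mentioned in the comments (read the CSV with pandas/s3fs and build the dataset from the resulting DataFrame), since pandas passes `storage_options` straight to `s3fs` and no `hf` key gets injected; the credentials are placeholders:
```
import pandas as pd
from datasets import Dataset

storage_options = {
    "key": "...",
    "secret": "...",
    "client_kwargs": {"endpoint_url": "..."},
}

df = pd.read_csv("s3://bucket/data.csv", storage_options=storage_options)
ds = Dataset.from_pandas(df)
```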
### Environment info
- `datasets` version: 2.16.1
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 9,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 9,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6598/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6597/comments | https://api.github.com/repos/huggingface/datasets/issues/6597/events | https://github.com/huggingface/datasets/issues/6597 | 2,083,708,521 | I_kwDODunzps58Mt5p | 6,597 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"It is caused by these code lines: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1688-L1694",
"Also note the information in the docstring: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1582-L1585\r\n\r\n> Also accepts `<dataset_name>`, which will default to the namespace of the logged-in user.\r\n\r\nThis behavior was \"reverted\" by the PR: \r\n- #6519\r\n\r\nWe have therefore contradictory requirements. We should decide:\r\n- whether to support passing dataset_namespace without user/org that defaults to the logged-in user (and not support canonical datasets)\r\n- or vice-versa, to support canonical datasets and not support passing only dataset_name\r\n\r\nAs canonical datasets are \"deprecated\" (and will eventually disappear), I would choose the first option. However, if so, the Space to convert datasets to Parquet will not work for canonical datasets: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet",
"IIUC, this could also be \"fixed\" by `create_repo(\"dataset_name\")` not defaulting to `create_repo(\"user/dataset_name\")` (when the user's token is available), which would be consistent with the rest of the `HfApi` ops used in the `push_to_hub` implementation. This is a (small) breaking change for `huggingface_hub`, but justified to make the API more consistent.",
"I tag @Wauplin to have his opinion as well.",
"Hmm, creating repo with implicit namespace (e.g. `create_repo(\"dataset_name\")`) is a convenient feature used in a lot of integrations. It is not consistent with other HfApi methods specifically because it is the method to create repos. Once the repo is created, the return value provides the explicit repo_id (`namespace/repo_name`) that has to be passed to every `HfApi` method. Otherwise, libraries/scripts would often need to do a `whoami` call to get the namespace before creating a repo.\r\n\r\n Another solution for https://github.com/huggingface/datasets/issues/6597#issuecomment-1893746690 could be that implicit namespace is allowed (same as today) except if the `repo_id` is in a hard-coded list of canonical datasets. This list can be maintained automatically and should be slowly decreasing. **Caveat:** as a normal user I wouldn't be able to implicitly push to `imagenet-1k` if I wanted to push to `Wauplin/imagenet-1k`. Shouldn't be too problematic, no? Worse case, would need to add a `whoami` call and allow implicit-canonical-name for non-HF users for instance (a bit too over-engineered IMO but doable). ",
"As canonical datasets are going to disappear in the following couple of months, I would not make any effort on their support.\r\n\r\nI propose reverting #6519, so that the behavior of `push_to_hub` is aligned with the one described in its dosctring: \"Also accepts `<dataset_name>`, which will default to the namespace of the logged-in user.\"\r\n\r\nI'm opening a PR."
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | MEMBER | null | While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_description="Convert dataset to Parquet.",
create_pr=True,
token=token,
)
```
creates the additional dataset `albertvillanova/caner`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6597/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6595/comments | https://api.github.com/repos/huggingface/datasets/issues/6595/events | https://github.com/huggingface/datasets/issues/6595 | 2,082,896,148 | I_kwDODunzps58JnkU | 6,595 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! I think the issue comes from the \"float16\" features that are not supported yet in Parquet\r\n\r\nFeel free to open an issue in `pyarrow` about this. In the meantime, I'd encourage you to use \"float32\" for your \"pooled_prompt_embeds\" and \"prompt_embeds\" features.\r\n\r\nYou can cast them to \"float32\" using\r\n\r\n```python\r\nfrom datasets import Value\r\n\r\nds = ds.cast_column(\"pooled_prompt_embeds\", Value(\"float32\"))\r\nds = ds.cast_column(\"prompt_embeds\", Value(\"float32\"))\r\n```",
"@lhoestq hm. Thank you very much.\r\n\r\nDo you think it won't have any impact on the training? That it won't break it or the quality won't degrade because of this?\r\n\r\nI need to use it for [SDXL training](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py)",
"Increasing the precision should not degrade training (it only increases the precision), but make sure that it doesn't break your pytorch code (e.g. if it expects a float16 instead of a float32 somewhere)",
"@lhoestq just fyi pyarrow 15.0.0 (just released) supports float16 as the underlying parquetcpp does as well now :)",
"Oh that's amazing ! (and great timing ^^)\r\n\r\n@kopyl can you try to update `pyarrow` and try again ?\r\n\r\nBtw @assignUser there seems to be some casting implementations missing with float16 in 15.0.0, e.g.\r\n\r\n```\r\nArrowNotImplementedError: Unsupported cast from int64 to halffloat using function cast_half_float\r\n```\r\n\r\n```\r\nArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float\r\n```",
"Ah you are right casting is not implemented yet, it's even mentioned in the docs. This pr references the relevant issues if you'd like to track them\nhttps://github.com/apache/arrow/pull/38494",
"Cool thank you :)",
"@lhoestq i just recently found out that it's supported in 15.0.0, but wanted to try it first before telling you...\r\n\r\nTrying this right now and it seemingly works (although i need to wait till the end to make sure there is nothing wrong). Will update you when it's finished.\r\n\r\n<img width=\"918\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/17604849/4821e215-e782-4736-8c76-d06187078175\">\r\n\r\nA couple of questions though:\r\n\r\n1. What does that missing casting implementation mean for my specific case and what does it mean in general?\r\n2. Do you know how to `push_to_hub` with multiple processes?",
"@lhoestq also it's strange that there was no error for a dataset with the same features, same data type, but smaller (much smaller).\r\n\r\nAltho i'm not sure about this, but chances are the dataset was loaded directly, not `load_from_disk`.... Maybe because of this.",
"> What does that missing casting implementation mean for my specific case and what does it mean in general?\r\n\r\nNothing for you, just that casting to float16 using `.cast_column(\"my_column_name\", Value(\"float16\"))` raises an error\r\n\r\n> Do you know how to push_to_hub with multiple processes?\r\n\r\nIt's not possible (yet ?). Mostly because we haven't implemented yet how to do parallel uploads to the Hub from `datasets`.\r\nThough if you want faster uploads you can already enable `hf_transfer` \r\n\r\n```\r\npip install hf_transfer\r\n```\r\n\r\nand setting `HF_HUB_ENABLE_HF_TRANSFER=1` as an environment variable\r\n\r\nsee https://huggingface.co/docs/huggingface_hub/guides/upload#tips-and-tricks-for-large-uploads",
"@lhoestq thank you very much.\r\n\r\nThat would be amazing, I need to create a feature request for this :)\r\n\r\nBy the way, in short, how does hf_transfer improves the upload speed under the hood?",
"@lhoestq i was just able to successfully upload without the dataset with the new pyarrow update and without increasing the precision :)",
"Awesome !\r\n\r\nRegarding hf_transfer: it's been optimized in rust ;)",
"@lhoestq wow, cool :)"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I'm aware of issue #5695.
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So I:
1. Map dataset
2. Save to disk
3. Try to upload:
```
import datasets
from datasets import load_from_disk
dataset = load_from_disk("ds")
datasets.config.DEFAULT_MAX_BATCH_SIZE = 1
dataset.push_to_hub("kopyl/ds", private=True, max_shard_size="500MB")
```
And I get this error:
`pyarrow.lib.ArrowNotImplementedError: Unhandled type for Arrow to Parquet schema conversion: halffloat`
Full traceback:
```
>>> dataset.push_to_hub("kopyl/3M_icons_monochrome_only_no_captioning_mapped-for-SDXL-2", private=True, max_shard_size="500MB")
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1451/1451 [00:00<00:00, 6827.40 examples/s]
Uploading the dataset shards: 0%| | 0/2099 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py", line 1705, in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 5208, in _push_parquet_shards_to_hub
shard.to_parquet(buffer)
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 4931, in to_parquet
return ParquetDatasetWriter(self, path_or_buf, batch_size=batch_size, **parquet_writer_kwargs).write()
File "/usr/local/lib/python3.10/dist-packages/datasets/io/parquet.py", line 129, in write
written = self._write(file_obj=self.path_or_buf, batch_size=batch_size, **self.parquet_writer_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/io/parquet.py", line 141, in _write
writer = pq.ParquetWriter(file_obj, schema=schema, **parquet_writer_kwargs)
File "/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py", line 1016, in __init__
self.writer = _parquet.ParquetWriter(
File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unhandled type for Arrow to Parquet schema conversion: halffloat
```
Smaller datasets saved and pushed the same way work wonders. Big ones do not.
I'm currently trying to upload the dataset like this:
`HfApi().upload_folder...`
But I'm not sure that in this case `load_dataset` would work well.
Setting `num_shards` does not help either:
```
dataset.push_to_hub("kopyl/3M_icons_monochrome_only_no_captioning_mapped-for-SDXL-2", private=True, num_shards={'train': 500})
```
Tried 3000, 500, 478, 100
Also, do you know if it's possible to push a dataset with multiple processes? It would take an eternity to push 1TB...
### Steps to reproduce the bug
Described above
### Expected behavior
Should be able to upload...
### Environment info
Total dataset size: 978G
Amount of `.arrow` files: 2101
Each `.arrow` file size: 477M (I know 477 megabytes * 2101 does not equal 978G, but I just checked the size of a couple of `.arrow` files; I don't know if some might have a different size)
Some files:
- "ds/train/state.json": https://pastebin.com/tJ3ZLGAg
- "ds/train/dataset_info.json": https://pastebin.com/JdXMQ5ih | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6595/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6594/comments | https://api.github.com/repos/huggingface/datasets/issues/6594/events | https://github.com/huggingface/datasets/issues/6594 | 2,082,748,275 | I_kwDODunzps58JDdz | 6,594 | IterableDataset sharding logic needs improvement | {
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"events_url": "https://api.github.com/users/rwightman/events{/privacy}",
"followers_url": "https://api.github.com/users/rwightman/followers",
"following_url": "https://api.github.com/users/rwightman/following{/other_user}",
"gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwightman",
"id": 5702664,
"login": "rwightman",
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"organizations_url": "https://api.github.com/users/rwightman/orgs",
"received_events_url": "https://api.github.com/users/rwightman/received_events",
"repos_url": "https://api.github.com/users/rwightman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwightman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwightman",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I do not know is it the same probelm as mine. I think the num_workers should a value of process number for one dataloader mapped to one card, or the total number of processes for all multiple cards. \r\nbut when I set the num_workers larger then the count of training split files, it will report num_workers > n_shards and kill all workers over. as a result, only n_shards workers left, where `n_shard = total files count / total cards ` \r\nIs that means the num_workers should be the process number on one card? ok, I changed the num_workers lower, to view it as the number of loader process for one card, but this time, the data loading is still very slow, it seems that only num_workers dataloader process are working, not the num_workers * n_cards as I thought. \r\nSo how to set a good parameter to make good dataloading? "
] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Describe the bug
The sharding of IterableDatasets across distributed and dataloader worker processes appears problematic, with significant performance traps and inconsistencies between distributed train processes and dataloader worker processes.
Splitting across num_workers (per train process loader processes) and world_size (distributed training processes) appears inconsistent.
* worker split: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/iterable_dataset.py#L1266-L1283
* distributed split: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/iterable_dataset.py#L1335-L1356
In the case of the distributed split, there is a modulus check that flips between two very different behaviours. Why is this different from splitting across the dataloader workers? For IterableDatasets the DataLoader worker processes are independent, so whether they are workers within one train process or spread across a distributed world, the shards should be distributed the same way: across `world_size * num_worker` independent workers in either case...
Further, the fallback case when the `n_shards % world_size == 0` check fails is a rather extreme change. I argue it is not desirable to do that implicitly; it should be an explicit option for specific scenarios (i.e. reliable validation). A train scenario would likely be much better handled with improved wrapping / stopping behaviour, to e.g. also fix #6437. Changing from stepping over shards to stepping over samples means that every single process reads ALL of the shards. This was never an intended default for sharded training: shards gain their performance advantage in large-scale distributed training by explicitly avoiding the need to have every process overlap in the data it reads. By default, only the data allocated to each process via its assigned shards should be read in each pass of the dataset.
Using a large-scale CLIP example, some of the larger datasets have 10-20k shards across 100+TB of data. Training with 1000 GPUs, we switch from reading 100 terabytes per epoch to 100 petabytes if, say, we drop one GPU node and go from 20k % 1000 == 0 to 20k % 992 != 0.
The 'step over samples' case might be worth the overhead in specific validation scenarios where guarantees of at-least/at-most-once sample visits matter more, and which do not make up a significant portion of train time or are done at smaller world sizes outside of train.
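A small illustration of the sharding scheme argued for here (pure illustration, not the `datasets` implementation): with N shards and `world_size * num_workers` independent workers, each shard is read by exactly one worker per epoch:
```
import random

def shards_for_worker(n_shards, world_size, rank, num_workers, worker_id, seed=0, epoch=0):
    shards = list(range(n_shards))
    random.Random(seed + epoch).shuffle(shards)      # identical shuffle on every process
    global_worker = rank * num_workers + worker_id   # unique id across the whole job
    total_workers = world_size * num_workers
    return shards[global_worker::total_workers]      # each shard lands on exactly one worker

# e.g. 20_000 shards, 1000 GPUs, 8 dataloader workers per GPU:
print(len(shards_for_worker(20_000, 1000, rank=0, num_workers=8, worker_id=0)))  # -> 3 shards
```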
### Steps to reproduce the bug
N/A
### Expected behavior
We have an iterable dataset with N shards, to split across workers
* shuffle shards (same seed across all train processes)
* step shard iterator across distributed processes
* step shard iterator across dataloader worker processes
* shuffle samples in every worker via shuffle buffer (different seed in each worker, but ideally controllable (based on base seed + worker id + epoch).
* end up with (possibly uneven) number of shards per worker but each shard only ever accessed by 1 worker per pass (epoch)
### Environment info
N/A | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6594/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6594/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6592 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6592/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6592/comments | https://api.github.com/repos/huggingface/datasets/issues/6592/events | https://github.com/huggingface/datasets/issues/6592 | 2,082,410,257 | I_kwDODunzps58Hw8R | 6,592 | Logs are delayed when doing .map when `docker logs` | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! `tqdm` doesn't work well in non-interactive environments, so there isn't much we can do about this. It's best to [disable it](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/utilities#datasets.disable_progress_bars) in such environments and instead use logging to track progress."
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When I run my SD training in a Docker image and then listen to logs like `docker logs train -f`, the progress bar is delayed.
It's updating every few percent.
When you have a large dataset that has to be mapped (like 1+ million samples), it's crucial to see the updates in real time, not every couple of hours, to make sure nothing has frozen or broken.
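For reference, a sketch of the workaround suggested in the comments for non-interactive environments: disable the tqdm bars and log progress from the map function instead (the logging interval and column names are illustrative):
```
import logging
import datasets

logging.basicConfig(level=logging.INFO, force=True)
logger = logging.getLogger("map-progress")

datasets.disable_progress_bars()  # tqdm output buffers badly under `docker logs`

def preprocess(batch, indices):
    if indices and indices[0] % 10_000 == 0:
        logger.info("processed up to example %d", indices[0])
    return batch

ds = datasets.Dataset.from_dict({"x": list(range(100_000))})
ds = ds.map(preprocess, with_indices=True, batched=True)
```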
### Steps to reproduce the bug
1. Run any huge dataset processing job in a Docker container
2. Follow its logs with `docker logs <container_name>`
### Expected behavior
...
### Environment info
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6592/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6592/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6591/comments | https://api.github.com/repos/huggingface/datasets/issues/6591/events | https://github.com/huggingface/datasets/issues/6591 | 2,082,378,957 | I_kwDODunzps58HpTN | 6,591 | The datasets models housed in Dropbox can't support a lot of users downloading them | {
"avatar_url": "https://avatars.githubusercontent.com/u/4933774?v=4",
"events_url": "https://api.github.com/users/RDaneelOlivav/events{/privacy}",
"followers_url": "https://api.github.com/users/RDaneelOlivav/followers",
"following_url": "https://api.github.com/users/RDaneelOlivav/following{/other_user}",
"gists_url": "https://api.github.com/users/RDaneelOlivav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RDaneelOlivav",
"id": 4933774,
"login": "RDaneelOlivav",
"node_id": "MDQ6VXNlcjQ5MzM3NzQ=",
"organizations_url": "https://api.github.com/users/RDaneelOlivav/orgs",
"received_events_url": "https://api.github.com/users/RDaneelOlivav/received_events",
"repos_url": "https://api.github.com/users/RDaneelOlivav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RDaneelOlivav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RDaneelOlivav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RDaneelOlivav",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! Indeed, Dropbox is not a reliable host. I've just merged https://huggingface.co/datasets/PolyAI/minds14/discussions/24 to fix this by hosting the data files inside the repo."
] | 1970-01-01T00:00:00.000001 | 1,705 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I'm using the `datasets` library as follows:
```
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
And it seems that sometimes, when a lot of users access the same resources at the same time, the Dropbox host fails:
`raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://www.dropbox.com/s/e2us0hcs3ilr20e/MInDS-14.zip?dl=1 (error 429)`
My question is whether we can host these files somewhere else, whether the limit on simultaneous users accessing those resources can be raised, or whether there is any other solution.
Also, has anyone had this issue before?
Thanks
### Steps to reproduce the bug
1. Create a Python script like so:
```
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
2. Execute it as a number of users at the same time
### Expected behavior
I would expect that this shouldn't happen unless there is a huge number of users, which is not the case here.
### Environment info
This was done in an Ubuntu 22 environment. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6591/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6591/timeline | null | completed | null | null | false | 0 |
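As a client-side mitigation for the intermittent 429s reported above (not a fix for the hosting itself), a retrying `DownloadConfig` can be passed to `load_dataset`. This is only a sketch: whether a given `datasets` version retries on a 429 response depends on its internal retry logic, and the retry count here is arbitrary.

```python
from datasets import DownloadConfig, load_dataset

# ask datasets to retry transient download failures a few times before giving up
download_config = DownloadConfig(max_retries=5)

dataset = load_dataset(
    "PolyAI/minds14",
    name="en-US",
    split="train",
    download_config=download_config,
)
```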
https://api.github.com/repos/huggingface/datasets/issues/6590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6590/comments | https://api.github.com/repos/huggingface/datasets/issues/6590/events | https://github.com/huggingface/datasets/issues/6590 | 2,082,000,084 | I_kwDODunzps58GMzU | 6,590 | Feature request: Multi-GPU dataset mapping for SDXL training | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,705 | null | NONE | null | ### Feature request
We need to speed up SDXL dataset pre-processing. Please make it possible to use multiple GPUs for the [official SDXL trainer](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) :)
### Motivation
Pre-computing 3 million images takes around 2 days.
It would be nice to be able to do multi-GPU (or, even better, multi-GPU + multi-node) VAE and embedding precomputation...
### Your contribution
I'm not sure I can wrap my head around the multi-GPU mapping...
Plus it's too expensive for me to rent two A100s and spend a day just figuring this out, since I don't have a job right now.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6590/timeline | null | null | null | null | false | null |
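For reference, a rough sketch of how a multi-GPU `map` for the pre-compute step could look, using the `with_rank=True` pattern; the `precompute` body, the `imagefolder` data source, and the column names are placeholders rather than the actual SDXL trainer code.

```python
import torch
from multiprocess import set_start_method

from datasets import load_dataset

def precompute(batch, rank):
    # pin each worker process to one GPU based on its rank
    device = f"cuda:{(rank or 0) % torch.cuda.device_count()}"
    # placeholder: move the VAE / text encoders to `device` and encode the batch here
    batch["latents"] = [None] * len(batch["image"])
    return batch

if __name__ == "__main__":
    set_start_method("spawn")  # CUDA requires the spawn start method for worker processes
    dataset = load_dataset("imagefolder", data_dir="path/to/images", split="train")
    dataset = dataset.map(
        precompute,
        batched=True,
        with_rank=True,
        num_proc=torch.cuda.device_count(),  # one worker per GPU
    )
```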
https://api.github.com/repos/huggingface/datasets/issues/6589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6589/comments | https://api.github.com/repos/huggingface/datasets/issues/6589/events | https://github.com/huggingface/datasets/issues/6589 | 2,081,358,619 | I_kwDODunzps58DwMb | 6,589 | After `2.16.0` version, there are `PermissionError` when users use shared cache_dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/106717516?v=4",
"events_url": "https://api.github.com/users/minhopark-neubla/events{/privacy}",
"followers_url": "https://api.github.com/users/minhopark-neubla/followers",
"following_url": "https://api.github.com/users/minhopark-neubla/following{/other_user}",
"gists_url": "https://api.github.com/users/minhopark-neubla/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/minhopark-neubla",
"id": 106717516,
"login": "minhopark-neubla",
"node_id": "U_kgDOBlxhTA",
"organizations_url": "https://api.github.com/users/minhopark-neubla/orgs",
"received_events_url": "https://api.github.com/users/minhopark-neubla/received_events",
"repos_url": "https://api.github.com/users/minhopark-neubla/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/minhopark-neubla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minhopark-neubla/subscriptions",
"type": "User",
"url": "https://api.github.com/users/minhopark-neubla",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"We'll do a new release of `datasets` in the coming days with a fix !",
"@lhoestq Thank you very much!"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
- We use shared `cache_dir` using `HF_HOME="{shared_directory}"`
- Since version 2.16.0, `datasets` uses the `filelock` package for file locking (#6445)
- However, the `filelock` package creates `.lock` files with `644` permissions
- The dataset is then unavailable to every user except the one who created the lock file via `load_dataset`.
### Steps to reproduce the bug
1. `pip install datasets==2.16.0`
2. `export HF_HOME="{shared_directory}"`
3. download dataset with `load_dataset`
4. logout and login another user
5. `pip install datasets==2.16.0`
6. `export HF_HOME="{shared_directory}"`
7. download dataset with `load_dataset`
8. `PermissionError` occurs
### Expected behavior
- Users should be able to share `cache_dir` via the `HF_HOME` environment variable
### Environment info
- python == 3.9.10
- datasets == 2.16.0
- ubuntu 22.04
- shared_directory has ACL

- users are in the same group (developers)
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6589/timeline | null | completed | null | null | false | 0 |
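Until a release with the fix mentioned above, one possible stopgap, assuming all users really do share a group on the cache directory, is to relax the permissions of the existing lock files by hand. This is only a sketch of that idea, not an official fix.

```python
import os
import stat
from pathlib import Path

hf_home = Path(os.environ["HF_HOME"])

# give group members read/write access to every lock file created so far
for lock_file in hf_home.rglob("*.lock"):
    try:
        lock_file.chmod(
            stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH
        )
    except PermissionError:
        # lock files owned by another user can only be fixed by that user (or root)
        pass
```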
https://api.github.com/repos/huggingface/datasets/issues/6588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6588/comments | https://api.github.com/repos/huggingface/datasets/issues/6588/events | https://github.com/huggingface/datasets/issues/6588 | 2,081,284,253 | I_kwDODunzps58DeCd | 6,588 | fix os.listdir return name is empty string | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
`xlistdir` returns an empty string as the entry name.
(`xlistdir` is the overloaded `os.listdir` used for streaming.)
### Steps to reproduce the bug
```python
from datasets import DownloadConfig
from datasets.download.streaming_download_manager import (
    StreamingDownloadManager,
    xjoin,
    xlistdir,
)

# `options` holds the reporter's fsspec storage options for lakefs (not shown in the report)
config = DownloadConfig(storage_options=options)
manager = StreamingDownloadManager("ILSVRC2012", download_config=config)
input_path = "lakefs://datalab/main/imagenet/ILSVRC2012.zip"
download_files = manager.download_and_extract(input_path)
current_dir = xjoin(download_files, "ILSVRC2012/Images/ILSVRC2012_img_train")
folder_list = xlistdir(current_dir)
```
In the `xlistdir` function, when `obj["name"]` ends with "/", the last step returns an empty string "".
### Expected behavior
Obj ["name"] ends with "/"
return folder name
### Environment info
no | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6588/timeline | null | completed | null | null | false | 0 |
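For reference, a minimal sketch of the fix this report points at: strip the trailing slash before taking the base name so directory entries keep their folder name. The helper name is made up for illustration.

```python
import os

def _entry_name(obj_name: str) -> str:
    # directory entries returned by fsspec can end with "/", which makes
    # os.path.basename return an empty string; strip the slash first so the folder name survives
    return os.path.basename(obj_name.rstrip("/"))

assert _entry_name("ILSVRC2012/Images/ILSVRC2012_img_train/n01440764/") == "n01440764"
assert _entry_name("ILSVRC2012/Images/ILSVRC2012_img_train/file.txt") == "file.txt"
```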
https://api.github.com/repos/huggingface/datasets/issues/6585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6585/comments | https://api.github.com/repos/huggingface/datasets/issues/6585/events | https://github.com/huggingface/datasets/issues/6585 | 2,078,874,005 | I_kwDODunzps576RmV | 6,585 | losing DatasetInfo in Dataset.map when num_proc > 1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4",
"events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}",
"followers_url": "https://api.github.com/users/JochenSiegWork/followers",
"following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}",
"gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JochenSiegWork",
"id": 135010976,
"login": "JochenSiegWork",
"node_id": "U_kgDOCAwaoA",
"organizations_url": "https://api.github.com/users/JochenSiegWork/orgs",
"received_events_url": "https://api.github.com/users/JochenSiegWork/received_events",
"repos_url": "https://api.github.com/users/JochenSiegWork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JochenSiegWork",
"user_view_type": "public"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4",
"events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}",
"followers_url": "https://api.github.com/users/JochenSiegWork/followers",
"following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}",
"gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JochenSiegWork",
"id": 135010976,
"login": "JochenSiegWork",
"node_id": "U_kgDOCAwaoA",
"organizations_url": "https://api.github.com/users/JochenSiegWork/orgs",
"received_events_url": "https://api.github.com/users/JochenSiegWork/received_events",
"repos_url": "https://api.github.com/users/JochenSiegWork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JochenSiegWork",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4",
"events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}",
"followers_url": "https://api.github.com/users/JochenSiegWork/followers",
"following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}",
"gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JochenSiegWork",
"id": 135010976,
"login": "JochenSiegWork",
"node_id": "U_kgDOCAwaoA",
"organizations_url": "https://api.github.com/users/JochenSiegWork/orgs",
"received_events_url": "https://api.github.com/users/JochenSiegWork/received_events",
"repos_url": "https://api.github.com/users/JochenSiegWork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JochenSiegWork",
"user_view_type": "public"
}
] | null | [
"Hi ! This issue comes from the fact that `map()` with `num_proc>1` shards the dataset in multiple chunks to be processed (one per process) and merges them. The DatasetInfos of each chunk are then merged together, but for some fields like `dataset_name` it's not been implemented and default to None.\r\n\r\nThe DatasetInfo merge is defined here, in case you'd like to contribute an improvement: \r\n\r\nhttps://github.com/huggingface/datasets/blob/d2e0034122a788015c0834a72e6c6279e7ecbac5/src/datasets/info.py#L269-L270",
"#self-assign"
] | 1970-01-01T00:00:00.000001 | 1,705 | null | CONTRIBUTOR | null | ### Describe the bug
Hello and thanks for developing this package!
When I process a Dataset with the map function using multiple processes, some previously set attributes of the DatasetInfo get lost and are None in the resulting Dataset.
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetInfo
def run_map(num_proc):
dataset = Dataset.from_dict(
{"col1": [0, 1], "col2": [3, 4]},
info=DatasetInfo(
dataset_name="my_dataset",
),
)
ds = dataset.map(lambda x: x, num_proc=num_proc)
print(ds.info.dataset_name)
run_map(1)
run_map(2)
```
This puts out:
```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
None
```
### Expected behavior
I expect the DatasetInfo to be kept as it was and there should be no difference in the output of running map with num_proc=1 and num_proc=2.
Expected output:
```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
my_dataset
```
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
- Python version: 3.8.18
- `huggingface_hub` version: 0.20.2
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6585/timeline | null | null | null | null | false | null |
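A small user-side workaround while the merge behaviour discussed above is not implemented: copy the fields you care about back from the original `DatasetInfo` after the multiprocess `map`. Mutating `ds.info` in place is a pragmatic hack, not a documented guarantee.

```python
from datasets import Dataset, DatasetInfo

dataset = Dataset.from_dict(
    {"col1": [0, 1], "col2": [3, 4]},
    info=DatasetInfo(dataset_name="my_dataset"),
)

ds = dataset.map(lambda x: x, num_proc=2)

# the merged info of the processed shards loses dataset_name, so restore it
if ds.info.dataset_name is None:
    ds.info.dataset_name = dataset.info.dataset_name

print(ds.info.dataset_name)  # my_dataset
```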
https://api.github.com/repos/huggingface/datasets/issues/6584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6584/comments | https://api.github.com/repos/huggingface/datasets/issues/6584/events | https://github.com/huggingface/datasets/issues/6584 | 2,078,454,878 | I_kwDODunzps574rRe | 6,584 | np.fromfile not supported | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@lhoestq\r\nCan you provide me with some ideas?",
"Hi ! What's the error ?",
"@lhoestq \r\n```\r\nTraceback (most recent call last):\r\n File \"/home/dongzf/miniconda3/envs/dataset_ai/lib/python3.11/runpy.py\", line 198, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/dongzf/miniconda3/envs/dataset_ai/lib/python3.11/runpy.py\", line 88, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py\", line 39, in <module>\r\n cli.main()\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 430, in main\r\n run()\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 284, in run_file\r\n runpy.run_path(target, run_name=\"__main__\")\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 321, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 135, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/home/dongzf/.vscode/extensions/ms-python.python-2023.22.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 124, in _run_code\r\n exec(code, run_globals)\r\n File \"/mnt/sda/code/dataset_ai/dataset_ai/example/test.py\", line 83, in <module>\r\n data = xnumpy_fromfile(current_dir, download_config=config,dtype=numpy.float32,)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/mnt/sda/code/dataset_ai/dataset_ai/src/datasets/download/streaming_download_manager.py\", line 765, in xnumpy_fromfile\r\n return np.fromfile(xopen(filepath_or_buffer, \"rb\", download_config=download_config).read(), *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nValueError: embedded null byte\r\n```",
" not add read() \r\nthe error is \r\n\r\nreturn np.fromfile(xopen(filepath_or_buffer, \"rb\", download_config=download_config), *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nio.UnsupportedOperation: fileno",
"xopen return obj do not have fileno function\r\nI don't know why?",
"I used this method to read point cloud data in the script\r\n\r\n\r\n```python\r\nwith open(velodyne_filepath,\"rb\") as obj:\r\n velodyne_data = numpy.frombuffer(obj.read(), dtype=numpy.float32).reshape([-1, 4])\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,705 | null | CONTRIBUTOR | null | How to do np.fromfile to use it like np.load
```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
import numpy as np
if hasattr(filepath_or_buffer, "read"):
return np.fromfile(filepath_or_buffer, *args, **kwargs)
else:
filepath_or_buffer = str(filepath_or_buffer)
return np.fromfile(xopen(filepath_or_buffer, "rb", download_config=download_config).read(), *args, **kwargs)
```
This does not work.
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6584/timeline | null | null | null | null | false | null |
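Following the last comment above, a hedged sketch of how such a helper could read streamed files: `np.fromfile` needs a real OS-level file descriptor, so reading the raw bytes and parsing them with `np.frombuffer` is one way around it. The function name and placement are illustrative, not an existing `datasets` API.

```python
from typing import Optional

import numpy as np

from datasets.download.download_config import DownloadConfig
from datasets.download.streaming_download_manager import xopen

def xnumpy_frombuffer(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
    if hasattr(filepath_or_buffer, "read"):
        data = filepath_or_buffer.read()
    else:
        with xopen(str(filepath_or_buffer), "rb", download_config=download_config) as f:
            data = f.read()
    # np.fromfile requires fileno(), which streamed/fsspec files do not provide,
    # so parse the raw bytes instead
    return np.frombuffer(data, *args, **kwargs)
```

Used like the point-cloud snippet in the comments above, this would be roughly `xnumpy_frombuffer(path, dtype=numpy.float32, download_config=config).reshape(-1, 4)`.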
https://api.github.com/repos/huggingface/datasets/issues/6580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6580/comments | https://api.github.com/repos/huggingface/datasets/issues/6580/events | https://github.com/huggingface/datasets/issues/6580 | 2,075,645,042 | I_kwDODunzps57t9Ry | 6,580 | dataset cache only stores one config of the dataset in parquet dir, and uses that for all other configs resulting in showing same data in all configs. | {
"avatar_url": "https://avatars.githubusercontent.com/u/78641018?v=4",
"events_url": "https://api.github.com/users/kartikgupta321/events{/privacy}",
"followers_url": "https://api.github.com/users/kartikgupta321/followers",
"following_url": "https://api.github.com/users/kartikgupta321/following{/other_user}",
"gists_url": "https://api.github.com/users/kartikgupta321/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kartikgupta321",
"id": 78641018,
"login": "kartikgupta321",
"node_id": "MDQ6VXNlcjc4NjQxMDE4",
"organizations_url": "https://api.github.com/users/kartikgupta321/orgs",
"received_events_url": "https://api.github.com/users/kartikgupta321/received_events",
"repos_url": "https://api.github.com/users/kartikgupta321/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kartikgupta321/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kartikgupta321/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kartikgupta321",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,705 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
ds = load_dataset("ai2_arc", "ARC-Easy"), i have tried to force redownload, delete cache and changing the cache dir.
### Steps to reproduce the bug
from datasets import load_dataset

dataset = []
dataset_name = "ai2_arc"
possible_configs = [
    'ARC-Challenge',
    'ARC-Easy'
]
for config in possible_configs:
    dataset_slice = load_dataset(dataset_name, config, ignore_verifications=True, cache_dir='ai2_arc_files')
    dataset.append(dataset_slice)
### Expected behavior
All configs should be saved in the cache under their respective names.
### Environment info
ai2_arc | {
"avatar_url": "https://avatars.githubusercontent.com/u/78641018?v=4",
"events_url": "https://api.github.com/users/kartikgupta321/events{/privacy}",
"followers_url": "https://api.github.com/users/kartikgupta321/followers",
"following_url": "https://api.github.com/users/kartikgupta321/following{/other_user}",
"gists_url": "https://api.github.com/users/kartikgupta321/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kartikgupta321",
"id": 78641018,
"login": "kartikgupta321",
"node_id": "MDQ6VXNlcjc4NjQxMDE4",
"organizations_url": "https://api.github.com/users/kartikgupta321/orgs",
"received_events_url": "https://api.github.com/users/kartikgupta321/received_events",
"repos_url": "https://api.github.com/users/kartikgupta321/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kartikgupta321/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kartikgupta321/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kartikgupta321",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6580/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6579/comments | https://api.github.com/repos/huggingface/datasets/issues/6579/events | https://github.com/huggingface/datasets/issues/6579 | 2,075,407,473 | I_kwDODunzps57tDRx | 6,579 | Unable to load `eli5` dataset with streaming | {
"avatar_url": "https://avatars.githubusercontent.com/u/89672451?v=4",
"events_url": "https://api.github.com/users/haok1402/events{/privacy}",
"followers_url": "https://api.github.com/users/haok1402/followers",
"following_url": "https://api.github.com/users/haok1402/following{/other_user}",
"gists_url": "https://api.github.com/users/haok1402/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/haok1402",
"id": 89672451,
"login": "haok1402",
"node_id": "MDQ6VXNlcjg5NjcyNDUx",
"organizations_url": "https://api.github.com/users/haok1402/orgs",
"received_events_url": "https://api.github.com/users/haok1402/received_events",
"repos_url": "https://api.github.com/users/haok1402/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/haok1402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haok1402/subscriptions",
"type": "User",
"url": "https://api.github.com/users/haok1402",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi @haok1402, I have created an issue in the Discussion tab of the corresponding dataset: https://huggingface.co/datasets/eli5/discussions/7\r\nLet's continue the discussion there!"
] | 1970-01-01T00:00:00.000001 | 1,704 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Unable to load `eli5` dataset with streaming.
### Steps to reproduce the bug
This fails with FileNotFoundError: https://files.pushshift.io/reddit/submissions
```
from datasets import load_dataset
load_dataset("eli5", streaming=True)
```
This works correctly.
```
from datasets import load_dataset
load_dataset("eli5")
```
### Expected behavior
- Loading the `eli5` dataset should not raise an error in streaming mode.
- Or, at the very least, a warning should be shown that streaming mode is not supported for the `eli5` dataset.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6579/timeline | null | not_planned | null | null | false | 0 |
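A small sketch of a defensive pattern on the user side, in the spirit of the expected behaviour above: try streaming and fall back to a regular load when the streamed source is unreachable. Whether the non-streaming path still works depends on the dataset's availability.

```python
from datasets import load_dataset

try:
    ds = load_dataset("eli5", streaming=True)
except FileNotFoundError as err:
    # the script behind eli5 pulls from files.pushshift.io, which cannot be streamed,
    # so fall back to a regular (downloaded and cached) load
    print(f"streaming not supported here ({err}); falling back to a normal load")
    ds = load_dataset("eli5")
```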
https://api.github.com/repos/huggingface/datasets/issues/6577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6577/comments | https://api.github.com/repos/huggingface/datasets/issues/6577/events | https://github.com/huggingface/datasets/issues/6577 | 2,074,790,848 | I_kwDODunzps57qsvA | 6,577 | 502 Server Errors when streaming large dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi",
"user_view_type": "public"
} | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | [] | null | [
"cc @mariosasko @lhoestq ",
"Hi! We should be able to avoid this error by retrying to read the data when it happens. I'll open a PR in `huggingface_hub` to address this.",
"Thanks for the fix @mariosasko! Just wondering whether \"500 error\" should also be excluded? I got these errors overnight:\r\n\r\n```\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/da\r\ntasets/sanchit-gandhi/concatenated-train-set-label-length-256/resolve/91e6a0cd0356605b021384ded813cfcf356a221c/train/tra\r\nin-02618-of-04012.parquet (Request ID: Root=1-65b18b81-627f2c2943bbb8ab68d19ee2;129537bd-1934-4257-a4d8-1cb774f8e1f8) \r\n \r\nInternal Error - We're working hard to fix this as soon as possible! \r\n```",
"Gently pining @mariosasko and @Wauplin - when trying to stream this large dataset from the HF Hub, I'm running into `500 Internal Server Errors` as described above. I'd love to be able to use the Hub exclusively to stream data when training, but this error pops up a few times a week, terminating training runs and causing me to have to rewind to the last saved checkpoint. Do we reckon there's a way we can protect Datasets' streaming against these errors? The same reproducer as the [original comment](https://github.com/huggingface/datasets/issues/6577#issue-2074790848) can be used, but it's somewhat random whether we hit a 500 error. Leaving the full traceback below: \r\n\r\n```\r\nTraceback (most recent call last): \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py\", line 308, in _worker_loo\r\np \r\n data = fetcher.fetch(index) \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py\", line 32, in fetch \r\n data.append(next(self.dataset_iter)) \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 1367, in __iter__ \r\n yield from self._iter_pytorch() \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 1302, in _iter_pytorch \r\n for key, example in ex_iterable: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 987, in __iter__ \r\n for x in self.ex_iterable: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 867, in __iter__ \r\n yield from self._iter() \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 904, in _iter \r\n for key, example in iterator: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 679, in __iter__ \r\n yield from self._iter() \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 741, in _iter [235/1892]\r\n for key, example in iterator: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 1119, in __iter__ \r\n for key, example in self.ex_iterable: \r\n File \"/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py\", line 282, in __iter__ \r\n for key, pa_table in self.generate_tables_fn(**self.kwargs): \r\n File \"/home/sanchitgandhi/datasets/src/datasets/packaged_modules/parquet/parquet.py\", line 87, in _generate_tables \r\n for batch_idx, record_batch in enumerate( \r\n File \"pyarrow/_parquet.pyx\", line 1587, in iter_batches \r\n File \"pyarrow/types.pxi\", line 88, in pyarrow.lib._datatype_to_pep3118 \r\n File \"/home/sanchitgandhi/datasets/src/datasets/download/streaming_download_manager.py\", line 342, in read_with_retrie\r\ns \r\n out = read(*args, **kwargs) \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/fsspec/spec.py\", line 1856, in read \r\n out = self.cache._fetch(self.loc, self.loc + length) \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/fsspec/caching.py\", line 189, in _fetch \r\n self.cache = self.fetcher(start, end) # new block replaces old \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py\", line 629, in _fetch_rang\r\ne \r\n hf_raise_for_status(r) \r\n File \"/home/sanchitgandhi/hf/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py\", line 362, in hf_raise_for\r\n_status \r\n raise HfHubHTTPError(str(e), response=response) from e \r\nhuggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for 
url: https://huggingface.co/da\r\ntasets/sanchit-gandhi/concatenated-train-set-label-length-256-conditioned/resolve/3c3c0cce51df9f9d2e75968bb2a1851894f504\r\n0d/train/train-03515-of-04010.parquet (Request ID: Root=1-65c7c4c4-153fe71401558c8c2d272c8a;fec3ec68-4a0a-4bfd-95ba-b0a0\r\n5684d612) \r\n \r\nInternal Error - We're working hard to fix this as soon as possible! ",
"@sanchit-gandhi thanks for the feedback. I've opened https://github.com/huggingface/huggingface_hub/pull/2026 to make the download process more robust. I believe that you've witness this problem on Saturday due to the Hub outage. Hope the PR will make your life easier though :)",
"Awesome, thanks @Wauplin! Makes sense re the Hub outage"
] | 1970-01-01T00:00:00.000001 | 1,707 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hub (~3TB), I often encounter 502 Server Errors seemingly at random during streaming:
```
huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet
```
This is despite the parquet file definitely existing on the Hub: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/blob/main/train/train-00228-of-07135.parquet
And having the correct commit id: [7d2acc5c59de848e456e951a76e805304d6fb350](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/commits/main/train)
I’m wondering whether this is coming from datasets? Or from the Hub side?
### Steps to reproduce the bug
Reproducer:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from tqdm import tqdm
NUM_EPOCHS = 20
dataset = load_dataset("sanchit-gandhi/concatenated-train-set", "train", streaming=True)
dataset = dataset.with_format("torch")
dataloader = DataLoader(dataset["train"], batch_size=256, drop_last=True, pin_memory=True, num_workers=16)
for epoch in tqdm(range(NUM_EPOCHS), desc="Epoch", position=0):
for batch in tqdm(dataloader, desc="Batch", position=1):
continue
```
Running the above script tends to fail within about 2 hours with a traceback like the following:
<details>
<summary> Traceback: </summary>
```python
1029 for batch in train_loader:
1030 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
1031 data = self._next_data()
1032 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data
1033 return self._process_data(data)
1034 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
1035 data.reraise()
1036 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/_utils.py", line 694, in reraise
1037 raise exception
1038 huggingface_hub.utils._errors.HfHubHTTPError: Caught HfHubHTTPError in DataLoader worker process 10.
1039 Original Traceback (most recent call last):
1040 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 286, in hf_raise_for_status
1041 response.raise_for_status()
1042 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
1043 raise HTTPError(http_error_msg, response=self)
1044 requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet
1045 The above exception was the direct cause of the following exception:
1046 Traceback (most recent call last):
1047 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
1048 data = fetcher.fetch(index)
1049 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
1050 data.append(next(self.dataset_iter))
1051 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1363, in __iter__
1052 yield from self._iter_pytorch()
1053 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1298, in _iter_pytorch
1054 for key, example in ex_iterable:
1055 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 983, in __iter__
1056 for x in self.ex_iterable:
1057 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 863, in __iter__
1058 yield from self._iter()
1059 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 900, in _iter
1060 for key, example in iterator:
1061 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 679, in __iter__
1062 yield from self._iter()
1063 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 741, in _iter
1064 for key, example in iterator:
1065 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 863, in __iter__
1066 yield from self._iter()
1067 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 900, in _iter
1068 for key, example in iterator:
1069 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1115, in __iter__
1070 for key, example in self.ex_iterable:
1071 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 679, in __iter__
1072 yield from self._iter()
1073 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 741, in _iter
1074 for key, example in iterator:
1075 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1115, in __iter__
1076 for key, example in self.ex_iterable:
1077 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 282, in __iter__
1078 for key, pa_table in self.generate_tables_fn(**self.kwargs):
1079 File "/home/sanchitgandhi/datasets/src/datasets/packaged_modules/parquet/parquet.py", line 87, in _generate_tables
1080 for batch_idx, record_batch in enumerate(
1081 File "pyarrow/_parquet.pyx", line 1367, in iter_batches
1082 File "pyarrow/types.pxi", line 88, in pyarrow.lib._datatype_to_pep3118
1083 File "/home/sanchitgandhi/datasets/src/datasets/download/streaming_download_manager.py", line 341, in read_with_retries
1084 out = read(*args, **kwargs)
1085 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/fsspec/spec.py", line 1856, in read
1086 out = self.cache._fetch(self.loc, self.loc + length)
1087 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/fsspec/caching.py", line 189, in _fetch
1088 self.cache = self.fetcher(start, end) # new block replaces old
1089 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range
1090 hf_raise_for_status(r)
1091 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status
1092 raise HfHubHTTPError(str(e), response=response) from e
1093 huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet
```
</details>
### Expected behavior
Should be able to stream the dataset without any 502 error.
### Environment info
- `datasets` version: 2.16.2.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- `huggingface_hub` version: 0.20.1
- PyArrow version: 14.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6577/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6577/timeline | null | completed | null | null | false | 0 |
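Until the `huggingface_hub` change referenced above, one blunt user-side mitigation is to catch transient Hub errors around the epoch loop and retry with backoff. This is only a sketch: with a streaming dataset, restarting iteration restarts the epoch from the beginning, so it mainly keeps long runs from dying outright rather than resuming mid-epoch.

```python
import time

from huggingface_hub.utils import HfHubHTTPError

def iterate_with_retries(dataloader, max_retries=5):
    """Yield batches, retrying the whole iteration when the Hub returns a transient error."""
    for attempt in range(max_retries):
        try:
            for batch in dataloader:
                yield batch
            return
        except HfHubHTTPError as err:
            if attempt == max_retries - 1:
                raise
            wait = 2 ** attempt
            print(f"transient Hub error ({err}); retrying in {wait}s")
            time.sleep(wait)

# usage: replace `for batch in dataloader:` with `for batch in iterate_with_retries(dataloader):`
```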
https://api.github.com/repos/huggingface/datasets/issues/6576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6576/comments | https://api.github.com/repos/huggingface/datasets/issues/6576/events | https://github.com/huggingface/datasets/issues/6576 | 2,073,710,124 | I_kwDODunzps57mk4s | 6,576 | document page 404 not found after redirection | {
"avatar_url": "https://avatars.githubusercontent.com/u/39179888?v=4",
"events_url": "https://api.github.com/users/annahung31/events{/privacy}",
"followers_url": "https://api.github.com/users/annahung31/followers",
"following_url": "https://api.github.com/users/annahung31/following{/other_user}",
"gists_url": "https://api.github.com/users/annahung31/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/annahung31",
"id": 39179888,
"login": "annahung31",
"node_id": "MDQ6VXNlcjM5MTc5ODg4",
"organizations_url": "https://api.github.com/users/annahung31/orgs",
"received_events_url": "https://api.github.com/users/annahung31/received_events",
"repos_url": "https://api.github.com/users/annahung31/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/annahung31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/annahung31/subscriptions",
"type": "User",
"url": "https://api.github.com/users/annahung31",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix."
] | 1970-01-01T00:00:00.000001 | 1,705 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The redirected page returns a 404 Not Found error.
### Steps to reproduce the bug
1. In this tutorial: https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt
original md: https://github.com/huggingface/course/blob/2c733c2246b8b7e0e6f19a9e5d15bb12df43b2a3/chapters/en/chapter5/4.mdx#L49
```
By default, 🤗 Datasets will decompress the files needed to load a dataset. If you want to preserve hard drive space, you can pass `DownloadConfig(delete_extracted=True)` to the `download_config` argument of `load_dataset()`. See the [documentation](https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig) for more details.
```
The documentation points to `https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig`
it shows `The documentation page PACKAGE_REFERENCE/BUILDER_CLASSES.HTML doesn’t exist in v2.16.1, but exists on the main version. Click [here](https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes.html) to redirect to the main version of the documentation.`
But the redirected website `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes.html` is 404 not found.
### Expected behavior
I guess the redirected website should be
`https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes` (without `.html`)
or `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes#datasets.DownloadConfig`.
### Environment info
Datasets main | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6576/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6576/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6571/comments | https://api.github.com/repos/huggingface/datasets/issues/6571/events | https://github.com/huggingface/datasets/issues/6571 | 2,072,111,000 | I_kwDODunzps57geeY | 6,571 | Make DatasetDict.column_names return a list instead of dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,704 | null | MEMBER | null | Currently, `DatasetDict.column_names` returns a dict, with each split name as keys and the corresponding list of column names as values.
However, by construction, all splits have the same column names.
I think it makes more sense to return a single list with the column names, which is the same for all the split keys. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6571/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6571/timeline | null | null | null | null | false | null |
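A rough sketch of the proposed behaviour, written as a standalone helper rather than a patch to `DatasetDict` itself; the empty-dict handling is a guess, and this is the requested change, not the current API.

```python
from typing import List

from datasets import DatasetDict

def unified_column_names(dataset_dict: DatasetDict) -> List[str]:
    """Return a single list of column names, relying on all splits sharing the same columns."""
    if not dataset_dict:
        return []
    first_split = next(iter(dataset_dict.values()))
    return first_split.column_names
```

For example, `unified_column_names(load_dataset("imdb"))` would return a single `['text', 'label']` instead of a dict keyed by split name.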
https://api.github.com/repos/huggingface/datasets/issues/6570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6570/comments | https://api.github.com/repos/huggingface/datasets/issues/6570/events | https://github.com/huggingface/datasets/issues/6570 | 2,071,805,265 | I_kwDODunzps57fT1R | 6,570 | No online docs for 2.16 release | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [
"Though the `build / build_main_documentation` CI job ran for 2.16.0: https://github.com/huggingface/datasets/actions/runs/7300836845/job/19896275099 🤔 ",
"Yes, I saw it. Maybe @mishig25 can give us some hint...",
"fixed https://huggingface.co/docs/datasets/v2.16.0/en/index",
"Still missing 2.16.1.",
"> Still missing 2.16.1.\r\n\r\nre-running the doc-buld job for the missing ones should fix\r\n\r\n",
"Re-running the job for the 2.16.1 release: https://github.com/huggingface/datasets/actions/runs/7365231552/job/20310278583",
"Fixed for 2.16.1: https://huggingface.co/docs/datasets/v2.16.1/en/index"
] | 1970-01-01T00:00:00.000001 | 1,704 | 1970-01-01T00:00:00.000001 | MEMBER | null | We do not have the online docs for the latest minor release 2.16 (2.16.0 nor 2.16.1).
In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index

| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6570/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6569/comments | https://api.github.com/repos/huggingface/datasets/issues/6569/events | https://github.com/huggingface/datasets/issues/6569 | 2,070,251,122 | I_kwDODunzps57ZYZy | 6,569 | WebDataset ignores features defined in YAML or passed to load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,704 | 1970-01-01T00:00:00.000001 | MEMBER | null | We should not override the features if they are already defined (e.g., in the YAML metadata or passed to `load_dataset`).
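For illustration only, the principle boils down to something like this toy helper (a sketch with assumed names, not the actual builder code); the relevant builder lines are linked below:
```python
from datasets import Features, Value

def resolve_features(user_features, inferred_features):
    # Keep the features provided via the YAML metadata or load_dataset(..., features=...);
    # only fall back to the inferred ones when nothing was provided.
    return user_features if user_features is not None else inferred_features

inferred = Features({"txt": Value("string")})
provided = Features({"txt": Value("large_string")})
assert resolve_features(provided, inferred) == provided
assert resolve_features(None, inferred) == inferred
```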
https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L85 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6569/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6568/comments | https://api.github.com/repos/huggingface/datasets/issues/6568/events | https://github.com/huggingface/datasets/issues/6568 | 2,069,922,151 | I_kwDODunzps57YIFn | 6,568 | keep_in_memory=True does not seem to work | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Seems like I just used the old code which did not have `keep_in_memory=True` argument, sorry.\r\n\r\nAlthough i encountered a different problem – at 97% my python process just hung for around 11 minutes with no logs (when running dataset.map without `keep_in_memory=True` over around 3 million of dataset samples)...",
"Can you open a new issue and provide a bit more details ? What kind of map operations did you run ?",
"Hey. I will try to find some free time to describe it.\r\n\r\n(can't do it now, cause i need to reproduce it myself to be sure about everything, which requires spinning a new Azuree VM, copying a huge dataset to drive from network disk for a long time etc...)",
"@lhoestq loading dataset like this does not spawn 50 python processes:\r\n\r\n```\r\ndatasets.load_dataset(\"/preprocessed_2256k/train\", num_proc=50)\r\n```\r\n\r\nI have 64 vCPU so i hoped it could speed up the dataset loading...\r\n\r\nMy dataset onlly has images and metadata.csv with text column alongside image file path column",
"now noticed\r\n```\r\n'Setting num_proc from 50 back to 1 for the train split to disable multiprocessing as it only contains one shard\r\n```\r\n\r\nAny way to work around this?",
"@lhoestq thanks, [this helped](https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/arrow_dataset.py#L1053)\r\n\r\n"
] | 1970-01-01T00:00:00.000001 | 1,705 | null | NONE | null | UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :( | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6568/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6568/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6567/comments | https://api.github.com/repos/huggingface/datasets/issues/6567/events | https://github.com/huggingface/datasets/issues/6567 | 2,069,808,842 | I_kwDODunzps57XsbK | 6,567 | AttributeError: 'str' object has no attribute 'to' | {
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andysingal",
"id": 20493493,
"login": "andysingal",
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"repos_url": "https://api.github.com/users/andysingal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andysingal",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I think you are reporting an issue with the `transformers` library. Note this is the repository of the `datasets` library. I recommend that you open an issue in their repository: https://github.com/huggingface/transformers/issues\r\n\r\nEDIT: I have not the rights to transfer the issue\r\n~~I am transferring your issue to their repository.~~",
"Thanks, I hope someone from transformers library addresses this issue.\r\n\r\nOn Mon, Jan 8, 2024 at 15:29 Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> I think you are reporting an issue with the transformers library. Note\r\n> this is the repository of the datasets library. I am transferring your\r\n> issue to their repository.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6567#issuecomment-1880688586>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNOYMD6WJMXFKPMH6DLYNO7PJAVCNFSM6AAAAABBQ63HWOVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQOBQGY4DQNJYGY>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"@andysingal, I recommend that you open an issue in their repository: https://github.com/huggingface/transformers/issues\r\nI don't have the rights to transfer this issue to their repo."
] | 1970-01-01T00:00:00.000001 | 1,704 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
```
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
8 report_to="wandb")
9
---> 10 trainer = Trainer(
11 model=model,
12 args=training_args,
1 frames
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _move_model_to_device(self, model, device)
688
689 def _move_model_to_device(self, model, device):
--> 690 model = model.to(device)
691 # Moving a model to an XLA device disconnects the tied weights, so we have to retie them.
692 if self.args.parallel_mode == ParallelMode.TPU and hasattr(model, "tie_weights"):
AttributeError: 'str' object has no attribute 'to'
```
### Steps to reproduce the bug
Here is the notebook:
```
https://colab.research.google.com/drive/10JDBNsLlYrQdnI2FWfDK3F5M8wvVUDXG?usp=sharing
```
### Expected behavior
The training should run without errors.
### Environment info
Colab Notebook, T4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6567/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6566/comments | https://api.github.com/repos/huggingface/datasets/issues/6566/events | https://github.com/huggingface/datasets/issues/6566 | 2,069,495,429 | I_kwDODunzps57Wf6F | 6,566 | I train controlnet_sdxl in bf16 datatype, got unsupported ERROR in datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/25008090?v=4",
"events_url": "https://api.github.com/users/HelloWorldBeginner/events{/privacy}",
"followers_url": "https://api.github.com/users/HelloWorldBeginner/followers",
"following_url": "https://api.github.com/users/HelloWorldBeginner/following{/other_user}",
"gists_url": "https://api.github.com/users/HelloWorldBeginner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HelloWorldBeginner",
"id": 25008090,
"login": "HelloWorldBeginner",
"node_id": "MDQ6VXNlcjI1MDA4MDkw",
"organizations_url": "https://api.github.com/users/HelloWorldBeginner/orgs",
"received_events_url": "https://api.github.com/users/HelloWorldBeginner/received_events",
"repos_url": "https://api.github.com/users/HelloWorldBeginner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HelloWorldBeginner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HelloWorldBeginner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HelloWorldBeginner",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"I also see the same error and get passed it by casting that line to float. \r\n\r\nso `for x in obj.detach().cpu().numpy()` becomes `for x in obj.detach().to(torch.float).cpu().numpy()`\r\n\r\nI got the idea from [this ](https://github.com/kohya-ss/sd-webui-additional-networks/pull/128/files) PR where someone was facing a similar issue (in a different repository). I guess numpy doesn't support bfloat16.\r\n\r\n"
] | 1970-01-01T00:00:00.000001 | 1,717 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
```
Traceback (most recent call last):
File "train_controlnet_sdxl.py", line 1252, in <module>
main(args)
File "train_controlnet_sdxl.py", line 1013, in main
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
writer.write_batch(batch)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 557, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 248, in pyarrow.lib.array
File "pyarrow/array.pxi", line 113, in pyarrow.lib._handle_arrow_array_protocol
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 191, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 447, in cast_to_python_objects
return _cast_to_python_objects(
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 324, in _cast_to_python_objects
for x in obj.detach().cpu().numpy()
TypeError: Got unsupported ScalarType BFloat16
```
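For context, the error comes from NumPy having no bfloat16 dtype, so calling `.numpy()` on a bf16 tensor fails; a minimal sketch of the same failure outside the training script:
```python
import torch

t = torch.ones(2, dtype=torch.bfloat16)
try:
    t.numpy()
except TypeError as e:
    print(e)  # Got unsupported ScalarType BFloat16
print(t.float().numpy())  # upcasting to float32 first works
```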
### Steps to reproduce the bug
Here is my training script. I use the BF16 data type, and I use diffusers to train my model.
```
export MODEL_DIR="/home/mhh/sd_models/stable-diffusion-xl-base-1.0"
export OUTPUT_DIR="./control_net"
export VAE_NAME="/home/mhh/sd_models/sdxl-vae-fp16-fix"
accelerate launch train_controlnet_sdxl.py \
--pretrained_model_name_or_path=$MODEL_DIR \
--output_dir=$OUTPUT_DIR \
--pretrained_vae_model_name_or_path=$VAE_NAME \
--dataset_name=/home/mhh/sd_datasets/fusing/fill50k \
--mixed_precision="bf16" \
--resolution=1024 \
--learning_rate=1e-5 \
--max_train_steps=200 \
--validation_image "/home/mhh/sd_datasets/controlnet_image/conditioning_image_1.png" "/home/mhh/sd_datasets/controlnet_image/conditioning_image_2.png" \
--validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
--validation_steps=50 \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--report_to="wandb" \
--seed=42 \
```
### Expected behavior
When I changed the data type to fp16, it worked.
### Environment info
datasets 2.16.1
numpy 1.24.4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6566/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6566/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6565/comments | https://api.github.com/repos/huggingface/datasets/issues/6565/events | https://github.com/huggingface/datasets/issues/6565 | 2,068,939,670 | I_kwDODunzps57UYOW | 6,565 | `drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader | {
"avatar_url": "https://avatars.githubusercontent.com/u/12119806?v=4",
"events_url": "https://api.github.com/users/naba89/events{/privacy}",
"followers_url": "https://api.github.com/users/naba89/followers",
"following_url": "https://api.github.com/users/naba89/following{/other_user}",
"gists_url": "https://api.github.com/users/naba89/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/naba89",
"id": 12119806,
"login": "naba89",
"node_id": "MDQ6VXNlcjEyMTE5ODA2",
"organizations_url": "https://api.github.com/users/naba89/orgs",
"received_events_url": "https://api.github.com/users/naba89/received_events",
"repos_url": "https://api.github.com/users/naba89/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/naba89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naba89/subscriptions",
"type": "User",
"url": "https://api.github.com/users/naba89",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"My current workaround this issue is to return `None` in the second element and then filter out samples which have `None` in them.\r\n\r\n```python\r\ndef merge_samples(batch):\r\n if len(batch['a']) == 1:\r\n batch['c'] = [batch['a'][0]]\r\n batch['d'] = [None]\r\n else:\r\n batch['c'] = [batch['a'][0]]\r\n batch['d'] = [batch['a'][1]]\r\n return batch\r\n \r\ndef filter_fn(x):\r\n return x['d'] is not None\r\n\r\n# other code...\r\nmapped = mapped.filter(filter_fn)\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,704 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Scenario:
- Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't have two samples.
What works:
- Using DataLoader with `num_workers=0`
What does not work:
- Using DataLoader with `num_workers=1`, errors in the last batch.
Basically, `drop_last_batch=True` is ignored when using multiple dataloading workers.
Please take a look at the minimal repro script below.
### Steps to reproduce the bug
```python
from datasets import Dataset, interleave_datasets
from torch.utils.data import DataLoader
def merge_samples(batch):
assert len(batch['a']) == 2, "Batch size must be 2"
batch['c'] = [batch['a'][0]]
batch['d'] = [batch['a'][1]]
return batch
def gen1():
for ii in range(1, 8385):
yield {"a": ii}
def gen2():
for ii in range(1, 5302):
yield {"a": ii}
if __name__ == '__main__':
dataset1 = Dataset.from_generator(gen1).to_iterable_dataset(num_shards=1024)
dataset2 = Dataset.from_generator(gen2).to_iterable_dataset(num_shards=1024)
interleaved = interleave_datasets([dataset1, dataset2], stopping_strategy="all_exhausted")
mapped = interleaved.map(merge_samples, batched=True, batch_size=2, remove_columns=interleaved.column_names,
drop_last_batch=True)
# Works
loader = DataLoader(mapped, batch_size=32, num_workers=0)
i = 0
for b in loader:
print(i, b['c'].shape, b['d'].shape)
i += 1
print("DataLoader with num_workers=0 works")
# Doesn't work
loader = DataLoader(mapped, batch_size=32, num_workers=1)
i = 0
for b in loader:
print(i, b['c'].shape, b['d'].shape)
i += 1
```
### Expected behavior
`drop_last_batch=True` should have the same behaviour for `num_workers=0` and `num_workers>=1`
### Environment info
- `datasets` version: 2.16.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.2
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0
I have also tested on Linux and got the same behavior. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6565/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6565/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6564/comments | https://api.github.com/repos/huggingface/datasets/issues/6564/events | https://github.com/huggingface/datasets/issues/6564 | 2,068,893,194 | I_kwDODunzps57UM4K | 6,564 | `Dataset.filter` missing `with_rank` parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix",
"@mariosasko thank you very much :)"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The following issue should be reopened: https://github.com/huggingface/datasets/issues/6435
When I try to pass `with_rank` to `Dataset.filter()`, I get this:
`Dataset.filter() got an unexpected keyword argument 'with_rank'`
### Steps to reproduce the bug
Run notebook:
https://colab.research.google.com/drive/1WUNKph8BdP0on5ve3gQnh_PE0cFLQqTn?usp=sharing
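For reference, a minimal local snippet that hits the same error (toy data, for illustration only):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [0, 1, 2]})
ds = ds.filter(lambda example, rank: example["a"] % 2 == 0, with_rank=True)
# raises the "unexpected keyword argument 'with_rank'" error above in datasets 2.16.1
```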
### Expected behavior
Should work?
### Environment info
NVIDIA RTX 4090 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6564/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6564/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6563/comments | https://api.github.com/repos/huggingface/datasets/issues/6563/events | https://github.com/huggingface/datasets/issues/6563 | 2,068,302,402 | I_kwDODunzps57R8pC | 6,563 | `ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py) | {
"avatar_url": "https://avatars.githubusercontent.com/u/79070834?v=4",
"events_url": "https://api.github.com/users/wasertech/events{/privacy}",
"followers_url": "https://api.github.com/users/wasertech/followers",
"following_url": "https://api.github.com/users/wasertech/following{/other_user}",
"gists_url": "https://api.github.com/users/wasertech/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wasertech",
"id": 79070834,
"login": "wasertech",
"node_id": "MDQ6VXNlcjc5MDcwODM0",
"organizations_url": "https://api.github.com/users/wasertech/orgs",
"received_events_url": "https://api.github.com/users/wasertech/received_events",
"repos_url": "https://api.github.com/users/wasertech/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wasertech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasertech/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wasertech",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"@Wauplin Do you happen to know what's up?",
"<del>Installing `datasets` from `main` did the trick so I guess it will be fixed in the next release.\r\n\r\nNVM https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/utils/info_utils.py#L5",
"@wasertech upgrading `huggingface_hub` to a newer version should fix your issue. Latest version is 0.20.2. ",
"Ha yes I had pinned `tokenizers` to an old version so it downgraded `huggingface_hub`. Note to myself keep HuggingFace modules relatively close together chronologically release wise.",
"Glad to know your problem's solved! ",
"@Wauplin Thanks for your insight 👍",
"pip install --upgrade huggingface-hub"
] | 1970-01-01T00:00:00.000001 | 1,710 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Yep, it's not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.
```text
+ python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_size 4 --num_train_epochs 1 --learning_rate 1.41e-5 --gradient_accumulation_steps 8 --seq_length 4096 --output_dir output --log_with wandb
Traceback (most recent call last):
File "/home/trainer/sft_train.py", line 22, in <module>
from datasets import load_dataset
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 66, in <module>
from .arrow_reader import ArrowReader
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_reader.py", line 30, in <module>
from .download.download_config import DownloadConfig
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/download_manager.py", line 31, in <module>
from ..utils import tqdm as hf_tqdm
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/__init__.py", line 19, in <module>
from .info_utils import VerificationMode
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 5, in <module>
from huggingface_hub.utils import insecure_hashlib
ImportError: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (/home/trainer/llm-train/lib/python3.8/site-packages/huggingface_hub/utils/__init__.py)
```
### Steps to reproduce the bug
Using `datasets==2.16.1` and `huggingface_hub==0.17.3`, load a dataset with `load_dataset`.
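For reference, the failure is hit at import time, so any minimal script reproduces it (the dataset name is the one from the traceback above):
```python
from datasets import load_dataset  # the ImportError above is raised here

dataset = load_dataset("wasertech/OneOS")  # never reached
```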
### Expected behavior
The dataset should be (downloaded - if needed - and) returned.
### Environment info
```text
trainer@a311ae86939e:/mnt$ pip show datasets
Name: datasets
Version: 2.16.1
Summary: HuggingFace community-driven open-source library of datasets
Home-page: https://github.com/huggingface/datasets
Author: HuggingFace Inc.
Author-email: [email protected]
License: Apache 2.0
Location: /home/trainer/llm-train/lib/python3.8/site-packages
Requires: packaging, pyyaml, multiprocess, pyarrow-hotfix, pandas, pyarrow, xxhash, dill, numpy, aiohttp, tqdm, fsspec, requests, filelock, huggingface-hub
Required-by: trl, lm-eval, evaluate
trainer@a311ae86939e:/mnt$ pip show huggingface_hub
Name: huggingface-hub
Version: 0.17.3
Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub
Home-page: https://github.com/huggingface/huggingface_hub
Author: Hugging Face, Inc.
Author-email: [email protected]
License: Apache
Location: /home/trainer/llm-train/lib/python3.8/site-packages
Requires: requests, pyyaml, packaging, typing-extensions, tqdm, filelock, fsspec
Required-by: transformers, tokenizers, peft, evaluate, datasets, accelerate
trainer@a311ae86939e:/mnt$ huggingface-cli env
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.17.3
- Platform: Linux-6.5.13-7-MANJARO-x86_64-with-glibc2.29
- Python version: 3.8.10
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/trainer/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: wasertech
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.1.2
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 10.2.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.24.4
- pydantic: N/A
- aiohttp: 3.9.1
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/trainer/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/trainer/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/trainer/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/79070834?v=4",
"events_url": "https://api.github.com/users/wasertech/events{/privacy}",
"followers_url": "https://api.github.com/users/wasertech/followers",
"following_url": "https://api.github.com/users/wasertech/following{/other_user}",
"gists_url": "https://api.github.com/users/wasertech/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wasertech",
"id": 79070834,
"login": "wasertech",
"node_id": "MDQ6VXNlcjc5MDcwODM0",
"organizations_url": "https://api.github.com/users/wasertech/orgs",
"received_events_url": "https://api.github.com/users/wasertech/received_events",
"repos_url": "https://api.github.com/users/wasertech/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wasertech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasertech/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wasertech",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6563/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6563/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6562 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6562/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6562/comments | https://api.github.com/repos/huggingface/datasets/issues/6562/events | https://github.com/huggingface/datasets/issues/6562 | 2,067,904,504 | I_kwDODunzps57Qbf4 | 6,562 | datasets.DownloadMode.FORCE_REDOWNLOAD use cache to download dataset features with load_dataset function | {
"avatar_url": "https://avatars.githubusercontent.com/u/73234162?v=4",
"events_url": "https://api.github.com/users/LsTam91/events{/privacy}",
"followers_url": "https://api.github.com/users/LsTam91/followers",
"following_url": "https://api.github.com/users/LsTam91/following{/other_user}",
"gists_url": "https://api.github.com/users/LsTam91/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LsTam91",
"id": 73234162,
"login": "LsTam91",
"node_id": "MDQ6VXNlcjczMjM0MTYy",
"organizations_url": "https://api.github.com/users/LsTam91/orgs",
"received_events_url": "https://api.github.com/users/LsTam91/received_events",
"repos_url": "https://api.github.com/users/LsTam91/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LsTam91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LsTam91/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LsTam91",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,704 | null | NONE | null | ### Describe the bug
I have updated my dataset by adding a new feature and pushed it to the Hub. When I want to download it on my machine, which contains the old version, using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`, I get an error (pasted below).
It seems that the `load_dataset` function still uses the old features schema instead of downloading everything anew from the Hub.
I found a way to work around this issue by manually deleting the old dataset cache. But from my understanding of the `datasets.DownloadMode.FORCE_REDOWNLOAD` option, the dataset cache should be ignored.
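For reference, the manual workaround is roughly the following (the cache path shown is the default location and the folder name is an assumption; it differs per dataset):
```python
import shutil
from pathlib import Path

from datasets import load_dataset

cache_path = Path.home() / ".cache" / "huggingface" / "datasets" / "your_dataset_name"
shutil.rmtree(cache_path, ignore_errors=True)  # drop the stale cache
dataset = load_dataset("your_dataset_name")
```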
### Steps to reproduce the bug
1. Download your dataset in your machine using `datasets.load_dataset`
2. Create a new feature in your dataset and push it to the hub
3. On the same machine redownload your dataset using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`
### Expected behavior
```text
ValueError: Couldn't cast
id: string
level: string
context: list<element: string>
child 0, element: string
type: string
answer: string
question: string
supporting_facts: list<element: string>
child 0, element: string
fra_answer: string
fra_question: string
-- schema metadata --
huggingface: '{"info": {"features": {"id": {"dtype": "string", "_type": "' + 490
to
{'id': Value(dtype='string', id=None), 'level': Value(dtype='string', id=None), 'context': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'type': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'supporting_facts': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match
The above exception was the direct cause of the following exception:
DatasetGenerationError
...
DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
datasets-2.16.1 huggingface-hub-0.20.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6562/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6562/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6561/comments | https://api.github.com/repos/huggingface/datasets/issues/6561/events | https://github.com/huggingface/datasets/issues/6561 | 2,067,404,951 | I_kwDODunzps57OhiX | 6,561 | Document YAML configuration with "data_dir" | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | open | false | null | [] | null | [
"In particular, I would like to have an example of how to replace the following configuration (from https://huggingface.co/docs/hub/datasets-manual-configuration#splits)\r\n\r\n```\r\n---\r\nconfigs:\r\n- config_name: default\r\n data_files:\r\n - split: train\r\n path: \"data/*.csv\"\r\n - split: test\r\n path: \"holdout/*.csv\"\r\n---\r\n```\r\n\r\nwith the `data_dir` field."
] | 1970-01-01T00:00:00.000001 | 1,704 | null | COLLABORATOR | null | See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6561/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6560/comments | https://api.github.com/repos/huggingface/datasets/issues/6560/events | https://github.com/huggingface/datasets/issues/6560 | 2,065,637,625 | I_kwDODunzps57HyD5 | 6,560 | Support Video | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuvalkirstain",
"id": 57996478,
"login": "yuvalkirstain",
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuvalkirstain",
"user_view_type": "public"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"duplicate of #5225"
] | 1970-01-01T00:00:00.000001 | 1,724 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
HF datasets are awesome in supporting text and images. It would be great to see such support for videos :)
### Motivation
Video generation :)
### Your contribution
Will probably be limited to raising this feature request ;) | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6560/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6560/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6559/comments | https://api.github.com/repos/huggingface/datasets/issues/6559/events | https://github.com/huggingface/datasets/issues/6559 | 2,065,118,332 | I_kwDODunzps57FzR8 | 6,559 | Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default'] | {
"avatar_url": "https://avatars.githubusercontent.com/u/145004780?v=4",
"events_url": "https://api.github.com/users/zhulinJulia24/events{/privacy}",
"followers_url": "https://api.github.com/users/zhulinJulia24/followers",
"following_url": "https://api.github.com/users/zhulinJulia24/following{/other_user}",
"gists_url": "https://api.github.com/users/zhulinJulia24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhulinJulia24",
"id": 145004780,
"login": "zhulinJulia24",
"node_id": "U_kgDOCKSY7A",
"organizations_url": "https://api.github.com/users/zhulinJulia24/orgs",
"received_events_url": "https://api.github.com/users/zhulinJulia24/received_events",
"repos_url": "https://api.github.com/users/zhulinJulia24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhulinJulia24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhulinJulia24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhulinJulia24",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n\r\nYou can load it this way instead:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ncache_dir = 'path/to/your/cache/directory'\r\ndataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n```",
"> Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n> \r\n> You can load it this way instead:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> cache_dir = 'path/to/your/cache/directory'\r\n> dataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n> ```\r\n\r\nthanks, the command run successfully in the latest version\r\n",
"> Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n> \r\n> You can load it this way instead:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> cache_dir = 'path/to/your/cache/directory'\r\n> dataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', cache_dir=cache_dir)\r\n> ```\r\n\r\n@lhoestq \r\nIn this case, should we traverse through al 1024 json files to load the whole dataset?\r\nThanks!",
"It will only load the first file (`data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}` only mentions one file)",
"> It will only load the first file (`data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}` only mentions one file)\r\n\r\nThen what if we want to load the whole dataset?",
"There is a \"en\" subset that you can load (see the list in the \"subset\" dropdown at https://huggingface.co/datasets/allenai/c4)\r\n\r\n```python\r\ndataset = load_dataset('allenai/c4', 'en', split=\"train\")\r\n```\r\n\r\nalternatively you can specify all the the files yourself using a glob pattern (or a list):\r\n\r\n```python\r\ndataset = load_dataset('allenai/c4', data_files='en/c4-train.00000-of-*.json.gz', split=\"train\")\r\n```",
"> There is a \"en\" subset that you can load (see the list in the \"subset\" dropdown at https://huggingface.co/datasets/allenai/c4)\r\n> \r\n> ```python\r\n> dataset = load_dataset('allenai/c4', 'en', split=\"train\")\r\n> ```\r\n> \r\n> alternatively you can specify all the the files yourself using a glob pattern (or a list):\r\n> \r\n> ```python\r\n> dataset = load_dataset('allenai/c4', data_files='en/c4-train.00000-of-*.json.gz', split=\"train\")\r\n> ```\r\n\r\nThanks, the second solution works. The first line simply fails due to missing schema specific to this dataset.",
"The latest version of `datasets` seems to have broken my dataset for my users (see this Hugging Face issue: https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/discussions/3). I changed it by renaming my dataset's config to `default` instead of `train` and then updating my dataset card accordingly."
] | 1970-01-01T00:00:00.000001 | 1,712 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The Python script is:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
The script succeeds when the datasets version is 2.14.7.
When using 2.16.1, an error occurs:
```
ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']
```
### Steps to reproduce the bug
1. pip install datasets==2.16.1
2. run python script:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
### Expected behavior
The dataset should be loaded successfully in the latest version.
### Environment info
datasets 2.16.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/145004780?v=4",
"events_url": "https://api.github.com/users/zhulinJulia24/events{/privacy}",
"followers_url": "https://api.github.com/users/zhulinJulia24/followers",
"following_url": "https://api.github.com/users/zhulinJulia24/following{/other_user}",
"gists_url": "https://api.github.com/users/zhulinJulia24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhulinJulia24",
"id": 145004780,
"login": "zhulinJulia24",
"node_id": "U_kgDOCKSY7A",
"organizations_url": "https://api.github.com/users/zhulinJulia24/orgs",
"received_events_url": "https://api.github.com/users/zhulinJulia24/received_events",
"repos_url": "https://api.github.com/users/zhulinJulia24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhulinJulia24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhulinJulia24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhulinJulia24",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6559/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6559/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6558/comments | https://api.github.com/repos/huggingface/datasets/issues/6558/events | https://github.com/huggingface/datasets/issues/6558 | 2,064,885,984 | I_kwDODunzps57E6jg | 6,558 | OSError: image file is truncated (1 bytes not processed) #28323 | {
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andysingal",
"id": 20493493,
"login": "andysingal",
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"repos_url": "https://api.github.com/users/andysingal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andysingal",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"You can add \r\n\r\n```python\r\nfrom PIL import ImageFile\r\nImageFile.LOAD_TRUNCATED_IMAGES = True\r\n```\r\n\r\nafter the imports to be able to read truncated images."
] | 1970-01-01T00:00:00.000001 | 1,708 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[24], line 28
23 return example
25 # Filter the dataset
26 # filtered_dataset = dataset.filter(contains_number)
27 # Add the 'label' field in the dataset
---> 28 labeled_dataset = dataset.filter(contains_number).map(add_label)
29 # View the structure of the updated dataset
30 print(labeled_dataset)
File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:975, in DatasetDict.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)
972 if cache_file_names is None:
973 cache_file_names = {k: None for k in self}
974 return DatasetDict(
--> 975 {
976 k: dataset.filter(
977 function=function,
978 with_indices=with_indices,
979 input_columns=input_columns,
980 batched=batched,
981 batch_size=batch_size,
982 keep_in_memory=keep_in_memory,
983 load_from_cache_file=load_from_cache_file,
984 cache_file_name=cache_file_names[k],
985 writer_batch_size=writer_batch_size,
986 fn_kwargs=fn_kwargs,
987 num_proc=num_proc,
988 desc=desc,
989 )
990 for k, dataset in self.items()
991 }
992 )
File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:976, in <dictcomp>(.0)
972 if cache_file_names is None:
973 cache_file_names = {k: None for k in self}
974 return DatasetDict(
975 {
--> 976 k: dataset.filter(
977 function=function,
978 with_indices=with_indices,
979 input_columns=input_columns,
980 batched=batched,
981 batch_size=batch_size,
982 keep_in_memory=keep_in_memory,
983 load_from_cache_file=load_from_cache_file,
984 cache_file_name=cache_file_names[k],
985 writer_batch_size=writer_batch_size,
986 fn_kwargs=fn_kwargs,
987 num_proc=num_proc,
988 desc=desc,
989 )
990 for k, dataset in self.items()
991 }
992 )
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs)
550 self_format = {
551 "type": self._format_type,
552 "format_kwargs": self._format_kwargs,
553 "columns": self._format_columns,
554 "output_all_columns": self._output_all_columns,
555 }
556 # apply actual function
--> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
559 # re-apply format to the output
File /usr/local/lib/python3.10/dist-packages/datasets/fingerprint.py:481, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
477 validate_fingerprint(kwargs[fingerprint_name])
479 # Call actual function
--> 481 out = func(dataset, *args, **kwargs)
483 # Update fingerprint of in-place transforms + update in-place history of transforms
485 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3623, in Dataset.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
3620 if len(self) == 0:
3621 return self
-> 3623 indices = self.map(
3624 function=partial(
3625 get_indices_from_mask_function, function, batched, with_indices, input_columns, self._indices
3626 ),
3627 with_indices=True,
3628 features=Features({"indices": Value("uint64")}),
3629 batched=True,
3630 batch_size=batch_size,
3631 remove_columns=self.column_names,
3632 keep_in_memory=keep_in_memory,
3633 load_from_cache_file=load_from_cache_file,
3634 cache_file_name=cache_file_name,
3635 writer_batch_size=writer_batch_size,
3636 fn_kwargs=fn_kwargs,
3637 num_proc=num_proc,
3638 suffix_template=suffix_template,
3639 new_fingerprint=new_fingerprint,
3640 input_columns=input_columns,
3641 desc=desc or "Filter",
3642 )
3643 new_dataset = copy.deepcopy(self)
3644 new_dataset._indices = indices.data
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
590 self: "Dataset" = kwargs.pop("self")
591 # apply actual function
--> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
594 for dataset in datasets:
595 # Remove task templates if a column mapping of the template is no longer valid
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs)
550 self_format = {
551 "type": self._format_type,
552 "format_kwargs": self._format_kwargs,
553 "columns": self._format_columns,
554 "output_all_columns": self._output_all_columns,
555 }
556 # apply actual function
--> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
559 # re-apply format to the output
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3093, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
3087 if transformed_dataset is None:
3088 with hf_tqdm(
3089 unit=" examples",
3090 total=pbar_total,
3091 desc=desc or "Map",
3092 ) as pbar:
-> 3093 for rank, done, content in Dataset._map_single(**dataset_kwargs):
3094 if done:
3095 shards_done += 1
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3470, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
3466 indices = list(
3467 range(*(slice(i, i + batch_size).indices(shard.num_rows)))
3468 ) # Something simpler?
3469 try:
-> 3470 batch = apply_function_on_filtered_inputs(
3471 batch,
3472 indices,
3473 check_same_num_examples=len(shard.list_indexes()) > 0,
3474 offset=offset,
3475 )
3476 except NumExamplesMismatchError:
3477 raise DatasetTransformationNotAllowedError(
3478 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."
3479 ) from None
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3349, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset)
3347 if with_rank:
3348 additional_args += (rank,)
-> 3349 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
3350 if isinstance(processed_inputs, LazyDict):
3351 processed_inputs = {
3352 k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format
3353 }
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:6212, in get_indices_from_mask_function(function, batched, with_indices, input_columns, indices_mapping, *args, **fn_kwargs)
6209 if input_columns is None:
6210 # inputs only contains a batch of examples
6211 batch: dict = inputs[0]
-> 6212 num_examples = len(batch[next(iter(batch.keys()))])
6213 for i in range(num_examples):
6214 example = {key: batch[key][i] for key in batch}
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:272, in LazyDict.__getitem__(self, key)
270 value = self.data[key]
271 if key in self.keys_to_format:
--> 272 value = self.format(key)
273 self.data[key] = value
274 self.keys_to_format.remove(key)
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:375, in LazyBatch.format(self, key)
374 def format(self, key):
--> 375 return self.formatter.format_column(self.pa_table.select([key]))
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:442, in PythonFormatter.format_column(self, pa_table)
440 def format_column(self, pa_table: pa.Table) -> list:
441 column = self.python_arrow_extractor().extract_column(pa_table)
--> 442 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
443 return column
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:218, in PythonFeaturesDecoder.decode_column(self, column, column_name)
217 def decode_column(self, column: list, column_name: str) -> list:
--> 218 return self.features.decode_column(column, column_name) if self.features else column
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in Features.decode_column(self, column, column_name)
1938 def decode_column(self, column: list, column_name: str):
1939 """Decode column with custom feature decoding.
1940
1941 Args:
(...)
1948 `list[Any]`
1949 """
1950 return (
-> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
1952 if self._column_requires_decoding[column_name]
1953 else column
1954 )
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in <listcomp>(.0)
1938 def decode_column(self, column: list, column_name: str):
1939 """Decode column with custom feature decoding.
1940
1941 Args:
(...)
1948 `list[Any]`
1949 """
1950 return (
-> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
1952 if self._column_requires_decoding[column_name]
1953 else column
1954 )
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id)
1336 elif isinstance(schema, (Audio, Image)):
1337 # we pass the token to read and decode files from private repositories in streaming mode
1338 if obj is not None and schema.decode:
-> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1340 return obj
File /usr/local/lib/python3.10/dist-packages/datasets/features/image.py:185, in Image.decode_example(self, value, token_per_repo_id)
183 else:
184 image = PIL.Image.open(BytesIO(bytes_))
--> 185 image.load() # to avoid "Too many open files" errors
186 return image
File /usr/local/lib/python3.10/dist-packages/PIL/ImageFile.py:254, in ImageFile.load(self)
252 break
253 else:
--> 254 raise OSError(
255 "image file is truncated "
256 f"({len(b)} bytes not processed)"
257 )
259 b = b + s
260 n, err_code = decoder.decode(b)
OSError: image file is truncated (1 bytes not processed)
```
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("mehul7/captioned_military_aircraft")
from transformers import AutoImageProcessor
checkpoint = "microsoft/resnet-50"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
import re
from PIL import Image
import io
def contains_number(example):
try:
image = Image.open(io.BytesIO(example["image"]['bytes']))
t = image_processor(images=image, return_tensors="pt")['pixel_values']
except Exception as e:
print(f"Error processing image:{example['text']}")
return False
return bool(re.search(r'\d', example['text']))
# Define a function to add the 'label' field
def add_label(example):
lab = example['text'].split()
temp = 'NOT'
for item in lab:
if str(item[-1]).isdigit():
temp = item
break
example['label'] = temp
return example
# Filter the dataset
# filtered_dataset = dataset.filter(contains_number)
# Add the 'label' field in the dataset
labeled_dataset = dataset.filter(contains_number).map(add_label)
# View the structure of the updated dataset
print(labeled_dataset)
```
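A hedged workaround for the truncated file itself, following the suggestion in the comments (this only makes PIL tolerate the damaged image, it does not repair it), is to set the flag below right after the imports and before calling `filter`/`map`:

```python
from PIL import ImageFile

# Allow PIL to load images whose trailing bytes are missing instead of raising
# "OSError: image file is truncated (... bytes not processed)".
ImageFile.LOAD_TRUNCATED_IMAGES = True
```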
### Expected behavior
The labels should be created for each example,
the same as in this notebook: https://www.kaggle.com/code/jiabaowangts/dataset-air/notebook
### Environment info
Kaggle notebook P100 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6558/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6558/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6554/comments | https://api.github.com/repos/huggingface/datasets/issues/6554/events | https://github.com/huggingface/datasets/issues/6554 | 2,063,839,916 | I_kwDODunzps57A7Ks | 6,554 | Parquet exports are used even if revision is passed | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"I don't think this bug is a thing ? Do you have some code that leads to this issue ?"
] | 1970-01-01T00:00:00.000001 | 1,706 | 1970-01-01T00:00:00.000001 | MEMBER | null | We should not use Parquet exports if `revision` is passed.
I think this is a regression. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6554/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6554/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6553/comments | https://api.github.com/repos/huggingface/datasets/issues/6553/events | https://github.com/huggingface/datasets/issues/6553 | 2,063,474,183 | I_kwDODunzps56_h4H | 6,553 | Cannot import name 'load_dataset' from .... module ‘datasets’ | {
"avatar_url": "https://avatars.githubusercontent.com/u/83450192?v=4",
"events_url": "https://api.github.com/users/ciaoyizhen/events{/privacy}",
"followers_url": "https://api.github.com/users/ciaoyizhen/followers",
"following_url": "https://api.github.com/users/ciaoyizhen/following{/other_user}",
"gists_url": "https://api.github.com/users/ciaoyizhen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ciaoyizhen",
"id": 83450192,
"login": "ciaoyizhen",
"node_id": "MDQ6VXNlcjgzNDUwMTky",
"organizations_url": "https://api.github.com/users/ciaoyizhen/orgs",
"received_events_url": "https://api.github.com/users/ciaoyizhen/received_events",
"repos_url": "https://api.github.com/users/ciaoyizhen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ciaoyizhen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ciaoyizhen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ciaoyizhen",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I don't know My conpany conputer cannot work. but in my computer, it work?",
"Do you have a folder in your working directory called datasets?"
] | 1970-01-01T00:00:00.000001 | 1,708 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I installed the package with `python -m pip install datasets`, but `from datasets import load_dataset` fails with `ImportError: cannot import name 'load_dataset'`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
```
### Expected behavior
The import should succeed; instead it raises the "cannot import name 'load_dataset'" error.
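A quick diagnostic sketch, assuming the usual cause of this error (a local file or folder named `datasets` shadowing the installed package, as asked in the comments):

```python
# If this prints a path inside your project instead of site-packages,
# a local "datasets" file/folder is shadowing the installed library.
import datasets

print(datasets.__file__)
print(getattr(datasets, "__version__", "no __version__ attribute -> likely shadowed"))
```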
### Environment info
- `datasets` version: 2.15.0
- Python version: 3.10.12
- OS: Linux (exact version unknown) | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6553/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6553/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6552/comments | https://api.github.com/repos/huggingface/datasets/issues/6552/events | https://github.com/huggingface/datasets/issues/6552 | 2,063,157,187 | I_kwDODunzps56-UfD | 6,552 | Loading a dataset from Google Colab hangs at "Resolving data files". | {
"avatar_url": "https://avatars.githubusercontent.com/u/99779?v=4",
"events_url": "https://api.github.com/users/KelSolaar/events{/privacy}",
"followers_url": "https://api.github.com/users/KelSolaar/followers",
"following_url": "https://api.github.com/users/KelSolaar/following{/other_user}",
"gists_url": "https://api.github.com/users/KelSolaar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KelSolaar",
"id": 99779,
"login": "KelSolaar",
"node_id": "MDQ6VXNlcjk5Nzc5",
"organizations_url": "https://api.github.com/users/KelSolaar/orgs",
"received_events_url": "https://api.github.com/users/KelSolaar/received_events",
"repos_url": "https://api.github.com/users/KelSolaar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KelSolaar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KelSolaar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KelSolaar",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This bug comes from the `huggingface_hub` library, see: https://github.com/huggingface/huggingface_hub/issues/1952\r\n\r\nA fix is provided at https://github.com/huggingface/huggingface_hub/pull/1953. Feel free to install `huggingface_hub` from this PR, or wait for it to be merged and the new version of `huggingface_hub` to be released",
"Thanks!"
] | 1970-01-01T00:00:00.000001 | 1,704 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Hello,
I'm trying to load a dataset from Google Colab but the process hangs at `Resolving data files`:

It is happening when the `_get_origin_metadata` definition is invoked:
```python
def _get_origin_metadata(
data_files: List[str],
max_workers=64,
download_config: Optional[DownloadConfig] = None,
) -> Tuple[str]:
return thread_map(
partial(_get_single_origin_metadata, download_config=download_config),
data_files,
max_workers=max_workers,
tqdm_class=hf_tqdm,
desc="Resolving data files",
disable=len(data_files) <= 16,
    )
```
The thread is then stuck at `waiter.acquire()` in the builtin `threading.py` file.
I can load the dataset just fine on my machine.
Cheers,
Thomas
### Steps to reproduce the bug
In Google Colab:
```python
!pip install datasets
from datasets import load_dataset
dataset = load_dataset("colour-science/color-checker-detection-dataset")
```
### Expected behavior
The dataset should be loaded.
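A small sketch to confirm which `huggingface_hub` version is installed before and after applying the fix mentioned in the comments (the hang was traced to `HfFileSystem` in `huggingface_hub`, so upgrading that package once the fix is released should resolve it):

```python
# Check the installed huggingface_hub version; upgrade with
# `pip install -U huggingface_hub` once the linked fix is released.
import huggingface_hub

print(huggingface_hub.__version__)
```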
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.1
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6552/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6552/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6549/comments | https://api.github.com/repos/huggingface/datasets/issues/6549/events | https://github.com/huggingface/datasets/issues/6549 | 2,062,420,259 | I_kwDODunzps567gkj | 6,549 | Loading from hf hub with clearer error message | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Maybe we can add a helper message like `Maybe try again using \"hf://path/without/resolve\"` if the path contains `/resolve/` ?\r\n\r\ne.g.\r\n\r\n```\r\nFileNotFoundError: Unable to find 'hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json'\r\nIt looks like you used parts of the URL of the file from the Hugging Face website, but you should remove the \"/resolve/<revision>\" part to have a valid `hf://` path.\r\nPlease try again using this path instead:\r\n hf://datasets/HuggingFaceTB/eval_data/eval_data_context_and_answers.json\r\n```\r\n\r\nand suggest `f\"hf://datasets/HuggingFaceTB/eval_data@{revision}/eval_data_context_and_answers.json\"` if revision != \"main\"\r\n\r\nEDIT: I think this message should also be raised from the `huggingface_hub`'s `HfFileSystem` implementation"
] | 1970-01-01T00:00:00.000001 | 1,704 | null | MEMBER | null | ### Feature request
Shouldn't this kinda work ?
```
Dataset.from_json("hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json")
```
I got an error
```
File ~/miniconda3/envs/datatrove/lib/python3.10/site-packages/datasets/data_files.py:380, in resolve_pattern(pattern, base_path, allowed_extensions, download_config)
378 if allowed_extensions is not None:
379 error_msg += f" with any supported extension {list(allowed_extensions)}"
--> 380 raise FileNotFoundError(error_msg)
381 return out
FileNotFoundError: Unable to find 'hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json'
```
(I'm logged in)
Fix: the correct path is
```
hf://datasets/HuggingFaceTB/eval_data/eval_data_context_and_answers.json
```
Proposal: raise a clearer error
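A minimal sketch (not the actual `datasets` code) of how such a check could rewrite the website-style URL into a valid `hf://` path, along the lines of the suggestion in the comments; the helper name is hypothetical:

```python
import re

# Hypothetical helper: turn "hf://datasets/<repo>/resolve/<revision>/<file>"
# into the "hf://datasets/<repo>[@<revision>]/<file>" form that fsspec expects.
def suggest_hf_path(path: str) -> str:
    match = re.match(r"(hf://datasets/[^/]+/[^/]+)/resolve/([^/]+)/(.+)", path)
    if match is None:
        return path
    repo, revision, file_path = match.groups()
    return f"{repo}/{file_path}" if revision == "main" else f"{repo}@{revision}/{file_path}"

print(suggest_hf_path(
    "hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json"
))
# -> hf://datasets/HuggingFaceTB/eval_data/eval_data_context_and_answers.json
```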
### Motivation
Clearer error message
### Your contribution
Can open a PR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6549/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6548/comments | https://api.github.com/repos/huggingface/datasets/issues/6548/events | https://github.com/huggingface/datasets/issues/6548 | 2,061,047,984 | I_kwDODunzps562Riw | 6,548 | Skip if a dataset has issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/143214684?v=4",
"events_url": "https://api.github.com/users/hadianasliwa/events{/privacy}",
"followers_url": "https://api.github.com/users/hadianasliwa/followers",
"following_url": "https://api.github.com/users/hadianasliwa/following{/other_user}",
"gists_url": "https://api.github.com/users/hadianasliwa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hadianasliwa",
"id": 143214684,
"login": "hadianasliwa",
"node_id": "U_kgDOCIlIXA",
"organizations_url": "https://api.github.com/users/hadianasliwa/orgs",
"received_events_url": "https://api.github.com/users/hadianasliwa/received_events",
"repos_url": "https://api.github.com/users/hadianasliwa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hadianasliwa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadianasliwa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hadianasliwa",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"It looks like a transient DNS issue. It should work fine now if you try again.\r\n\r\nThere is no parameter in load_dataset to skip failed downloads. In your case it would have skipped every single subsequent download until the DNS issue was resolved anyway."
] | 1970-01-01T00:00:00.000001 | 1,704 | null | NONE | null | ### Describe the bug
Hello everyone,
I'm using **load_dataset** from **huggingface** to download datasets and I'm facing an issue: the download starts, reaches a certain point, and then fails with the following error:
Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10f96f67d609c5d442980dc9/20231101.ext/train-00000-of-00001.parquet
Failed to resolve \'huggingface.co\' ([Errno -3] Temporary failure in name resolution)"))')))

So I was wondering: is there a parameter that can be passed to `load_dataset()` to skip files that can't be downloaded?
### Steps to reproduce the bug
Is there a parameter that can be passed to huggingface's `load_dataset()` to skip files that can't be downloaded?
### Expected behavior
load_dataset() finishes without error
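Since there is no such parameter (see the reply above), a hedged workaround, assuming the failures are transient network/DNS errors, is to retry the whole call; the helper below is illustrative only and the dataset/config names are examples:

```python
import time
from datasets import load_dataset

# Hypothetical retry helper -- not a datasets feature. It retries the full
# download a few times when a transient error (e.g. a DNS failure) occurs.
def load_with_retries(*args, retries=3, wait_seconds=60, **kwargs):
    for attempt in range(retries):
        try:
            return load_dataset(*args, **kwargs)
        except Exception as error:
            if attempt == retries - 1:
                raise
            print(f"Download failed ({error!r}); retrying in {wait_seconds}s...")
            time.sleep(wait_seconds)

dataset = load_with_retries("wikimedia/wikipedia", "20231101.en")
```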
### Environment info
None | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6548/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6548/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6545/comments | https://api.github.com/repos/huggingface/datasets/issues/6545/events | https://github.com/huggingface/datasets/issues/6545 | 2,060,789,507 | I_kwDODunzps561ScD | 6,545 | `image` column not automatically inferred if image dataset only contains 1 image | {
"avatar_url": "https://avatars.githubusercontent.com/u/788417?v=4",
"events_url": "https://api.github.com/users/apolinario/events{/privacy}",
"followers_url": "https://api.github.com/users/apolinario/followers",
"following_url": "https://api.github.com/users/apolinario/following{/other_user}",
"gists_url": "https://api.github.com/users/apolinario/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/apolinario",
"id": 788417,
"login": "apolinario",
"node_id": "MDQ6VXNlcjc4ODQxNw==",
"organizations_url": "https://api.github.com/users/apolinario/orgs",
"received_events_url": "https://api.github.com/users/apolinario/received_events",
"repos_url": "https://api.github.com/users/apolinario/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/apolinario/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apolinario/subscriptions",
"type": "User",
"url": "https://api.github.com/users/apolinario",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,704 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
By default, the standard image dataset builder maps `file_name` to `image` when loading an image dataset.
However, if the dataset contains only 1 image, this mapping does not take place.
### Steps to reproduce the bug
Input
(dataset with one image `multimodalart/repro_1_image`)
```py
from datasets import load_dataset
dataset = load_dataset("multimodalart/repro_1_image")
dataset
```
Output:
```py
DatasetDict({
train: Dataset({
features: ['file_name', 'prompt'],
num_rows: 1
})
})
```
Input
(dataset with 2+ images `multimodalart/repro_2_image`)
```py
from datasets import load_dataset
dataset = load_dataset("multimodalart/repro_2_image")
dataset
```
Output:
```py
DatasetDict({
train: Dataset({
features: ['image', 'prompt'],
num_rows: 2
})
})
```
### Expected behavior
Expected to map `file_name` → `image` for all dataset sizes, including 1.
### Environment info
Both latest main and 2.16.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6545/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6542/comments | https://api.github.com/repos/huggingface/datasets/issues/6542/events | https://github.com/huggingface/datasets/issues/6542 | 2,059,198,575 | I_kwDODunzps56vOBv | 6,542 | Datasets : wikipedia 20220301.en error | {
"avatar_url": "https://avatars.githubusercontent.com/u/53203620?v=4",
"events_url": "https://api.github.com/users/ppx666/events{/privacy}",
"followers_url": "https://api.github.com/users/ppx666/followers",
"following_url": "https://api.github.com/users/ppx666/following{/other_user}",
"gists_url": "https://api.github.com/users/ppx666/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ppx666",
"id": 53203620,
"login": "ppx666",
"node_id": "MDQ6VXNlcjUzMjAzNjIw",
"organizations_url": "https://api.github.com/users/ppx666/orgs",
"received_events_url": "https://api.github.com/users/ppx666/received_events",
"repos_url": "https://api.github.com/users/ppx666/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ppx666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ppx666/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ppx666",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! We now recommend using the `wikimedia/wikipedia` dataset, can you try loading this one instead ?\r\n\r\n```python\r\nwiki_dataset = load_dataset(\"wikimedia/wikipedia\", \"20231101.en\")\r\n```",
"This bug has been fixed in `2.16.1` thanks to https://github.com/huggingface/datasets/pull/6544, feel free to update `datasets` and re-run your code :)\r\n\r\n```\r\npip install -U datasets\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,704 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When I used `load_dataset` to download this dataset, the following error occurred. The main problem is that the target data no longer exists.
### Steps to reproduce the bug
1. I tried downloading directly.
```python
wiki_dataset = load_dataset("wikipedia", "20220301.en")
```
An exception occurred
```
MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
```
2. I modified the code as prompted.
```python
wiki_dataset = load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
```
An exception occurred:
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
```
### Expected behavior
I searched the parent directory of the corresponding URL, but there was no "20220301" directory there.
I really need this dataset and hope you can provide a download method.
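For anyone hitting the same wall, a minimal sketch of the alternative recommended in the comments (the pre-processed dumps under `wikimedia/wikipedia`, which need neither Apache Beam nor the removed 20220301 dump):

```python
from datasets import load_dataset

# The 20231101 English dump, hosted as ready-to-use Parquet files.
wiki_dataset = load_dataset("wikimedia/wikipedia", "20231101.en")
```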
### Environment info
python 3.8
datasets 2.16.0
apache-beam 2.52.0
dill 0.3.7
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6542/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6542/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6541/comments | https://api.github.com/repos/huggingface/datasets/issues/6541/events | https://github.com/huggingface/datasets/issues/6541 | 2,058,983,826 | I_kwDODunzps56uZmS | 6,541 | Dataset not loading successfully. | {
"avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4",
"events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}",
"followers_url": "https://api.github.com/users/hi-sushanta/followers",
"following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hi-sushanta",
"id": 93595990,
"login": "hi-sushanta",
"node_id": "U_kgDOBZQpVg",
"organizations_url": "https://api.github.com/users/hi-sushanta/orgs",
"received_events_url": "https://api.github.com/users/hi-sushanta/received_events",
"repos_url": "https://api.github.com/users/hi-sushanta/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hi-sushanta",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This is a problem with your environment. You should be able to fix it by upgrading `numpy` based on [this](https://github.com/numpy/numpy/issues/23570) issue.",
"Bro I already update numpy package.",
"Then, this shouldn't throw an error on your machine:\r\n```python\r\nimport numpy\r\nnumpy._no_nep50_warning\r\n```\r\n\r\nIf it does, run `python -m pip install numpy` to ensure the correct `pip` is used for the package installation.",
"Your suggestion to run `python -m pip install numpy` proved to be successful, and my issue has been resolved. I am grateful for your assistance, @mariosasko"
] | 1970-01-01T00:00:00.000001 | 1,705 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When I run the code below, it shows this error: `AttributeError: module 'numpy' has no attribute '_no_nep50_warning'`.
I also filed this issue in the transformers library; please check it out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to reproduce the bug
## Reproduction
Hi, please check this code; when I run it, it shows the attribute error below.
```
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration
# Select an audio file and read it:
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[0]["audio"]
waveform = audio_sample["array"]
sampling_rate = audio_sample["sampling_rate"]
# Load the Whisper model in Hugging Face format:
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
# Use the model and processor to transcribe the audio:
input_features = processor(
waveform, sampling_rate=sampling_rate, return_tensors="pt"
).input_features
# Generate token ids
predicted_ids = model.generate(input_features)
# Decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
transcription[0]
```
**Attribute Error**
```
AttributeError Traceback (most recent call last)
Cell In[9], line 6
4 # Select an audio file and read it:
5 ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
----> 6 audio_sample = ds[0]["audio"]
7 waveform = audio_sample["array"]
8 sampling_rate = audio_sample["sampling_rate"]
File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2795, in Dataset.__getitem__(self, key)
2793 def __getitem__(self, key): # noqa: F811
2794 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2795 return self._getitem(key)
File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2780, in Dataset._getitem(self, key, **kwargs)
2778 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs)
2779 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2780 formatted_output = format_table(
2781 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2782 )
2783 return formatted_output
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:629, in format_table(table, key, formatter, format_columns, output_all_columns)
627 python_formatter = PythonFormatter(features=formatter.features)
628 if format_columns is None:
--> 629 return formatter(pa_table, query_type=query_type)
630 elif query_type == "column":
631 if key in format_columns:
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:396, in Formatter.__call__(self, pa_table, query_type)
394 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
395 if query_type == "row":
--> 396 return self.format_row(pa_table)
397 elif query_type == "column":
398 return self.format_column(pa_table)
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:437, in PythonFormatter.format_row(self, pa_table)
435 return LazyRow(pa_table, self)
436 row = self.python_arrow_extractor().extract_row(pa_table)
--> 437 row = self.python_features_decoder.decode_row(row)
438 return row
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:215, in PythonFeaturesDecoder.decode_row(self, row)
214 def decode_row(self, row: dict) -> dict:
--> 215 return self.features.decode_example(row) if self.features else row
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1917, in Features.decode_example(self, example, token_per_repo_id)
1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1904 """Decode example with custom feature decoding.
1905
1906 Args:
(...)
1914 `dict[str, Any]`
1915 """
-> 1917 return {
1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1919 if self._column_requires_decoding[column_name]
1920 else value
1921 for column_name, (feature, value) in zip_dict(
1922 {key: value for key, value in self.items() if key in example}, example
1923 )
1924 }
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1918, in <dictcomp>(.0)
1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1904 """Decode example with custom feature decoding.
1905
1906 Args:
(...)
1914 `dict[str, Any]`
1915 """
1917 return {
-> 1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1919 if self._column_requires_decoding[column_name]
1920 else value
1921 for column_name, (feature, value) in zip_dict(
1922 {key: value for key, value in self.items() if key in example}, example
1923 )
1924 }
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id)
1336 elif isinstance(schema, (Audio, Image)):
1337 # we pass the token to read and decode files from private repositories in streaming mode
1338 if obj is not None and schema.decode:
-> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1340 return obj
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/audio.py:191, in Audio.decode_example(self, value, token_per_repo_id)
189 array = array.T
190 if self.mono:
--> 191 array = librosa.to_mono(array)
192 if self.sampling_rate and self.sampling_rate != sampling_rate:
193 array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)
File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:78, in attach.<locals>.__getattr__(name)
76 submod_path = f"{package_name}.{attr_to_modules[name]}"
77 submod = importlib.import_module(submod_path)
---> 78 attr = getattr(submod, name)
80 # If the attribute lives in a file (module) with the same
81 # name as the attribute, ensure that the attribute and *not*
82 # the module is accessible on the package.
83 if name == attr_to_modules[name]:
File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:77, in attach.<locals>.__getattr__(name)
75 elif name in attr_to_modules:
76 submod_path = f"{package_name}.{attr_to_modules[name]}"
---> 77 submod = importlib.import_module(submod_path)
78 attr = getattr(submod, name)
80 # If the attribute lives in a file (module) with the same
81 # name as the attribute, ensure that the attribute and *not*
82 # the module is accessible on the package.
File /usr/lib/python3.8/importlib/__init__.py:127, in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1014, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:991, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:975, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:671, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:848, in exec_module(self, module)
File <frozen importlib._bootstrap>:219, in _call_with_frames_removed(f, *args, **kwds)
File /opt/pytorch/lib/python3.8/site-packages/librosa/core/audio.py:13
11 import audioread
12 import numpy as np
---> 13 import scipy.signal
14 import soxr
15 import lazy_loader as lazy
File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/__init__.py:323
314 from ._spline import ( # noqa: F401
315 cspline2d,
316 qspline2d,
(...)
319 symiirorder2,
320 )
322 from ._bsplines import *
--> 323 from ._filter_design import *
324 from ._fir_filter_design import *
325 from ._ltisys import *
File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/_filter_design.py:16
13 from numpy.polynomial.polynomial import polyval as npp_polyval
14 from numpy.polynomial.polynomial import polyvalfromroots
---> 16 from scipy import special, optimize, fft as sp_fft
17 from scipy.special import comb
18 from scipy._lib._util import float_factorial
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/__init__.py:405
1 """
2 =====================================================
3 Optimization and root finding (:mod:`scipy.optimize`)
(...)
401
402 """
404 from ._optimize import *
--> 405 from ._minimize import *
406 from ._root import *
407 from ._root_scalar import *
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_minimize.py:26
24 from ._trustregion_krylov import _minimize_trust_krylov
25 from ._trustregion_exact import _minimize_trustregion_exact
---> 26 from ._trustregion_constr import _minimize_trustregion_constr
28 # constrained minimization
29 from ._lbfgsb_py import _minimize_lbfgsb
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/__init__.py:4
1 """This module contains the equality constrained SQP solver."""
----> 4 from .minimize_trustregion_constr import _minimize_trustregion_constr
6 __all__ = ['_minimize_trustregion_constr']
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/minimize_trustregion_constr.py:5
3 from scipy.sparse.linalg import LinearOperator
4 from .._differentiable_functions import VectorFunction
----> 5 from .._constraints import (
6 NonlinearConstraint, LinearConstraint, PreparedConstraint, strict_bounds)
7 from .._hessian_update_strategy import BFGS
8 from .._optimize import OptimizeResult
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_constraints.py:8
6 from ._optimize import OptimizeWarning
7 from warnings import warn, catch_warnings, simplefilter
----> 8 from numpy.testing import suppress_warnings
9 from scipy.sparse import issparse
12 def _arr_to_scalar(x):
13 # If x is a numpy array, return x.item(). This will
14 # fail if the array has more than one element.
File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/__init__.py:11
8 from unittest import TestCase
10 from . import _private
---> 11 from ._private.utils import *
12 from ._private.utils import (_assert_valid_refcount, _gen_alignment_data)
13 from ._private import extbuild, decorators as dec
File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/_private/utils.py:480
476 pprint.pprint(desired, msg)
477 raise AssertionError(msg.getvalue())
--> 480 @np._no_nep50_warning()
481 def assert_almost_equal(actual,desired,decimal=7,err_msg='',verbose=True):
482 """
483 Raises an AssertionError if two items are not equal up to desired
484 precision.
(...)
548
549 """
550 __tracebackhide__ = True # Hide traceback for py.test
File /opt/pytorch/lib/python3.8/site-packages/numpy/__init__.py:313, in __getattr__(attr)
305 raise AttributeError(__former_attrs__[attr])
307 # Importing Tester requires importing all of UnitTest which is not a
308 # cheap import Since it is mainly used in test suits, we lazy import it
309 # here to save on the order of 10 ms of import time for most users
310 #
311 # The previous way Tester was imported also had a side effect of adding
312 # the full `numpy.testing` namespace
--> 313 if attr == 'testing':
314 import numpy.testing as testing
315 return testing
AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
```
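The final AttributeError seems to point at the numpy/scipy installation rather than at `pipeline` itself; a quick diagnostic sketch (hedged, not whisper-specific):
```python
import numpy
import scipy

print(numpy.__version__, scipy.__version__)
# numpy.testing expects this private helper in recent numpy releases; if it is
# missing, reinstalling numpy and scipy so their versions match usually helps
print(hasattr(numpy, "_no_nep50_warning"))
```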
### Expected behavior
``` ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' ```
Also, this script is provided on your official website, so please update it there as well:
[script](https://huggingface.co/docs/transformers/model_doc/whisper)
### Environment info
**System Info**
* transformers -> 4.36.1
* datasets -> 2.15.0
* huggingface_hub -> 0.19.4
* python -> 3.8.10
* accelerate -> 0.25.0
* pytorch -> 2.0.1+cpu
* Using GPU in Script -> No
| {
"avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4",
"events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}",
"followers_url": "https://api.github.com/users/hi-sushanta/followers",
"following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hi-sushanta",
"id": 93595990,
"login": "hi-sushanta",
"node_id": "U_kgDOBZQpVg",
"organizations_url": "https://api.github.com/users/hi-sushanta/orgs",
"received_events_url": "https://api.github.com/users/hi-sushanta/received_events",
"repos_url": "https://api.github.com/users/hi-sushanta/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hi-sushanta",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6541/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6541/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6540/comments | https://api.github.com/repos/huggingface/datasets/issues/6540/events | https://github.com/huggingface/datasets/issues/6540 | 2,058,965,157 | I_kwDODunzps56uVCl | 6,540 | Extreme inefficiency for `save_to_disk` when merging datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/43512683?v=4",
"events_url": "https://api.github.com/users/KatarinaYuan/events{/privacy}",
"followers_url": "https://api.github.com/users/KatarinaYuan/followers",
"following_url": "https://api.github.com/users/KatarinaYuan/following{/other_user}",
"gists_url": "https://api.github.com/users/KatarinaYuan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KatarinaYuan",
"id": 43512683,
"login": "KatarinaYuan",
"node_id": "MDQ6VXNlcjQzNTEyNjgz",
"organizations_url": "https://api.github.com/users/KatarinaYuan/orgs",
"received_events_url": "https://api.github.com/users/KatarinaYuan/received_events",
"repos_url": "https://api.github.com/users/KatarinaYuan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KatarinaYuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KatarinaYuan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KatarinaYuan",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Concatenating datasets doesn't create any indices mapping - so flattening indices is not needed (unless you shuffle the dataset).\r\nCan you share the snippet of code you are using to merge your datasets and save them to disk ?"
] | 1970-01-01T00:00:00.000001 | 1,703 | null | NONE | null | ### Describe the bug
Hi, I tried to merge 22M sequences of data in total, where each sequence has a maximum length of 2000. I found that merging these datasets and then calling `save_to_disk` is extremely slow because of flattening the indices. I'm wondering if you have any suggestions or guidance on this; a minimal sketch of what I mean is below. Thank you very much!
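For context, this is roughly the merge-and-save path I mean (paths are hypothetical; concatenation alone should not create an indices mapping, so `flatten_indices` would only be triggered by operations like `shuffle` or `select`):
```python
from datasets import concatenate_datasets, load_from_disk

# hypothetical shard locations
shards = [load_from_disk(f"shards/part_{i}") for i in range(4)]

merged = concatenate_datasets(shards)
# merged = merged.shuffle(seed=42)  # this is the kind of step that introduces an indices mapping
merged.save_to_disk("merged_dataset")
```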
### Steps to reproduce the bug
The source data is too big to demonstrate
### Expected behavior
The source data is too big to demonstrate
### Environment info
python 3.9.0
datasets 2.7.0
pytorch 2.0.0
tokenizers 0.13.1
transformers 4.31.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6540/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6540/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6539/comments | https://api.github.com/repos/huggingface/datasets/issues/6539/events | https://github.com/huggingface/datasets/issues/6539 | 2,058,493,960 | I_kwDODunzps56siAI | 6,539 | 'Repo card metadata block was not found' when loading a pragmeval dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/3647577?v=4",
"events_url": "https://api.github.com/users/lambdaofgod/events{/privacy}",
"followers_url": "https://api.github.com/users/lambdaofgod/followers",
"following_url": "https://api.github.com/users/lambdaofgod/following{/other_user}",
"gists_url": "https://api.github.com/users/lambdaofgod/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lambdaofgod",
"id": 3647577,
"login": "lambdaofgod",
"node_id": "MDQ6VXNlcjM2NDc1Nzc=",
"organizations_url": "https://api.github.com/users/lambdaofgod/orgs",
"received_events_url": "https://api.github.com/users/lambdaofgod/received_events",
"repos_url": "https://api.github.com/users/lambdaofgod/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lambdaofgod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lambdaofgod/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lambdaofgod",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,703 | null | NONE | null | ### Describe the bug
I can't load dataset subsets of 'pragmeval'.
The funny thing is that I ran the dataset author's [colab notebook](https://colab.research.google.com/drive/1sg--LF4z7XR1wxAOfp0-3d4J6kQ9nj_A?usp=sharing) and it works just fine. I tried to install exactly the same packages that are installed on Colab using poetry, so my environment only differs from the Colab one in the Linux version, yet I still get the same bug outside Colab.
### Steps to reproduce the bug
Install dependencies with poetry
pyproject.toml
```
[tool.poetry]
name = "project"
version = "0.1.0"
description = ""
authors = []
[tool.poetry.dependencies]
python = "^3.10"
datasets = "2.16.0"
pandas = "1.5.3"
pyarrow = "10.0.1"
huggingface-hub = "0.19.4"
fsspec = "2023.6.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
`poetry run python -c "import datasets; print(datasets.get_dataset_config_names('pragmeval'))"`
prints ['default']
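For completeness, this is the kind of call that then fails for me (sketch; `emergent` is just one of the subsets listed below):
```python
from datasets import load_dataset

# fails with "Repo card metadata block was not found" instead of loading the subset
load_dataset("pragmeval", "emergent")
```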
### Expected behavior
The command should print
```
['emergent',
'emobank-arousal',
'emobank-dominance',
'emobank-valence',
'gum',
'mrda',
'pdtb',
'persuasiveness-claimtype',
'persuasiveness-eloquence',
'persuasiveness-premisetype',
'persuasiveness-relevance',
'persuasiveness-specificity',
'persuasiveness-strength',
'sarcasm',
'squinky-formality',
'squinky-implicature',
'squinky-informativeness',
'stac',
'switchboard',
'verifiability']
```
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6539/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6539/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6538/comments | https://api.github.com/repos/huggingface/datasets/issues/6538/events | https://github.com/huggingface/datasets/issues/6538 | 2,057,377,630 | I_kwDODunzps56oRde | 6,538 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | {
"avatar_url": "https://avatars.githubusercontent.com/u/131662185?v=4",
"events_url": "https://api.github.com/users/Sonali-Behera-TRT/events{/privacy}",
"followers_url": "https://api.github.com/users/Sonali-Behera-TRT/followers",
"following_url": "https://api.github.com/users/Sonali-Behera-TRT/following{/other_user}",
"gists_url": "https://api.github.com/users/Sonali-Behera-TRT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sonali-Behera-TRT",
"id": 131662185,
"login": "Sonali-Behera-TRT",
"node_id": "U_kgDOB9kBaQ",
"organizations_url": "https://api.github.com/users/Sonali-Behera-TRT/orgs",
"received_events_url": "https://api.github.com/users/Sonali-Behera-TRT/received_events",
"repos_url": "https://api.github.com/users/Sonali-Behera-TRT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sonali-Behera-TRT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sonali-Behera-TRT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sonali-Behera-TRT",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error",
"I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle?",
"I have the same issue now and didn't have this problem around 2 weeks ago.",
"> Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error\r\n\r\nYes, I am sure\r\n\r\n```\r\n!pip show datasets\r\nName: datasets\r\nVersion: 2.16.1\r\nSummary: HuggingFace community-driven open-source library of datasets\r\nHome-page: https://github.com/huggingface/datasets\r\nAuthor: HuggingFace Inc.\r\nAuthor-email: [email protected]\r\nLicense: Apache 2.0\r\nLocation: /opt/conda/lib/python3.10/site-packages\r\nRequires: aiohttp, dill, filelock, fsspec, huggingface-hub, multiprocess, numpy, packaging, pandas, pyarrow, pyarrow-hotfix, pyyaml, requests, tqdm, xxhash\r\nRequired-by: trl\r\n```",
"> I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle?\r\n\r\nDon't know about other people. But I am having this issue whose solution I can't find anywhere. And this issue still persists. ",
"> I have the same issue now and didn't have this problem around 2 weeks ago.\r\n\r\nSame here",
"I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing.\r\n",
"> I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing.\r\n\r\nI also have datasets version 2.16, but the error is still there.",
"Can you try re-installing `datasets` ?",
"> Can you try re-installing `datasets` ?\r\n\r\nI tried re-installing. Still getting the same error. \r\n",
"> > Can you try re-installing `datasets` ?\r\n> \r\n> I tried re-installing. Still getting the same error.\r\n\r\nIn kaggle I used:\r\n- `%pip install -U datasets`\r\nand then restarted runtime and then everything works fine.",
"> > > Can you try re-installing `datasets` ?\r\n> > \r\n> > \r\n> > I tried re-installing. Still getting the same error.\r\n> \r\n> In kaggle I used:\r\n> \r\n> * `%pip install -U datasets`\r\n> and then restarted runtime and then everything works fine.\r\n\r\nYes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?",
"> > > > Can you try re-installing `datasets` ?\r\n> > > \r\n> > > \r\n> > > I tried re-installing. Still getting the same error.\r\n> > \r\n> > \r\n> > In kaggle I used:\r\n> > \r\n> > * `%pip install -U datasets`\r\n> > and then restarted runtime and then everything works fine.\r\n> \r\n> Yes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?\r\nFor some packages it is required.\r\nhttps://stackoverflow.com/questions/57831187/need-to-restart-runtime-before-import-an-installed-package-in-colab\r\n",
"> > > > > Can you try re-installing `datasets` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried re-installing. Still getting the same error.\r\n> > > \r\n> > > \r\n> > > In kaggle I used:\r\n> > > \r\n> > > * `%pip install -U datasets`\r\n> > > and then restarted runtime and then everything works fine.\r\n> > \r\n> > \r\n> > Yes, this is working. When I restart the runtime after installing packages, it's working perfectly. Thank you so much. But why do we need to restart runtime every time after installing packages?\r\n> > For some packages it is required.\r\n> > https://stackoverflow.com/questions/57831187/need-to-restart-runtime-before-import-an-installed-package-in-colab\r\n\r\nThank you for your assistance. I dedicated the past 2-3 weeks to resolving this issue. Interestingly, it runs flawlessly in Colab without requiring a runtime restart. However, the problem persisted exclusively in Kaggle. I appreciate your help once again. Thank you.",
"Closing this issue as it is not related to the datasets library; rather, it's linked to platform-related issues."
] | 1970-01-01T00:00:00.000001 | 1,704 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I get the following error while importing the packages.
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
from huggingface_hub import login
import pandas as pd
```
Error:
````
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[5], line 14
4 from transformers import (
5 AutoModelForCausalLM,
6 AutoTokenizer,
(...)
11 logging
12 )
13 from peft import LoraConfig, PeftModel
---> 14 from trl import SFTTrainer
15 from huggingface_hub import login
16 import pandas as pd
File /opt/conda/lib/python3.10/site-packages/trl/__init__.py:21
8 from .import_utils import (
9 is_diffusers_available,
10 is_npu_available,
(...)
13 is_xpu_available,
14 )
15 from .models import (
16 AutoModelForCausalLMWithValueHead,
17 AutoModelForSeq2SeqLMWithValueHead,
18 PreTrainedModelWrapper,
19 create_reference_model,
20 )
---> 21 from .trainer import (
22 DataCollatorForCompletionOnlyLM,
23 DPOTrainer,
24 IterativeSFTTrainer,
25 PPOConfig,
26 PPOTrainer,
27 RewardConfig,
28 RewardTrainer,
29 SFTTrainer,
30 )
33 if is_diffusers_available():
34 from .models import (
35 DDPOPipelineOutput,
36 DDPOSchedulerOutput,
37 DDPOStableDiffusionPipeline,
38 DefaultDDPOStableDiffusionPipeline,
39 )
File /opt/conda/lib/python3.10/site-packages/trl/trainer/__init__.py:44
42 from .ppo_trainer import PPOTrainer
43 from .reward_trainer import RewardTrainer, compute_accuracy
---> 44 from .sft_trainer import SFTTrainer
45 from .training_configs import RewardConfig
File /opt/conda/lib/python3.10/site-packages/trl/trainer/sft_trainer.py:23
21 import torch.nn as nn
22 from datasets import Dataset
---> 23 from datasets.arrow_writer import SchemaInferenceError
24 from datasets.builder import DatasetGenerationError
25 from transformers import (
26 AutoModelForCausalLM,
27 AutoTokenizer,
(...)
33 TrainingArguments,
34 )
ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
````
transformers version: 4.36.2
python version: 3.10.12
datasets version: 2.16.1
### Steps to reproduce the bug
1. Install packages
```
!pip install -U datasets trl accelerate peft bitsandbytes transformers trl huggingface_hub
```
2. import packages
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
from huggingface_hub import login
import pandas as pd
```
### Expected behavior
No error while importing
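A minimal sanity check (sketch) that a fresh runtime picked up the upgraded `datasets`; these are the two names `trl` tries to import:
```python
# run in a fresh kernel after `%pip install -U datasets`
from datasets.arrow_writer import SchemaInferenceError
from datasets.builder import DatasetGenerationError

print("imports OK")
```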
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.1
- PyArrow version: 11.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/131662185?v=4",
"events_url": "https://api.github.com/users/Sonali-Behera-TRT/events{/privacy}",
"followers_url": "https://api.github.com/users/Sonali-Behera-TRT/followers",
"following_url": "https://api.github.com/users/Sonali-Behera-TRT/following{/other_user}",
"gists_url": "https://api.github.com/users/Sonali-Behera-TRT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sonali-Behera-TRT",
"id": 131662185,
"login": "Sonali-Behera-TRT",
"node_id": "U_kgDOB9kBaQ",
"organizations_url": "https://api.github.com/users/Sonali-Behera-TRT/orgs",
"received_events_url": "https://api.github.com/users/Sonali-Behera-TRT/received_events",
"repos_url": "https://api.github.com/users/Sonali-Behera-TRT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sonali-Behera-TRT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sonali-Behera-TRT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sonali-Behera-TRT",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6538/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6537/comments | https://api.github.com/repos/huggingface/datasets/issues/6537/events | https://github.com/huggingface/datasets/issues/6537 | 2,057,132,173 | I_kwDODunzps56nViN | 6,537 | Adding support for netCDF (*.nc) files | {
"avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4",
"events_url": "https://api.github.com/users/shermansiu/events{/privacy}",
"followers_url": "https://api.github.com/users/shermansiu/followers",
"following_url": "https://api.github.com/users/shermansiu/following{/other_user}",
"gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shermansiu",
"id": 12627125,
"login": "shermansiu",
"node_id": "MDQ6VXNlcjEyNjI3MTI1",
"organizations_url": "https://api.github.com/users/shermansiu/orgs",
"received_events_url": "https://api.github.com/users/shermansiu/received_events",
"repos_url": "https://api.github.com/users/shermansiu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shermansiu",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Related to #3113 ",
"Conceptually, we can use xarray to load the netCDF file, then xarray -> pandas -> pyarrow.",
"I'd still need to verify that such a conversion would be lossless, especially for multi-dimensional data."
] | 1970-01-01T00:00:00.000001 | 1,703 | null | NONE | null | ### Feature request
netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`.
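A rough sketch of the xarray -> pandas -> Arrow path (assuming `xarray` plus a netCDF backend such as `netCDF4` are installed and the file fits in memory; whether this is lossless for multi-dimensional variables still needs verification):
```python
import xarray as xr
from datasets import Dataset

nc = xr.open_dataset("example.nc")     # hypothetical file
df = nc.to_dataframe().reset_index()   # dimensions become ordinary columns
ds = Dataset.from_pandas(df)           # pandas -> Arrow-backed Dataset
```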
### Motivation
When uploading *.nc files onto Huggingface Hub through the `datasets` API, I would like to be able to preview the dataset without converting it to another format.
### Your contribution
I can submit a PR, provided I have the time. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6537/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6537/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6536/comments | https://api.github.com/repos/huggingface/datasets/issues/6536/events | https://github.com/huggingface/datasets/issues/6536 | 2,056,863,239 | I_kwDODunzps56mT4H | 6,536 | datasets.load_dataset raises FileNotFoundError for datasets==2.16.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/46237844?v=4",
"events_url": "https://api.github.com/users/ArvinZhuang/events{/privacy}",
"followers_url": "https://api.github.com/users/ArvinZhuang/followers",
"following_url": "https://api.github.com/users/ArvinZhuang/following{/other_user}",
"gists_url": "https://api.github.com/users/ArvinZhuang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArvinZhuang",
"id": 46237844,
"login": "ArvinZhuang",
"node_id": "MDQ6VXNlcjQ2MjM3ODQ0",
"organizations_url": "https://api.github.com/users/ArvinZhuang/orgs",
"received_events_url": "https://api.github.com/users/ArvinZhuang/received_events",
"repos_url": "https://api.github.com/users/ArvinZhuang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArvinZhuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArvinZhuang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArvinZhuang",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null | [
"Hi ! Thanks for reporting\r\n\r\nThis is a bug in 2.16.0 for some datasets when `cache_dir` is a relative path. I opened https://github.com/huggingface/datasets/pull/6543 to fix this",
"We just released 2.16.1 with a fix:\r\n\r\n```\r\npip install -U datasets\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,703 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
It seems that `datasets.load_dataset` raises FileNotFoundError for some Hub datasets with the latest `datasets==2.16.0`.
### Steps to reproduce the bug
For example `pip install datasets==2.16.0`
then
```python
import datasets
datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache1')["train"]
```
This will raise:
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/load.py", line 2545, in load_dataset
builder_instance.download_and_prepare(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1003, in download_and_prepare
self._download_and_prepare(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1076, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 43, in _split_generators
data_files = dl_manager.download_and_extract(self.config.data_files)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 566, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 539, in extract
extracted_paths = map_nested(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 466, in map_nested
mapped = [
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 467, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 370, in _single_map_nested
return function(data_struct)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 451, in _download
out = cached_path(url_or_filename, download_config=download_config)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 188, in cached_path
output_path = get_from_cache(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 570, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wentingzhao/anthropic-hh-first-prompt/resolve/11b393a5545f706a357ebcd4a5285d93db176715/cache1/downloads/87d66c365626feca116cba323c4856c9aae056e4503f09f23e34aa085eb9de15
```
However, it seems to work fine for some datasets; for example, `datasets.load_dataset("ag_news", cache_dir='cache2')["test"]` works without issues.
The dataset also works fine with datasets==2.15.0, for example `pip install datasets==2.15.0`,
then
```python
import datasets
datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache3')["train"]
Dataset({
features: ['user', 'system', 'source'],
num_rows: 8552
})
```
### Expected behavior
2.16.0 should work the same as 2.15.0 for all datasets
### Environment info
python3.9
conda env
tested on MacOS and Linux | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6536/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6536/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6535 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6535/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6535/comments | https://api.github.com/repos/huggingface/datasets/issues/6535/events | https://github.com/huggingface/datasets/issues/6535 | 2,056,264,339 | I_kwDODunzps56kBqT | 6,535 | IndexError: Invalid key: 47682 is out of bounds for size 0 while using PEFT | {
"avatar_url": "https://avatars.githubusercontent.com/u/57484266?v=4",
"events_url": "https://api.github.com/users/MahavirDabas18/events{/privacy}",
"followers_url": "https://api.github.com/users/MahavirDabas18/followers",
"following_url": "https://api.github.com/users/MahavirDabas18/following{/other_user}",
"gists_url": "https://api.github.com/users/MahavirDabas18/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MahavirDabas18",
"id": 57484266,
"login": "MahavirDabas18",
"node_id": "MDQ6VXNlcjU3NDg0MjY2",
"organizations_url": "https://api.github.com/users/MahavirDabas18/orgs",
"received_events_url": "https://api.github.com/users/MahavirDabas18/received_events",
"repos_url": "https://api.github.com/users/MahavirDabas18/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MahavirDabas18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MahavirDabas18/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MahavirDabas18",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@sabman @pvl @kashif @vigsterkr ",
"This is surely the same issue as https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/25 that comes from the `transformers` `Trainer`. You should add `remove_unused_columns=False` to `TrainingArguments`\r\n\r\nAlso check your logs: the `Trainer` should log the length of your dataset before training starts and it surely showed length=0.",
"the same error \r\nIndexError: Invalid key: 22330 is out of bounds for size 0"
] | 1970-01-01T00:00:00.000001 | 1,707 | null | NONE | null | ### Describe the bug
I am trying to fine-tune the t5 model on the paraphrasing task. While running the same code without `model = get_peft_model(model, config)`, the model trains without any issues. However, using the model returned from get_peft_model raises the following error from datasets:
IndexError: Invalid key: 47682 is out of bounds for size 0.
I had raised this in https://github.com/huggingface/peft/issues/1299#issue-2056173386 and they suggested that I raise it here.
Here is the complete error-
IndexError Traceback (most recent call last)
in <cell line: 1>()
----> 1 trainer.train()
11 frames
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1553 hf_hub_utils.enable_progress_bars()
1554 else:
-> 1555 return inner_training_loop(
1556 args=args,
1557 resume_from_checkpoint=resume_from_checkpoint,
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1836
1837 step = -1
-> 1838 for step, inputs in enumerate(epoch_iterator):
1839 total_batched_samples += 1
1840 if rng_to_sync:
[/usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py](https://localhost:8080/#) in __iter__(self)
446 # We iterate one batch ahead to check when we are at the end
447 try:
--> 448 current_batch = next(dataloader_iter)
449 except StopIteration:
450 yield
[/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in next(self)
628 # TODO(https://github.com/pytorch/pytorch/issues/76750)
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
631 self._num_yielded += 1
632 if self._dataset_kind == _DatasetKind.Iterable and \
[/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in _next_data(self)
672 def _next_data(self):
673 index = self._next_index() # may raise StopIteration
--> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
675 if self._pin_memory:
676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
[/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py](https://localhost:8080/#) in fetch(self, possibly_batched_index)
47 if self.auto_collation:
48 if hasattr(self.dataset, "__getitems__") and self.dataset.__getitems__:
---> 49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
51 data = [self.dataset[idx] for idx in possibly_batched_index]
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in __getitems__(self, keys)
2802 def __getitems__(self, keys: List) -> List:
2803 """Can be used to get a batch using a list of integers indices."""
-> 2804 batch = self.__getitem__(keys)
2805 n_examples = len(batch[next(iter(batch))])
2806 return [{col: array[i] for col, array in batch.items()} for i in range(n_examples)]
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in __getitem__(self, key)
2798 def __getitem__(self, key): # noqa: F811
2799 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2800 return self._getitem(key)
2801
2802 def __getitems__(self, keys: List) -> List:
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _getitem(self, key, **kwargs)
2782 format_kwargs = format_kwargs if format_kwargs is not None else {}
2783 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs)
-> 2784 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
2785 formatted_output = format_table(
2786 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in query_table(table, key, indices)
581 else:
582 size = indices.num_rows if indices is not None else table.num_rows
--> 583 _check_valid_index_key(key, size)
584 # Query the main table
585 if indices is None:
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size)
534 elif isinstance(key, Iterable):
535 if len(key) > 0:
--> 536 _check_valid_index_key(int(max(key)), size=size)
537 _check_valid_index_key(int(min(key)), size=size)
538 else:
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size)
524 if isinstance(key, int):
525 if (key < 0 and key + size < 0) or (key >= size):
--> 526 raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
527 return
528 elif isinstance(key, slice):
IndexError: Invalid key: 47682 is out of bounds for size 0
### Steps to reproduce the bug
device = "cuda:0" if torch.cuda.is_available() else "cpu"
#defining model name for tokenizer and model loading
model_name= "t5-small"
#loading the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
def preprocess_function(data, tokenizer):
inputs = [f"Paraphrase this sentence: {doc}" for doc in data["text"]]
model_inputs = tokenizer(inputs, max_length=150, truncation=True)
labels = [ast.literal_eval(i)[0] for i in data['paraphrases']]
labels = tokenizer(labels, max_length=150, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
train_dataset = load_dataset("humarin/chatgpt-paraphrases", split="train").shuffle(seed=42).select(range(50000))
val_dataset = load_dataset("humarin/chatgpt-paraphrases", split="train").shuffle(seed=42).select(range(50000,55000))
tokenized_train = train_dataset.map(lambda batch: preprocess_function(batch, tokenizer), batched=True)
tokenized_val = val_dataset.map(lambda batch: preprocess_function(batch, tokenizer), batched=True)
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
config = LoraConfig(
r=16, #attention heads
lora_alpha=32, #alpha scaling
lora_dropout=0.05,
bias="none",
task_type="Seq2Seq"
)
#loading the model
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
model = get_peft_model(model, config)
print_trainable_parameters(model)
#loading the data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer=tokenizer,
model=model,
label_pad_token_id=-100,
padding="longest"
)
#defining the training arguments
training_args = Seq2SeqTrainingArguments(
output_dir=os.getcwd(),
evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=1e-3,
save_total_limit=3,
load_best_model_at_end=True,
num_train_epochs=1,
predict_with_generate=True
)
def compute_metric_with_extra(tokenizer):
def compute_metrics(eval_preds):
metric = evaluate.load('rouge')
preds, labels = eval_preds
# decode preds and labels
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# rougeLSum expects newline after each sentence
decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
return result
return compute_metrics
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized_train,
eval_dataset=tokenized_val,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics= compute_metric_with_extra(tokenizer)
)
trainer.train()
### Expected behavior
I would want the trainer to train normally, as it did before I used:
model = get_peft_model(model, config)
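A possible fix, per the `remove_unused_columns` suggestion above, is to keep the Trainer from dropping the tokenized columns; a minimal sketch of the changed arguments (identical to the code above except for the added line):
```python
training_args = Seq2SeqTrainingArguments(
    output_dir=os.getcwd(),
    remove_unused_columns=False,  # keep the tokenized columns so the train dataset is not emptied
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=1e-3,
    save_total_limit=3,
    load_best_model_at_end=True,
    num_train_epochs=1,
    predict_with_generate=True
)
```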
### Environment info
datasets version- 2.16.0
peft version- 0.7.1
transformers version- 4.35.2
accelerate version- 0.25.0
python- 3.10.12
enviroment- google colab | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6535/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6535/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6534/comments | https://api.github.com/repos/huggingface/datasets/issues/6534/events | https://github.com/huggingface/datasets/issues/6534 | 2,056,002,548 | I_kwDODunzps56jBv0 | 6,534 | How to configure multiple folders in the same zip package | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@albertvillanova"
] | 1970-01-01T00:00:00.000001 | 1,703 | null | CONTRIBUTOR | null | How should I write the "config" section in the README when all the data, such as the train and test splits, is in a single zip file?
The train folder and the test folder are both inside data.zip. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6534/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6534/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6533 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6533/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6533/comments | https://api.github.com/repos/huggingface/datasets/issues/6533/events | https://github.com/huggingface/datasets/issues/6533 | 2,055,929,101 | I_kwDODunzps56iv0N | 6,533 | ted_talks_iwslt | Error: Config name is missing | {
"avatar_url": "https://avatars.githubusercontent.com/u/35850903?v=4",
"events_url": "https://api.github.com/users/rayliuca/events{/privacy}",
"followers_url": "https://api.github.com/users/rayliuca/followers",
"following_url": "https://api.github.com/users/rayliuca/following{/other_user}",
"gists_url": "https://api.github.com/users/rayliuca/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rayliuca",
"id": 35850903,
"login": "rayliuca",
"node_id": "MDQ6VXNlcjM1ODUwOTAz",
"organizations_url": "https://api.github.com/users/rayliuca/orgs",
"received_events_url": "https://api.github.com/users/rayliuca/received_events",
"repos_url": "https://api.github.com/users/rayliuca/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rayliuca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rayliuca/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rayliuca",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null | [
"Hi ! Thanks for reporting. I opened https://github.com/huggingface/datasets/pull/6544 to fix this",
"We just released 2.16.1 with a fix:\r\n\r\n```\r\npip install -U datasets\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,703 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Running `load_dataset` with the newest `datasets` library, as shown below, on ted_talks_iwslt with language-pair/year data throws the error "Config name is missing".
see also:
https://huggingface.co/datasets/ted_talks_iwslt/discussions/3
likely caused by #6493, where the `and not config_kwargs` part in the if logic was removed
https://github.com/huggingface/datasets/blob/ef3b5dd3633995c95d77f35fb17f89ff44990bc4/src/datasets/builder.py#L512
### Steps to reproduce the bug
run:
```python
load_dataset("ted_talks_iwslt", language_pair=("ja", "en"), year="2015")
```
### Expected behavior
Load the data without error
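For reference, per the replies above the same call should work again once the patched release is installed (sketch):
```python
# after: pip install -U "datasets>=2.16.1"
from datasets import load_dataset

ds = load_dataset("ted_talks_iwslt", language_pair=("ja", "en"), year="2015")
```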
### Environment info
datasets 2.16.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6533/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6533/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6532/comments | https://api.github.com/repos/huggingface/datasets/issues/6532/events | https://github.com/huggingface/datasets/issues/6532 | 2,055,631,201 | I_kwDODunzps56hnFh | 6,532 | [Feature request] Indexing datasets by a customly-defined id field to enable random access dataset items via the id | {
"avatar_url": "https://avatars.githubusercontent.com/u/3377221?v=4",
"events_url": "https://api.github.com/users/Yu-Shi/events{/privacy}",
"followers_url": "https://api.github.com/users/Yu-Shi/followers",
"following_url": "https://api.github.com/users/Yu-Shi/following{/other_user}",
"gists_url": "https://api.github.com/users/Yu-Shi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Yu-Shi",
"id": 3377221,
"login": "Yu-Shi",
"node_id": "MDQ6VXNlcjMzNzcyMjE=",
"organizations_url": "https://api.github.com/users/Yu-Shi/orgs",
"received_events_url": "https://api.github.com/users/Yu-Shi/received_events",
"repos_url": "https://api.github.com/users/Yu-Shi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Yu-Shi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yu-Shi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Yu-Shi",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"You can simply use a python dict as index:\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> ds = load_dataset(\"BeIR/dbpedia-entity\", \"corpus\", split=\"corpus\")\r\n>>> index = {key: idx for idx, key in enumerate(ds[\"_id\"])}\r\n>>> ds[index[\"<dbpedia:Pikachu>\"]]\r\n{'_id': '<dbpedia:Pikachu>',\r\n 'title': 'Pikachu',\r\n 'text': 'Pikachu (Japanese: ピカチュウ) are a fictional species of Pokémon. Pokémon are fictional creatures that appear in an assortment of comic books, animated movies and television shows, video games, and trading card games licensed by The Pokémon Company, a Japanese corporation. The Pikachu design was conceived by Ken Sugimori.'}\r\n```",
"Thanks for your reply. Yes, I can do that, but it is time-consuming to do that every time I launch the program (some datasets are extremely big). HF Datasets has a nice feature to support instant data loading and efficient random access via row ids. I'm curious if this beneficial feature could be further extended to custom data columns.\r\n",
"+1 on the issue I think it would be extremely useful",
"+1. This could be very useful.",
"+1 - currently having to manually implement this",
"If anyone has an idea how to do this in the right way (perhaps @albertvillanova ?) I would be happy to implement it",
"This would be very helpful to implement aspect ratio bucketing for image and video datasets"
] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Feature request
Some datasets may contain an id-like field, for example the `id` field in [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and the `_id` field in [BeIR/dbpedia-entity](https://huggingface.co/datasets/BeIR/dbpedia-entity). HF datasets support efficient random access by row, but not via these kinds of id fields. I wonder if it is possible to add support for indexing by a custom "id-like" field to enable random access via such ids. The ids may be numbers or strings.
### Motivation
In some cases, especially during inference/evaluation, I may want to find out the item that has a specified id, defined by the dataset itself.
For example, in a typical re-ranking setting in information retrieval, the user may want to re-rank the set of candidate documents of each query. The input is usually presented in a TREC-style run file, with the following format:
```
<qid> Q0 <docno> <rank> <score> <tag>
```
The re-ranking program should be able to fetch the queries and documents according to the `<qid>` and `<docno>`, which are the original ids defined in the query/document datasets. To accomplish this, I currently have to iterate over the whole HF dataset to build the mapping from real ids to row ids every time I start the program, which is time-consuming (a sketch of this workaround is below). Thus I would like HF datasets to provide an option for users to index by a custom id column, not only by row.
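As a stopgap, the mapping can at least be built once and cached so it is not recomputed at every launch; a rough sketch (the cache file name is hypothetical, and the dict-based index follows the suggestion above):
```python
import json
import os
from datasets import load_dataset

ds = load_dataset("BeIR/dbpedia-entity", "corpus", split="corpus")

index_path = "dbpedia_id_index.json"  # hypothetical cache file
if os.path.exists(index_path):
    with open(index_path) as f:
        index = json.load(f)
else:
    index = {key: i for i, key in enumerate(ds["_id"])}
    with open(index_path, "w") as f:
        json.dump(index, f)

doc = ds[index["<dbpedia:Pikachu>"]]
```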
### Your contribution
I'm not an expert in this project and I'm afraid that I'm not able to make contributions on the code. | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6532/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6532/timeline | null | null | null | null | false | null |