Modalities: Text
Formats: parquet
Sub-tasks: slot-filling
Languages: English
Size: 10K - 100K
License:
Update files from the datasets library (from 1.18.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.18.0
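The metadata above describes an English text corpus stored as parquet, tagged with a slot-filling sub-task and sized at 10K - 100K examples. As a minimal sketch of how such a dataset is typically consumed with the `datasets` library (>= 1.18.0), assuming it is published on the Hub under the id `youtube_caption_corrections` (an assumption, not stated on this page):

```python
# Minimal loading sketch; the Hub id "youtube_caption_corrections" is assumed.
from datasets import load_dataset

ds = load_dataset("youtube_caption_corrections", split="train")
print(ds)      # column names and row count (the card reports 10K - 100K examples)
print(ds[0])   # inspect a single example
```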
README.md
CHANGED
@@ -21,6 +21,7 @@ task_ids:
 - other-other-token-classification-of-text-errors
 - slot-filling
 paperswithcode_id: null
+pretty_name: YouTube Caption Corrections
 ---
 
 # Dataset Card for YouTube Caption Corrections
@@ -172,4 +173,4 @@ https://github.com/2dot71mily/youtube_captions_corrections
 
 ### Contributions
 
-Thanks to [@2dot71mily](https://github.com/2dot71mily) for adding this dataset.
+Thanks to [@2dot71mily](https://github.com/2dot71mily) for adding this dataset.
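The substantive change in the first hunk is the new `pretty_name` key in the card's YAML frontmatter. As a hedged sketch of how that frontmatter can be read back programmatically, assuming the `huggingface_hub` library and the same (assumed) repo id:

```python
# Sketch only; the repo id "youtube_caption_corrections" is an assumption.
from huggingface_hub import DatasetCard

card = DatasetCard.load("youtube_caption_corrections")  # fetches the card's README.md
meta = card.data.to_dict()                              # parsed YAML frontmatter
print(meta.get("pretty_name"))   # expected: "YouTube Caption Corrections"
print(meta.get("task_ids"))      # includes "slot-filling" per the diff above
```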