datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---|
hssd/hssd-hab | hssd | "2025-02-14T02:19:58Z" | 12,363 | 35 | [
"language:en",
"license:cc-by-nc-4.0",
"region:us",
"3D scenes",
"Embodied AI"
] | null | "2023-06-04T18:59:50Z" | ---
language:
- en
pretty_name: HSSD
tags:
- 3D scenes
- Embodied AI
license: cc-by-nc-4.0
extra_gated_heading: "Acknowledge license to accept the repository"
extra_gated_prompt: "You agree to use this dataset under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/) terms"
viewer: false
---
HSSD: Habitat Synthetic Scenes Dataset
==================================
The [Habitat Synthetic Scenes Dataset (HSSD)](https://3dlg-hcvc.github.io/hssd/) is a human-authored 3D scene dataset that more closely mirrors real scenes than prior datasets.
Our dataset represents real interiors and contains a diverse set of 211 scenes and more than 18000 models of real-world objects.
<img src="https://i.imgur.com/XEkLxNs.png" width=50%>
This repository provides a Habitat consumption-ready compressed version of HSSD.
See [this repository](https://huggingface.co/datasets/hssd/hssd-models) for corresponding uncompressed assets.
## Dataset Structure
```
├── objects
│   ├── */*.glb
│   ├── */*.collider.glb
│   ├── */*.filteredSupportSurface(.ply|.glb)
│   ├── */*.object_config.json
├── stages
│   ├── *.glb
│   ├── *.stage_config.json
├── scenes
│   ├── *.scene_instance.json
├── scenes_uncluttered
│   ├── *.scene_instance.json
├── scenes_articulated
│   ├── *.scene_instance.json
├── scene_filter_files
│   ├── *.rec_filter.json
├── metadata
│   ├── *.csv
│   ├── *.json
├── semantics
│   ├── hssd-hab_semantic_lexicon.json
│   ├── scenes
│   │   ├── *.semantic_config.json
├── urdf
│   ├── <model_name>
│   │   ├── *.glb
│   │   ├── *.urdf
│   │   ├── *.ao_config.json
├── hssd-hab.scene_dataset_config.json
├── hssd-hab-uncluttered.scene_dataset_config.json
└── hssd-hab-articulated.scene_dataset_config.json
```
- `hssd-hab.scene_dataset_config.json`: This SceneDataset config file aggregates the assets and metadata necessary to fully describe the set of stages, objects, and scenes constituting the dataset.
- `objects`: 3D models representing distinct objects that are used to compose scenes. Contains configuration files, render assets, collider assets, and Receptacle mesh assets.
- `stages`: A stage in Habitat is the set of static mesh components which make up the backdrop of a scene (e.g. floor, walls, stairs, etc.).
- `scenes`: A scene is a single 3D world composed of a static stage and a variable number of objects.
- `scene_filter_files`: These .rec_filter.json files contain mappings of Receptacle instance unique_names to active or filtered sets based on their locations and accessibility within the scene. They also contain a "within_set" defining Receptacles which can only be accessed when the parent Furniture object's "default_link" is in the "open" state.
- `metadata`: The metadata directory contains several csv and json files which provide semantic mappings for objects in the dataset as well as rational mappings from regions to the types of clutter objects typically found in them to support procedural generation.
- `semantics`: Primarily defines instance semantics for the scenes. *.semantic_config.json files contain the region annotations for each scene.
- `urdf`: The urdf directory contains the articulated furniture assets, each contained in its own sub-directory named after the source asset. The .urdf files define the articulation properties. Each .glb file is either a render asset or Receptacle mesh connected to a rigid link. The .ao_config.json file contains habitat-specific metadata such as markersets and Receptacle definitions.
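As a quick sanity check of a local copy, the layout above can be traversed directly. A minimal sketch (the directory name `scenes` and the `*.scene_instance.json` suffix follow the tree shown here):

```python
from pathlib import Path

def list_scene_ids(dataset_root):
    """Enumerate scene ids from the scenes/ directory of the layout
    above (each scene is a *.scene_instance.json file)."""
    scenes_dir = Path(dataset_root) / "scenes"
    return sorted(
        p.name.removesuffix(".scene_instance.json")
        for p in scenes_dir.glob("*.scene_instance.json")
    )
```

The same pattern applies to `scenes_uncluttered` and `scenes_articulated` by swapping the directory name.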
### Rearrange-ready assets:
Supporting Habitat 3.0 embodied rearrangement tasks with updated colliders, adjusted and de-cluttered scene contents, receptacle meshes, and receptacle filter files. See [aihabitat.org/habitat3/](https://aihabitat.org/habitat3/) for more details.
- `hssd-hab-uncluttered.scene_dataset_config.json`: This SceneDataset config file adds the adjusted and uncluttered scenes for rearrangement tasks.
- `scenes_uncluttered`: Contains the adjusted scene instance configuration files.
- `scene_filter_files`: A scene filter file organizes available Receptacle instances in a scene into active and inactive groups based on simulation heuristics and manual edits. It is consumed by the RearrangeEpisodeGenerator to construct valid RearrangeEpisodeDatasets.
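A minimal sketch of consuming such a filter file. Note the key names inside `.rec_filter.json` (`"active"`, `"within_set"`) are assumed here from the description above, and any other list-valued key is treated as a filtered set; verify against an actual file before relying on this:

```python
import json

def split_receptacles(filter_path):
    """Split Receptacle unique_names from a .rec_filter.json into
    active, within (require opening the parent link), and filtered
    groups. Key names are assumptions based on the card text."""
    with open(filter_path) as f:
        data = json.load(f)
    active = set(data.get("active", []))
    within = set(data.get("within_set", []))
    # Everything listed under any other list-valued key is filtered out.
    filtered = {
        name
        for key, value in data.items()
        if key not in ("active", "within_set") and isinstance(value, list)
        for name in value
    }
    return active, within, filtered
```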
### Articulated scenes and assets:
Introduced in `v0.3.0`, the `hssd-hab-articulated.scene_dataset_config.json` SceneDataset provides 202 fully articulated HSSD scenes ready for use within the AI Habitat simulation ecosystem. Note that only 161 are publicly available on this repo. The remainder and their unique assets are reserved as an internal test set.
To enable more realistic indoor object manipulation, articulated 3D furniture models such as drawers, cabinets, and appliances were added to replace rigid assets. These models were converted from rigid source assets in HSSD and swapped into the scenes.
Furniture is annotated with a set of Receptacles (surfaces which support small object placement such as shelves and drawers) and can be opened and closed by the agents. Receptacles are further filtered contextually in each scene to ensure that the active set is accessible to the agents.
Additional annotations include point or marker sets for each furniture, region annotations, and semantic classification of objects.
## Getting Started
To load HSSD scenes into the Habitat simulator, you can start by installing [habitat-sim](https://github.com/facebookresearch/habitat-sim) using instructions specified [here](https://github.com/facebookresearch/habitat-sim#installation).
Once installed, you can run the interactive Habitat viewer to load a scene:
```
habitat-viewer --dataset /path/to/hssd-hab/hssd-hab.scene_dataset_config.json -- 102344280
# or ./build/viewer if compiling from source
```
You can find more information about using the interactive viewer [here](https://github.com/facebookresearch/habitat-sim#testing:~:text=path/to/data/-,Interactive%20testing,-%3A%20Use%20the%20interactive).
Habitat-Sim is typically used with [Habitat-Lab](https://github.com/facebookresearch/habitat-lab), a modular high-level library for end-to-end experiments in embodied AI.
To define embodied AI tasks (e.g. navigation, instruction following, question answering), train agents, and benchmark their performance using standard metrics, you can download habitat-lab using the instructions provided [here](https://github.com/facebookresearch/habitat-lab#installation).
## Changelog
- `v0.3.0`: **Articulated Scenes and PARTNR support**
- This major version update adds a large set of changes to support the introduction of 202 articulated HSSD scenes and the [PARTNR benchmark](https://github.com/facebookresearch/partnr-planner).
- Includes improvements to stage texture/geometry and object collision shapes and receptacles.
- Adds:
- 2000+ articulated assets in the urdf/ directory representing and replacing rigid furniture objects. Annotated with Receptacles and semantics.
- 202 new articulated scenes with rigid objects replaced by AOs. These are uncluttered and often significantly altered from originals to accommodate the new assets.
- Note that test scenes and assets are removed before migration to this repo.
- Receptacle filter files for new scenes annotating accessible Receptacles and "within" Receptacles (those which require opening an articulated link for access).
- Note that only one link per AO is configured with an active Receptacle. This is based on logic in PARTNR and habitat-lab (default_link).
- Region volume semantic annotations to all scenes
- Semantic lexicon file with updated classes
- Metadata files mapping object semantics and common-sense object->region sets for PARTNR
- `v0.2.5`: **Rearrange-ready HSSD**
- Note: this is a checkpoint. Known issues exist and continued polish is ongoing.
- Adds Receptacle meshes describing support surfaces for small objects (e.g. table or shelf surfaces).
- Adds collider meshes (.collider.glb) for assets with Receptacle meshes to support simulation.
- Adds new scenes 'scenes_uncluttered' and new SceneDataset 'hssd-hab-uncluttered' containing adjusted and de-cluttered versions of the scenes for use in embodied rearrangement tasks.
- Adds 'scene_filter_files' which sort Receptacles in each scene into active and inactive groups for RearrangeEpisode generation.
- `v0.2.4`:
- Recompresses several object GLBs to preserve PBR material status.
- Adds CSV with object metadata and semantic lexicon files for Habitat.
- Adds train/val scene splits file.
- `v0.2.3`: First release.
|
neashton/ahmedml | neashton | "2025-02-15T15:42:44Z" | 12,327 | 1 | [
"license:cc-by-sa-4.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2407.20801",
"region:us"
] | null | "2025-02-10T11:19:24Z" | ---
license: cc-by-sa-4.0
---
AhmedML: High-Fidelity Computational Fluid Dynamics dataset for incompressible, low-speed bluff body aerodynamics
-------
Contact:
----------
Neil Ashton (NVIDIA) - [email protected]
website:
----------
https://caemldatasets.org
Summary:
-------
This dataset contains 500 geometric variations of the Ahmed car body - a simplified car-like shape that exhibits many of the flow topologies present on bluff bodies such as road vehicles. The geometries cover fundamental flow physics such as geometry- and pressure-induced flow separation as well as 3D vortical structures. Each variation of the Ahmed car body was run using a time-accurate hybrid Reynolds-Averaged Navier-Stokes (RANS) / Large-Eddy Simulation (LES) turbulence modelling approach in the open-source CFD code OpenFOAM. The dataset contains surface boundary data, 3D volume fields, geometry STLs, and forces/moments in open-source formats (.vtu, .vtp).
CFD Solver:
----------
All cases were run using the open-source finite-volume code OpenFOAM v2212. Each case was run transiently for approximately 80 convective time units (CTU) on meshes of approximately 20M cells. Please see the paper for full details on the code and validation:
How to cite this dataset:
----------------
In order to cite the use of this dataset please cite the paper below which contains full details on the dataset. It can be found here: https://arxiv.org/abs/2407.20801
@article{ashton2024ahmed,
title = {{AhmedML: High-Fidelity Computational Fluid Dynamics dataset for incompressible, low-speed bluff body aerodynamics}},
year = {2024},
journal = {arxiv.org},
author = {Ashton, Neil and Maddix, Danielle and Gundry, Samuel and Shabestari, Parisa}
}
Files:
-------
Each folder (e.g. run_1, run_2, ..., run_i) corresponds to a different geometry and contains the following files, where "i" is the run number:
* ahmed_i.stl : geometry STL (~5 MB)
* geo_parameters_1.csv (missing run 500): parameters that define the geometry
* boundary_i.vtp : Boundary VTP (~500mb)
* volume_i.vtu : Volume field VTU (~5GB)
* force_mom_i.csv : forces (Cd,Cl) time-averaged with constant reference area
* force_mom_varref_i.csv : forces (Cd,Cl) time-averaged with varying reference area
* slices : folder containing .vtp slices in x,y,z that contain flow-field variables
* images : folder containing images of the following variables (CpT, UxMean) for slices of the domain in the X, Y & Z locations
In addition we provide:
* force_mom_all.csv : run, cd,cl for all runs in a single file
* force_mom_varref_all.csv : run, cd,cl for all runs in a single file with varying reference area
* geo_parameters_all.csv : all the geometry parameters for each run inside a single file
* ahmedml.slvs : SolveSpace input file to create the parametric geometries
* stl : folder containing stl files that were used as inputs to the OpenFOAM process
* openfoam-casesetup.tgz : complete OpenFOAM setup that can be used to extend or reproduce the dataset
* validation : folder containing full outputs from all four mesh levels that were used to validate the methodology
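For example, the global force file can be reduced to mean coefficients with nothing more than the standard library. A sketch assuming the columns are named `run`, `cd` and `cl` as described above; adjust if the actual header differs:

```python
import csv

def mean_force_coefficients(path):
    """Return (mean Cd, mean Cl) over all runs in force_mom_all.csv.

    Column names ("cd", "cl") are assumed from the file description.
    """
    cds, cls_ = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cds.append(float(row["cd"]))
            cls_.append(float(row["cl"]))
    return sum(cds) / len(cds), sum(cls_) / len(cls_)
```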
Acknowledgements
-----------
* OpenFOAM solver and workflow development by Neil Ashton (Amazon Web Services, now NVIDIA)
* Geometry parameterization by Samuel Gundry (Amazon Web Services) and Parisa Shabestari (Amazon Web Services)
* Guidance on dataset preparation for ML by Danielle Maddix (Amazon Web Services)
* Simulation runs, HPC setup and dataset preparation by Neil Ashton (Amazon Web Services, now NVIDIA)
License
----
This dataset is provided under the CC BY-SA 4.0 license; please see LICENSE.txt for the full license text.
version history:
---------------
* 15/02/2025 - files uploaded to HuggingFace
* 12/11/2024 - added validation folder that contains the full output from all four mesh levels that were used to validate the methodology used.
* 04/08/2024 - updates to the file description and arxiv paper
* 05/06/2024 - global forces/geo added for all runs
* 01/05/2024 - force/moments corrected (prior version had incorrect Cs data)
* 18/04/2024 - draft version produced
|
Tuxifan/UbuntuIRC | Tuxifan | "2023-06-04T15:35:31Z" | 12,259 | 0 | [
"task_categories:text-generation",
"license:cc0-1.0",
"size_categories:1M<n<10M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | "2023-06-02T22:48:40Z" | ---
license: cc0-1.0
task_categories:
- text-generation
pretty_name: Ubuntu IRC channels
---
Completely uncurated collection of IRC logs from the Ubuntu IRC channels |
wendlerc/RenderedText | wendlerc | "2023-07-12T09:28:10Z" | 12,246 | 41 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"language:en",
"size_categories:10M<n<100M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us",
"OCR",
"blender",
"LAION",
"Stability"
] | [
"text-to-image",
"image-to-text"
] | "2023-06-26T11:26:16Z" | ---
task_categories:
- text-to-image
- image-to-text
language:
- en
tags:
- OCR
- blender
- LAION
- Stability
size_categories:
- 10M<n<100M
---
*This dataset has been created by Stability AI and LAION.*
This dataset contains 12 million 1024x1024 images of handwritten text written on a digital 3D sheet of paper generated using Blender geometry nodes and rendered using Blender Cycles. The text has varying font size, color, and rotation, and the paper was rendered under random lighting conditions.
Note that the first 10 million examples are in the root folder of this dataset repository and the remaining 2 million are in ./remaining (due to the constraint on the number of files per directory).
It was generated with the script https://github.com/GbotHQ/ocr-dataset-rendering/, which utilizes:
- ~8000 fonts from https://www.urbanfonts.com/free-fonts.htm and https://www.fontspace.com/
- 643 CC0 HDRIs from https://polyhaven.com/
- 1837 CC0 PBR materials from https://ambientcg.com/
- random sentences sampled from https://huggingface.co/datasets/ChristophSchuhmann/wikipedia-en-nov22-1-sentence-level and https://huggingface.co/datasets/ChristophSchuhmann/1-sentence-level-gutenberg-en_arxiv_pubmed_soda
to generate example images as shown below.


The dataset contains both line-level and character-level annotations for each example. The annotations are stored in the accompanying JSON files and are of the following form:
```
{
'ocr_annotation':
{'bounding_boxes': [[[145.0, 370.0], [788.0, 353.0], [827.0, 633.0], [182.0, 669.0]]],
'text': ['Joe.'],
'bb_relative': [[[0.1416015625, 0.361328125], [0.76953125, 0.3447265625], [0.8076171875, 0.6181640625], [0.177734375, 0.6533203125]]],
'char': ['J', 'o', 'e', '.'],
'char_idx': [0, 1, 2, 3],
'bb_character_level': [[[145.0, 370.0], [346.0, 365.0], [382.0, 651.0], [181.0, 662.0]], [[375.0, 438.0], [557.0, 431.0], [585.0, 640.0], [402.0, 650.0]], [[578.0, 440.0], [744.0, 434.0], [771.0, 629.0], [604.0, 638.0]], [[778.0, 591.0], [821.0, 589.0], [827.0, 633.0], [784.0, 635.0]]],
'font_path': '/fsx/home-wendlerc/blender-dataset/assets/fonts/fontcollection/HelloScribbles-axapm.ttf',
'font_color': [17, 25, 231],
'text_rotation_angle': 7},
'width':1024,
'height':1024,
}
```
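Since `bb_relative` stores coordinates normalised by the image size, the pixel-space line boxes can be recovered from an annotation like the one above. A small sketch:

```python
def to_pixel_boxes(annotation):
    """Scale the relative line-level bounding boxes back to pixel
    coordinates using the recorded image width and height."""
    w, h = annotation["width"], annotation["height"]
    return [
        [[x * w, y * h] for x, y in box]
        for box in annotation["ocr_annotation"]["bb_relative"]
    ]
```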
Browse a few more examples here: https://colab.research.google.com/drive/1o0rZhtY9aeurzNrAbu6nJypULSIIcf1v?authuser=1 |
MMInstruction/ArxivCap | MMInstruction | "2024-10-03T03:17:00Z" | 12,238 | 50 | [
"task_categories:image-to-text",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2403.00231",
"region:us",
"arxiv",
"multi-modal"
] | [
"image-to-text"
] | "2023-12-01T15:47:54Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
language:
- en
pretty_name: ArxivCap
size_categories:
- 1M<n<10M
tags:
- arxiv
- multi-modal
---
# Dataset Card for ArxivCap
## Table of Contents
- [Dataset Card for ArxivCap](#dataset-card-for-arxivcap)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Curation Process](#curation-process)
- [Dataset Structure](#dataset-structure)
- [Data Loading](#data-loading)
- [Data Fields](#data-fields)
- [Data Instances](#data-instances)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Paper:** [Multimodal ArXiv](https://arxiv.org/abs/2403.00231)
- **Point of Contact:** [email protected]
- **HomePage**: https://mm-arxiv.github.io/
### Data Instances
<details>
<summary>Example-1 of single (image, caption) pairs</summary>
"......" stands for omitted parts.

```
{
'src': 'arXiv_src_2112_060/2112.08947',
'meta':
{
'meta_from_kaggle':
{
'journey': '',
'license': 'http://arxiv.org/licenses/nonexclusive-distrib/1.0/',
'categories': 'cs.ET'
},
'meta_from_s2':
{
'citationCount': 8,
'influentialCitationCount': 0,
'publicationTypes': ['JournalArticle']
}
},
'arxiv_id': '2112.08947',
'title': 'Computational metrics and parameters of an injection-locked large area semiconductor laser for neural network computing',
'abstract': 'Artificial neural networks have become a staple computing technique in many fields. Yet, they present fundamental differences with classical computing hardware in the way they process information. Photonic implementations of neural network architectures potentially offer fundamental advantages over their electronic counterparts in terms of speed, processing parallelism, scalability and energy efficiency. Scalable and high performance photonic neural networks (PNNs) have been demonstrated, yet they remain scarce. In this work, we study the performance of such a scalable, fully parallel and autonomous PNN based on a large area vertical-cavity surface-emitting laser\n(LA-VCSEL). We show how the performance varies with different physical parameters, namely, injection wavelength, injection power, and bias current. Furthermore, we link these physical parameters to the general computational measures of consistency and dimensionality. We present a general method of gauging dimensionality in high dimensional nonlinear systems subject to noise, which could be applied to many systems in the context of neuromorphic computing. Our work will inform future implementations of spatially multiplexed VCSEL PNNs.\n',
'caption_images':
[
{
'caption': '(a) Working principle of the LA-VCSEL spatially multiplexed reservoir. (b) Input information $\\mathbf{u}$ and the subsequent LA-VCSEL response for 3-bit binary headers. The graph shows the target output $y^{\\text{target}}$ (yellow) for classifying header 001 and different reservoir outputs $y^{\\text{out}}$ of decreasing mean square error (MSE) (red, blue and green). (c) Schematic illustration of the error landscape, showing the MSE as a function of the output weights configuration. The outlined (red, blue and green) Boolean matrices correspond to the output weights giving the output from (b). (d) Representative performance of the PNN on a 6-bit header recognition task.',
'cil_pairs':
[
{
'sub_caption': '',
'image_file': 'arXiv_src_2112_060/2112.08947_0.jpg',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2016x1063 at 0x7F098E288040>,
'image_ocr': ['(a)', 'LA-VCSEL', 'DMDa', 'DMD', 'MMF', 'DET', 'Win', 'xt', 'Spatial positions', 'Output', 'Input', 'Wint', 'Carrier diffusion', 'Cavity diffraction', 'Reservoir', '(d)50', '6bit HR', 'Error(MSE)', '830', '001', '000', '001', '100', '001', '111', 'ER', 'S', '10', '0', 'Configuration DMD.', '0', '1000', 'Input examples', 'Learning epochs']
}
]
}
......
]
}
```
</details>
<details>
<summary>Example-2 of multiple images and subcaptions</summary>
"......" stands for omitted parts.

```
{
'src': 'arXiv_src_0309_001/quant-ph0309051',
'meta':
{
'meta_from_kaggle': {'journey': '', 'license': '', 'categories': 'quant-ph'},
'meta_from_s2': {'citationCount': 9, 'influentialCitationCount': 1, 'publicationTypes': ['JournalArticle']}
},
'arxiv_id': 'quant-ph/0309051',
'title': 'Implementing a Quantum Algorithm with Exchange-Coupled Quantum Dots: a Feasibility study.',
'abstract': '\nWe present Monte Carlo wavefunction simulations for quantum computations employing an exchange-coupled array of quantum dots. Employing a combination of experimentally and theoretically available parameters, we find that gate fidelities greater than 98 \\% may be obtained with current experimental and technological capabilities. Application to an encoded 3 qubit\n(nine physical qubits) Deutsch-Josza computation indicates that the algorithmic fidelity is more a question of the total time to implement the gates than of the physical complexity of those gates.\n',
'caption_images':
[
......
{
'caption': 'Representation of analytic sequence of local transformations that transform the 19-exchange sequence $U_{cnot}^{exchange}$ from Ref. {divincenzo00} into the true CNOT in the computational basis. The exchange gates and times corresponding to the elementary local transformations are then obtained using the quaternion representation of the desired $SU(2)$ unitaries (see Appendix <ref> for details).',
'cil_pairs':
[
{
'sub_caption': 'A single qubit gate ($\\frac{\\sqrt{3}}{2}-\\frac{i}{2}\\sigma_y$) acting on the second logical qubit diagonalizes the 19-gate exchange sequence. The resulting diagonal 4-by-4 matrix is then converted into the C-PHASE by $\\sigma_z$-rotations acting on both the first and the second qubit, with angles $\\phi=0.612497$ and $\\theta=-0.547580$, respectively. These values are determined from the analytic solutions to a linear equation system with 3 unknowns: $\\phi$, $\\theta$ and a global phase. See Appendix <ref> for details as to how these parameters were obtained.',
'image_file': 'arXiv_src_0309_001/quant-ph0309051_4.jpg',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2016x493 at 0x7F102471EF70>,
'image_ocr': ['Exch,', '7', 'C', '2', '+', '2', '2', 'CNOT', '2', '2', 'PHASE']
},
{
'sub_caption': 'The C-PHASE gate can be transformed into the CNOT gate by acting with Hadamard gates on the second qubit before and after the C-PHASE gate.',
'image_file': 'arXiv_src_0309_001/quant-ph0309051_5.jpg',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2016x411 at 0x7F102471EDC0>,
'image_ocr': ['C', '2', 'PHASE']
}
]
},
......
]
}
```
</details>
### Dataset Summary
The ArxivCap dataset consists of 6.4 million images and 3.9 million captions with 193 million words from 570k academic papers, accompanied by abstracts and titles. (papers before **June 2023**)
### Curation Process
Refer to our paper for the curation and filtering process.
## Dataset Structure
### Data Loading
```python
from datasets import load_dataset
dataset = load_dataset("MMInstruction/ArxivCap")
dataset["train"] # list of dictionaries
```
---
```bash
# for quick download in linux
set -e
sudo apt-get install git-lfs -y
git clone https://huggingface.co/datasets/MMInstruction/ArxivCap
cd ArxivCap/data
```
```python
# then you can load the parquet files in Python using something like
from datasets import load_dataset

data = load_dataset(
"parquet",
data_files="/path/to/parquet/arXiv_src_9912_001.parquet"
)
```
### Data Fields
One record refers to one paper:
- src: **String**. "\<Arxiv Tar File Name>/\<Folder Name in Tar File>", e.g. "arXiv_src_2112_060/2112.08947"
- arxiv_id: **String**. Arxiv id of the paper, e.g. "2112.08947"
- title: **String**. Title of the paper.
- abstract: **String**. Abstract of the paper.
- meta:
- meta_from_kaggle: refers to [arXiv Dataset](https://www.kaggle.com/datasets/Cornell-University/arxiv)
- journey: **String**. Information about the journal the paper was published in.
- license: **String**. License for the paper.
- categories: **String**. Categories / tags in the ArXiv system.
- meta_from_s2: refers to [SEMANTIC SCHOLAR](https://api.semanticscholar.org/api-docs/#tag/Paper-Data/operation/get_graph_get_paper)
- citationCount: **Integer**. Total number of citations S2 has found for this paper
- influentialCitationCount: **Integer**. Refers [here](https://www.semanticscholar.org/faq#influential-citations)
- publicationTypes: **List[String]**. Journal Article, Conference, Review, etc.
- caption_images:
- caption: **String**. Main caption.
- cil_pairs:
- sub_caption: **String**. Subcaption for the image.
- image_file: **String**. Unique file name for the image.
- image: **PIL.Image.Image**. A PIL.Image.Image object containing the image.
- image_ocr: **List[String]**. OCR result for the image using [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
```python
import datasets
features = datasets.Features(
{
"src": datasets.Value("string"),
"arxiv_id": datasets.Value("string"),
"title": datasets.Value("string"),
"abstract": datasets.Value("string"),
"meta": {
"meta_from_kaggle": {
"journey": datasets.Value("string"),
"license": datasets.Value("string"),
"categories": datasets.Value("string"),
},
"meta_from_s2": {
"citationCount": datasets.Value("int32"),
"influentialCitationCount": datasets.Value("int32"),
"publicationTypes": [datasets.Value("string")],
}
},
"caption_images": [{
"caption": datasets.Value("string"),
"cil_pairs": [{
"sub_caption": datasets.Value("string"),
"image_file": datasets.Value("string"),
"image": datasets.Image(),
"image_ocr": [datasets.Value("string")],
}]
}]
}
)
```
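Each record can then be flattened into plain (image, caption) training pairs by walking `caption_images`. A minimal sketch over the fields listed above, which joins each sub-caption onto the figure's main caption:

```python
def flatten_caption_pairs(record):
    """Yield (image_file, caption) pairs from one ArxivCap record,
    appending each sub-caption to the figure's main caption."""
    pairs = []
    for fig in record["caption_images"]:
        for cil in fig["cil_pairs"]:
            caption = fig["caption"]
            if cil["sub_caption"]:
                caption = f"{caption} {cil['sub_caption']}"
            pairs.append((cil["image_file"], caption))
    return pairs
```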
## Additional Information
### Licensing Information
ArxivCap is released under [CC BY-NC-SA 4.0](http://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```
@inproceedings{li-etal-2024-multimodal-arxiv,
title = "Multimodal {A}r{X}iv: A Dataset for Improving Scientific Comprehension of Large Vision-Language Models",
author = "Li, Lei and
Wang, Yuqi and
Xu, Runxin and
Wang, Peiyi and
Feng, Xiachong and
Kong, Lingpeng and
Liu, Qi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.775",
doi = "10.18653/v1/2024.acl-long.775",
pages = "14369--14387"
}
``` |
DFKI-SLT/argmicro | DFKI-SLT | "2025-03-10T15:29:52Z" | 12,178 | 0 | [
"language:en",
"language:de",
"license:cc-by-nc-sa-4.0",
"size_categories:n<1K",
"region:us"
] | null | "2023-08-08T16:17:53Z" | ---
license: cc-by-nc-sa-4.0
language:
- en
- de
pretty_name: argmicro
size_categories:
- n<1K
---
# Dataset Card for "argmicro"
### Dataset Summary
The arg-microtexts corpus features 112 short argumentative texts. All texts were originally written in German and have been professionally translated to English.
Based on Freeman’s theory of the macro-structure of arguments ([1991](https://api.pageplace.de/preview/DT0400.9783110875843_A19822678/preview-9783110875843_A19822678.pdf); [2011](https://link.springer.com/book/10.1007/978-94-007-0357-5)) and Toulmin’s ([2003](https://www.cambridge.org/core/books/uses-of-argument/26CF801BC12004587B66778297D5567C)) diagramming techniques, ArgMicro consists of `pro` (proponent) and `opp` (opponent) components and six types of relations: `seg` (segment), `add` (addition), `exa` (example), `reb` (rebut), `sup` (support), and `und` (undercut). It also introduces segment-based spans, which also contain non-argumentative parts, in order to cover the whole text.
### Supported Tasks and Leaderboards
- **Tasks:** Structure Prediction, Relation Identification, Central Claim Identification, Role Classification, Function Classification
- **Leaderboards:** \[More Information Needed\]
### Languages
German, with English translation (by a professional translator).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 2.89 MB
```
{
"id": "micro_b001",
"topic_id": "waste_separation",
"stance": 1,
"text": "Yes, it's annoying and cumbersome to separate your rubbish properly all the time. Three different bin bags stink away in the kitchen and have to be sorted into different wheelie bins. But still Germany produces way too much rubbish and too many resources are lost when what actually should be separated and recycled is burnt. We Berliners should take the chance and become pioneers in waste separation!",
"edus": {
"id": ["e1", "e2", "e3", "e4", "e5"],
"start": [0, 82, 184, 232, 326],
"end": [81, 183, 231, 325, 402]
},
"adus": {
"id": ["a1", "a2", "a3", "a4", "a5"],
"type": [0, 0, 1, 1, 1]
},
"edges": {
"id": ["c1", "c10", "c2", "c3", "c4", "c6", "c7", "c8", "c9"],
"src": ["a1", "e5", "a2", "a3", "a4", "e1", "e2", "e3", "e4"],
"trg": ["a5", "a5", "a1", "c1", "c3", "a1", "a2", "a3", "a4"],
"type": [4, 0, 1, 5, 3, 0, 0, 0, 0]
}
}
```
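The `start`/`end` character offsets (start inclusive, end exclusive) make it straightforward to recover each EDU's surface text from an instance like the one above:

```python
def edu_texts(instance):
    """Slice the document text into EDU strings using the character
    offsets stored in the instance (start inclusive, end exclusive)."""
    text = instance["text"]
    return [
        text[start:end]
        for start, end in zip(instance["edus"]["start"], instance["edus"]["end"])
    ]
```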
### Data Fields
- `id`: the instance `id` of the document, a `string` feature
- `topic_id`: the topic of the document, a `string` feature (see [list of topics](https://huggingface.co/datasets/DFKI-SLT/argmicro/blob/main/topics_triggers.md))
- `stance`: the index of stance on the topic, an `int` feature (see [stance labels](https://huggingface.co/datasets/DFKI-SLT/argmicro/blob/main/argmicro.py#L35))
- `text`: the text content of the document, a `string` feature
- `edus`: elementary discourse units; a segmented span of text (see the authors' further [explanation](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd#L17-L20))
- `id`: the instance `id` of EDUs, a list of `string` feature
- `start`: the indices indicating the inclusive start of the spans, a list of `int` feature
- `end`: the indices indicating the exclusive end of the spans, a list of `int` feature
- `adus`: argumentative discourse units; argumentatively relevant claims built on EDUs (see the authors' further [explanation](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd#L22-L28))
- `id`: the instance `id` of ADUs, a list of `string` feature
- `type`: the indices indicating the ADU type, a list of `int` feature (see [type list](https://huggingface.co/datasets/DFKI-SLT/argmicro/blob/main/argmicro.py#L36))
- `edges`: the relations between `adus` or `adus` and other `edges` (see the authors' further [explanation](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd#L39-L47))
- `id`: the instance `id` of edges, a list of `string` feature
- `src`: the `id` of `adus` indicating the source element in a relation, a list of `string` feature
- `trg`: the `id` of `adus` or `edges` indicating the target element in a relation, a list of `string` feature
- `type`: the indices indicating the edge type, a list of `int` feature (see [type list](https://huggingface.co/datasets/DFKI-SLT/argmicro/blob/main/argmicro.py#L37))
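The parallel `id`/`src`/`trg`/`type` lists can be zipped into per-edge records. A minimal sketch using the example instance shown above (pure Python, no extra dependencies; the integer type indices are left as-is — see the linked type list for their meaning):

```python
# Edge lists from the example instance above (parallel arrays).
edges = {
    "id": ["c1", "c10", "c2", "c3", "c4", "c6", "c7", "c8", "c9"],
    "src": ["a1", "e5", "a2", "a3", "a4", "e1", "e2", "e3", "e4"],
    "trg": ["a5", "a5", "a1", "c1", "c3", "a1", "a2", "a3", "a4"],
    "type": [4, 0, 1, 5, 3, 0, 0, 0, 0],
}

# Pair the parallel lists into one dict per edge.
edge_records = [
    {"id": i, "src": s, "trg": t, "type": ty}
    for i, s, t, ty in zip(edges["id"], edges["src"], edges["trg"], edges["type"])
]

# Edges whose target id starts with "c" point at другой edge rather than an ADU;
# per the description above, these correspond to undercutting attacks (ADU->Edge).
undercuts = [e for e in edge_records if e["trg"].startswith("c")]
print(len(edge_records), len(undercuts))  # → 9 2
```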
### Data Splits
| | train |
| -------------------------------------- | ----: |
| No. of instances | 112 |
| No. of sentences/instance (on average) | 5.1 |
### Data Labels
#### Stance
| Stance | Count | Percentage |
| ----------- | ----: | ---------: |
| `pro` | 46 | 41.1 % |
| `con` | 42 | 37.5 % |
| `unclear` | 1 | 0.9 % |
| `UNDEFINED` | 23 | 20.5 % |
- `pro`: yes, in favour of the proposed issue
- `con`: no, against the proposed issue
- `unclear`: the position of the author is unclear
- `UNDEFINED`: no stance label assigned
See the [stance types](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd#L74-L83).
#### ADUs
| ADUs | Count | Percentage |
| ----- | ----: | ---------: |
| `pro` | 451 | 78.3 % |
| `opp` | 125 | 21.7 % |
- `pro`: proponent, who presents and defends his claims
- `opp`: opponent, who critically questions the proponent in a regimented fashion (Peldszus, 2015, p.5)
#### Relations
| Relations | Count | Percentage |
| -------------- | ----: | ---------: |
| support: `sup` | 281 | 55.2 % |
| support: `exa` | 9 | 1.8 % |
| attack: `und` | 65 | 12.8 % |
| attack: `reb` | 110 | 21.6 % |
| other: `joint` | 44 | 8.6 % |
- `sup`: support (ADU->ADU)
- `exa`: support by example (ADU->ADU)
- `add`: additional source, for combined/convergent arguments with multiple premises, i.e., linked support, convergent support, serial support (ADU->ADU)
- `reb`: rebutting attack (ADU->ADU)
- definition: "targeting another node and thereby challenging its acceptability"
- `und`: undercutting attack (ADU->Edge)
- definition: "targeting an edge and thereby challenging the acceptability of the inference from the source to the target node"
([P&S, 2016](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd); [EN annotation guideline](https://www.ling.uni-potsdam.de/~stede/Papers/ArgGuidelinesEnglish.pdf))
- `joint`: combines text segments if one does not express a complete proposition on its own, or if the author divides a clause/sentence into parts, using punctuation
See other corpus statistics in Peldszus (2015), Section 5.
#### Example

(Peldszus & Stede, 2015, p. 940, Figure 1)

## Dataset Creation
This section is composed of information and excerpts provided in Peldszus ([2015](https://peldszus.github.io/files/eca2015-preprint.pdf)).
### Curation Rationale
"Argumentation can, for theoretical purposes, be studied on the basis of carefully constructed examples that illustrate specific phenomena...\[We\] address this need by making a resource publicly available that is designed to fill a particular gap." (pp. 2-3)
### Source Data
23 texts were written by the authors as a “proof of concept” for the idea. These texts also have been used as examples in teaching and testing argumentation analysis with students.
90 texts have been collected in a controlled text generation experiment, where normal competent language users wrote short texts of controlled linguistic and rhetoric complexity.
#### Initial Data Collection and Normalization
"Our contribution is a collection of 112 “microtexts” that have been written in response to trigger questions, mostly in the form of “Should one do X”. The texts are short but at the same time “complete” in that they provide a standpoint and a justification, by necessity in a fairly dense form." (p.2)
"The probands were asked to first gather a list with the pros and cons of the trigger question, then take stance for one side and argue for it on the basis of their reflection in a short argumentative text. Each text was to fulfill three requirements: It should be about five segments long; all segments should be argumentatively relevant, either formulating the main claim of the text, supporting the main claim or another segment, or attacking the main claim or another segment. Also, the probands were asked that at least one possible objection to the claim should be considered in the text. Finally, the text should be written in such a way that it would be understandable without having its trigger question as a headline." (p.3)
"\[A\]ll texts have been corrected for spelling and grammar errors...Their segmentation was corrected when necessary...some modifications in the remaining segments to maintain text coherence, which we made as minimal as possible." (p.4)
"We thus constrained the translation to preserve the segmentation of the text on the one hand (effectively ruling out phrasal translations of clause-type segments) and to preserve its linearization on the other hand (disallowing changes to the order of appearance of arguments)." (p.5)
#### Who are the source language producers?
The texts with ids b001-b064 and k001-k031 have been collected in a controlled text generation experiment from 23 subjects discussing various controversial issues from a fixed list. All probands were native speakers of
German, of varying age, education and profession.
The texts with ids d01-d23 have been written by Andreas Peldszus, the author.
### Annotations
#### Annotation process
All texts are annotated with argumentation structures, following the scheme proposed in Peldszus & Stede ([2013](https://www.ling.uni-potsdam.de/~peldszus/ijcini2013-preprint.pdf)). For inter-annotator-agreement scores see Peldszus (2014). The (German) annotation guidelines are published in Peldszus, Warzecha, Stede (2016). See the annotation guidelines ([de](https://www.ling.uni-potsdam.de/~stede/Papers/ArgGuidelinesGerman.pdf), [en](https://www.ling.uni-potsdam.de/~stede/Papers/ArgGuidelinesEnglish.pdf)), and the [annotation schemes](https://github.com/peldszus/arg-microtexts/blob/master/corpus/arggraph.dtd).
"\[T\]he markup of argumentation structures in the full corpus was done by one expert annotator. All annotations have been checked, controversial instances have been discussed in a reconciliation phase by two or more expert annotators...The annotation of the corpus was originally done manually on paper. In follow-up annotations, we used GraPAT ([Sonntag & Stede, 2014](http://www.lrec-conf.org/proceedings/lrec2014/pdf/824_Paper.pdf))." (p.7)
#### Who are the annotators?
\[More Information Needed\]
### Personal and Sensitive Information
\[More Information Needed\]
## Considerations for Using the Data
### Social Impact of Dataset
"Automatic argumentation recognition has many possible applications, including improving document summarization (Teufel and Moens, 2002), retrieval capabilities of legal databases (Palau and Moens, 2011), opinion mining for commercial purposes, or also as a tool for assessing public
opinion on political questions.
"...\[W\]e suggest there is yet one resource missing that could facilitate the development of automatic argumentation recognition systems: Short texts with explicit argumentation, little argumentatively irrelevant material, less rhetorical gimmicks (or even deception), in clean written language."
(Peldszus, [2014](https://aclanthology.org/W14-2112.pdf), p. 88)
### Discussion of Biases
\[More Information Needed\]
### Other Known Limitations
\[More Information Needed\]
## Additional Information
### Dataset Curators
\[More Information Needed\]
### Licensing Information
The arg-microtexts corpus is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. (see [license agreement](https://creativecommons.org/licenses/by-nc-sa/4.0/))
### Citation Information
```
@inproceedings{peldszus2015annotated,
title={An annotated corpus of argumentative microtexts},
author={Peldszus, Andreas and Stede, Manfred},
booktitle={Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon},
volume={2},
pages={801--815},
year={2015}
}
```
```
@inproceedings{peldszus2014towards,
title={Towards segment-based recognition of argumentation structure in short texts},
author={Peldszus, Andreas},
booktitle={Proceedings of the First Workshop on Argumentation Mining},
pages={88--97},
year={2014}
}
```
### Contributions
Thanks to [@idalr](https://github.com/idalr) for adding this dataset.
|
facebook/voxpopuli | facebook | "2022-10-14T13:43:12Z" | 12,164 | 107 | [
"task_categories:automatic-speech-recognition",
"multilinguality:multilingual",
"language:en",
"language:de",
"language:fr",
"language:es",
"language:pl",
"language:it",
"language:ro",
"language:hu",
"language:cs",
"language:nl",
"language:fi",
"language:hr",
"language:sk",
"language:sl",
"language:et",
"language:lt",
"license:cc0-1.0",
"license:other",
"size_categories:100K<n<1M",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2101.00390",
"region:us"
] | [
"automatic-speech-recognition"
] | "2022-05-10T14:42:49Z" | ---
annotations_creators: []
language:
- en
- de
- fr
- es
- pl
- it
- ro
- hu
- cs
- nl
- fi
- hr
- sk
- sl
- et
- lt
language_creators: []
license:
- cc0-1.0
- other
multilinguality:
- multilingual
pretty_name: VoxPopuli
size_categories: []
source_datasets: []
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for Voxpopuli
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/facebookresearch/voxpopuli
- **Repository:** https://github.com/facebookresearch/voxpopuli
- **Paper:** https://arxiv.org/abs/2101.00390
- **Point of Contact:** [[email protected]](mailto:[email protected]), [[email protected]](mailto:[email protected]), [[email protected]](mailto:[email protected])
### Dataset Summary
VoxPopuli is a large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation.
The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home). We acknowledge the European Parliament for creating and sharing these materials.
This implementation contains transcribed speech data for 18 languages.
It also contains 29 hours of transcribed speech data of non-native English intended for research in ASR for accented speech (15 L2 accents).
### Example usage
VoxPopuli contains labelled data for 18 languages. To load a specific language pass its name as a config name:
```python
from datasets import load_dataset
voxpopuli_croatian = load_dataset("facebook/voxpopuli", "hr")
```
To load all the languages in a single dataset use "multilang" config name:
```python
voxpopuli_all = load_dataset("facebook/voxpopuli", "multilang")
```
To load a specific set of languages, use "multilang" config name and pass a list of required languages to `languages` parameter:
```python
voxpopuli_slavic = load_dataset("facebook/voxpopuli", "multilang", languages=["hr", "sk", "sl", "cs", "pl"])
```
To load accented English data, use "en_accented" config name:
```python
voxpopuli_accented = load_dataset("facebook/voxpopuli", "en_accented")
```
**Note that L2 English subset contains only `test` split.**
### Supported Tasks and Leaderboards
* automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
Accented English subset can also be used for research in ASR for accented speech (15 L2 accents)
### Languages
VoxPopuli contains labelled (transcribed) data for 18 languages:
| Language | Code | Transcribed Hours | Transcribed Speakers | Transcribed Tokens |
|:---:|:---:|:---:|:---:|:---:|
| English | En | 543 | 1313 | 4.8M |
| German | De | 282 | 531 | 2.3M |
| French | Fr | 211 | 534 | 2.1M |
| Spanish | Es | 166 | 305 | 1.6M |
| Polish | Pl | 111 | 282 | 802K |
| Italian | It | 91 | 306 | 757K |
| Romanian | Ro | 89 | 164 | 739K |
| Hungarian | Hu | 63 | 143 | 431K |
| Czech | Cs | 62 | 138 | 461K |
| Dutch | Nl | 53 | 221 | 488K |
| Finnish | Fi | 27 | 84 | 160K |
| Croatian | Hr | 43 | 83 | 337K |
| Slovak | Sk | 35 | 96 | 270K |
| Slovene | Sl | 10 | 45 | 76K |
| Estonian | Et | 3 | 29 | 18K |
| Lithuanian | Lt | 2 | 21 | 10K |
| Total | | 1791 | 4295 | 15M |
Accented speech transcribed data has 15 various L2 accents:
| Accent | Code | Transcribed Hours | Transcribed Speakers |
|:---:|:---:|:---:|:---:|
| Dutch | en_nl | 3.52 | 45 |
| German | en_de | 3.52 | 84 |
| Czech | en_cs | 3.30 | 26 |
| Polish | en_pl | 3.23 | 33 |
| French | en_fr | 2.56 | 27 |
| Hungarian | en_hu | 2.33 | 23 |
| Finnish | en_fi | 2.18 | 20 |
| Romanian | en_ro | 1.85 | 27 |
| Slovak | en_sk | 1.46 | 17 |
| Spanish | en_es | 1.42 | 18 |
| Italian | en_it | 1.11 | 15 |
| Estonian | en_et | 1.08 | 6 |
| Lithuanian | en_lt | 0.65 | 7 |
| Croatian | en_hr | 0.42 | 9 |
| Slovene | en_sl | 0.25 | 7 |
## Dataset Structure
### Data Instances
```python
{
'audio_id': '20180206-0900-PLENARY-15-hr_20180206-16:10:06_5',
'language': 11, # "hr"
'audio': {
'path': '/home/polina/.cache/huggingface/datasets/downloads/extracted/44aedc80bb053f67f957a5f68e23509e9b181cc9e30c8030f110daaedf9c510e/train_part_0/20180206-0900-PLENARY-15-hr_20180206-16:10:06_5.wav',
'array': array([-0.01434326, -0.01055908, 0.00106812, ..., 0.00646973], dtype=float32),
'sampling_rate': 16000
},
'raw_text': '',
 'normalized_text': 'pošast genitalnog sakaćenja žena u europi tek je jedna od manifestacija takve štetne politike.',
'gender': 'female',
'speaker_id': '119431',
'is_gold_transcript': True,
'accent': 'None'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `language` (datasets.ClassLabel) - numerical id of the language of the audio segment (e.g. `11` for `hr`)
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `raw_text` (string) - original (orthographic) audio segment text
* `normalized_text` (string) - normalized audio segment transcription
* `gender` (string) - gender of speaker
* `speaker_id` (string) - id of speaker
* `is_gold_transcript` (bool) - ?
* `accent` (string) - type of accent, for example "en_lt", if applicable, else "None".
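Since `audio` carries both the decoded `array` and its `sampling_rate`, a clip's duration can be computed directly — a small illustration with a placeholder instance (the values below are made up, not taken from the dataset):

```python
# A dummy instance mimicking the structure documented above.
instance = {
    "audio_id": "example_0",
    "audio": {
        "array": [0.0] * 32000,  # 32000 samples of silence (placeholder data)
        "sampling_rate": 16000,  # VoxPopuli audio is sampled at 16 kHz
    },
}

# Duration in seconds = number of samples / sampling rate.
duration = len(instance["audio"]["array"]) / instance["audio"]["sampling_rate"]
print(f"{duration:.1f} s")  # → 2.0 s
```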
### Data Splits
All configs (languages) except for accented English contain data in three splits: train, validation and test. Accented English `en_accented` config contains only test split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home)
#### Initial Data Collection and Normalization
The VoxPopuli transcribed set comes from aligning the full-event source speech audio with the transcripts for plenary sessions. Official timestamps
are available for locating speeches by speaker in the full session, but they are frequently inaccurate, resulting in truncation of the speech or mixture
of fragments from the preceding or the succeeding speeches. To calibrate the original timestamps,
we perform speaker diarization (SD) on the full-session audio using pyannote.audio (Bredin et al., 2020) and adopt the nearest SD timestamps (by L1 distance to the original ones) instead for segmentation.
Full-session audios are segmented into speech paragraphs by speaker, each of which has a transcript available.
The speech paragraphs have an average duration of 197 seconds, which leads to significant memory usage in model training. We hence further segment these paragraphs into utterances with a
maximum duration of 20 seconds. We leverage speech recognition (ASR) systems to force-align speech paragraphs to the given transcripts.
The ASR systems are TDS models (Hannun et al., 2019) trained with ASG criterion (Collobert et al., 2016) on audio tracks from in-house deidentified video data.
The resulting utterance segments may have incorrect transcriptions due to incomplete raw transcripts or inaccurate ASR force-alignment.
We use the predictions from the same ASR systems as references and filter the candidate segments by a maximum threshold of 20% character error rate (CER).
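The 20% CER filter described above can be sketched with a plain edit-distance implementation (an illustration of the criterion only, not the original filtering code):

```python
def character_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance between the strings, normalized by reference length."""
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j].
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n] / max(m, 1)

# Keep a candidate segment only if ASR output and transcript agree closely.
cer = character_error_rate("hello world", "hella world")
print(cer, cer <= 0.20)  # 1 substitution over 11 chars ≈ 0.09 → kept
```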
#### Who are the source language producers?
Speakers are participants of the European Parliament events, many of them are EU officials.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
The speaker gender distribution is imbalanced: the percentage of female speakers is mostly lower than 50% across languages, with a minimum of 15% for the Lithuanian language data.
VoxPopuli includes all available speeches from the 2009-2020 EP events without any selections on the topics or speakers.
The speech contents represent the standpoints of the speakers in the EP events, many of which are EU officials.
### Other Known Limitations
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is distributed under the CC0 license; see also the [European Parliament's legal notice](https://www.europarl.europa.eu/legal-notice/en/) for the raw data.
### Citation Information
Please cite this paper:
```bibtex
@inproceedings{wang-etal-2021-voxpopuli,
title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation",
author = "Wang, Changhan and
Riviere, Morgane and
Lee, Ann and
Wu, Anne and
Talnikar, Chaitanya and
Haziza, Daniel and
Williamson, Mary and
Pino, Juan and
Dupoux, Emmanuel",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.80",
pages = "993--1003",
}
```
### Contributions
Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
|
songlab/TraitGym | songlab | "2025-03-25T19:09:05Z" | 12,145 | 6 | [
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dna",
"variant-effect-prediction",
"biology",
"genomics"
] | null | "2025-01-26T23:37:15Z" | ---
license: mit
tags:
- dna
- variant-effect-prediction
- biology
- genomics
configs:
- config_name: "mendelian_traits"
data_files:
- split: test
path: "mendelian_traits_matched_9/test.parquet"
- config_name: "complex_traits"
data_files:
- split: test
path: "complex_traits_matched_9/test.parquet"
- config_name: "mendelian_traits_full"
data_files:
- split: test
path: "mendelian_traits_all/test.parquet"
- config_name: "complex_traits_full"
data_files:
- split: test
path: "complex_traits_all/test.parquet"
---
# 🧬 TraitGym
[Benchmarking DNA Sequence Models for Causal Regulatory Variant Prediction in Human Genetics](https://www.biorxiv.org/content/10.1101/2025.02.11.637758v1)
🏆 Leaderboard: https://huggingface.co/spaces/songlab/TraitGym-leaderboard
## ⚡️ Quick start
- Load a dataset
```python
from datasets import load_dataset
dataset = load_dataset("songlab/TraitGym", "mendelian_traits", split="test")
```
- Example notebook to run variant effect prediction with a gLM, runs in 5 min on Google Colab: `TraitGym.ipynb` [](https://colab.research.google.com/github/songlab-cal/TraitGym/blob/main/TraitGym.ipynb)
## 🤗 Resources (https://huggingface.co/datasets/songlab/TraitGym)
- Datasets: `{dataset}/test.parquet`
- Subsets: `{dataset}/subset/{subset}.parquet`
- Features: `{dataset}/features/{features}.parquet`
- Predictions: `{dataset}/preds/{subset}/{model}.parquet`
- Metrics: `{dataset}/{metric}/{subset}/{model}.csv`
`dataset` examples (`load_dataset` config name):
- `mendelian_traits_matched_9` (`mendelian_traits`)
- `complex_traits_matched_9` (`complex_traits`)
- `mendelian_traits_all` (`mendelian_traits_full`)
- `complex_traits_all` (`complex_traits_full`)
`subset` examples:
- `all` (default)
- `3_prime_UTR_variant`
- `disease`
- `BMI`
`features` examples:
- `GPN-MSA_LLR`
- `GPN-MSA_InnerProducts`
- `Borzoi_L2`
`model` examples:
- `GPN-MSA_LLR.minus.score`
- `GPN-MSA.LogisticRegression.chrom`
- `CADD+GPN-MSA+Borzoi.LogisticRegression.chrom`
`metric` examples:
- `AUPRC_by_chrom_weighted_average` (main metric)
- `AUPRC`
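As a rough sketch of the main metric's shape, one can compute AUPRC per chromosome and combine the values as a weighted average. The weighting below (number of positive variants per chromosome) is an assumption for illustration — check the TraitGym code for the exact definition used on the leaderboard:

```python
import numpy as np
from sklearn.metrics import average_precision_score

def auprc_by_chrom_weighted_average(labels, scores, chroms):
    """Per-chromosome AUPRC, combined as a weighted average.
    NOTE: weighting by positives per chromosome is an assumption made for
    this sketch; the benchmark's exact weighting may differ."""
    labels, scores, chroms = map(np.asarray, (labels, scores, chroms))
    auprcs, weights = [], []
    for c in np.unique(chroms):
        mask = chroms == c
        auprcs.append(average_precision_score(labels[mask], scores[mask]))
        weights.append(labels[mask].sum())
    return float(np.average(auprcs, weights=weights))

# Toy example: scores perfectly rank positives above negatives in both chroms.
labels = [1, 0, 1, 0]
scores = [0.9, 0.1, 0.8, 0.2]
chroms = ["chr1", "chr1", "chr2", "chr2"]
result = auprc_by_chrom_weighted_average(labels, scores, chroms)
print(result)  # → 1.0 (perfect ranking in every chromosome)
```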
## 💻 Code (https://github.com/songlab-cal/TraitGym)
- Tries to follow [recommended Snakemake structure](https://snakemake.readthedocs.io/en/stable/snakefiles/deployment.html)
- GPN-Promoter code is in [the main GPN repo](https://github.com/songlab-cal/gpn)
### Installation
First, clone the repo and `cd` into it.
Second, install the dependencies:
```bash
conda env create -f workflow/envs/general.yaml
conda activate TraitGym
```
Optionally, download precomputed datasets and predictions (6.7G):
```bash
mkdir -p results/dataset
huggingface-cli download songlab/TraitGym --repo-type dataset --local-dir results/dataset/
```
### Running
To compute a specific result, specify its path:
```bash
snakemake --cores all <path>
```
Example paths (these are already computed):
```bash
# zero-shot LLR
results/dataset/complex_traits_matched_9/AUPRC_by_chrom_weighted_average/all/GPN-MSA_absLLR.plus.score.csv
# logistic regression/linear probing
results/dataset/complex_traits_matched_9/AUPRC_by_chrom_weighted_average/all/GPN-MSA.LogisticRegression.chrom.csv
```
We recommend the following:
```bash
# Snakemake sometimes gets confused about which files it needs to rerun; this forces
# it not to rerun any existing file
snakemake --cores all <path> --touch
# to output an execution plan
snakemake --cores all <path> --dry-run
```
To evaluate your own set of model features, place a dataframe of shape `n_variants,n_features` in `results/dataset/{dataset}/features/{features}.parquet`.
For zero-shot evaluation of column `{feature}` and sign `{sign}` (`plus` or `minus`), you would invoke:
```bash
snakemake --cores all results/dataset/{dataset}/{metric}/all/{features}.{sign}.{feature}.csv
```
To train and evaluate a logistic regression model, you would invoke:
```bash
snakemake --cores all results/dataset/{dataset}/{metric}/all/{feature_set}.LogisticRegression.chrom.csv
```
where `{feature_set}` should first be defined in `feature_sets` in `config/config.yaml` (this allows combining features defined in different files).
## Citation
[Link to paper](https://www.biorxiv.org/content/10.1101/2025.02.11.637758v2)
```bibtex
@article{traitgym,
title={Benchmarking DNA Sequence Models for Causal Regulatory Variant Prediction in Human Genetics},
author={Benegas, Gonzalo and Eraslan, G{\"o}kcen and Song, Yun S},
journal={bioRxiv},
pages={2025--02},
year={2025},
publisher={Cold Spring Harbor Laboratory}
}
``` |
luulinh90s/chm-corr-prj-giang | luulinh90s | "2024-07-06T14:42:17Z" | 12,113 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-10-03T01:26:35Z" | ---
license: mit
---
|
fka/awesome-chatgpt-prompts | fka | "2025-01-06T00:02:53Z" | 12,111 | 7,643 | [
"task_categories:question-answering",
"license:cc0-1.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"ChatGPT"
] | [
"question-answering"
] | "2022-12-13T23:47:45Z" | ---
license: cc0-1.0
tags:
- ChatGPT
task_categories:
- question-answering
size_categories:
- 100K<n<1M
---
<p align="center"><h1>🧠 Awesome ChatGPT Prompts [CSV dataset]</h1></p>
This is a Dataset Repository of **Awesome ChatGPT Prompts**
**[View All Prompts on GitHub](https://github.com/f/awesome-chatgpt-prompts)**
# License
CC-0 |
fleaven/Retargeted_AMASS_for_robotics | fleaven | "2025-02-21T14:16:52Z" | 12,084 | 5 | [
"task_categories:robotics",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"region:us",
"AMASS",
"Retarget",
"Robotics",
"Humanoid"
] | [
"robotics"
] | "2025-01-25T04:25:24Z" | ---
license: cc-by-4.0
task_categories:
- robotics
language:
- en
tags:
- AMASS
- Retarget
- Robotics
- Humanoid
pretty_name: Retargeted AMASS for Robotics
size_categories:
- 10K<n<100K
---
# Retargeted AMASS for Robotics
## Project Overview
This project aims to retarget motion data from the AMASS dataset to various robot models and open-source the retargeted data to facilitate research and applications in robotics and human-robot interaction. AMASS (Archive of Motion Capture as Surface Shapes) is a high-quality human motion capture dataset, and the SMPL-X model is a powerful tool for generating realistic human motion data.
By adapting the motion data from AMASS to different robot models, we hope to provide a more diverse and accessible motion dataset for robot training and human-robot interaction.
## Dataset Content
This open-source project includes the following:
1. **Retargeted Motions**: Motion files retargeted from AMASS to various robot models.
- **Unitree G1**:
<iframe src="//player.bilibili.com/player.html?bvid=BV1zd6iYkEZ2&page=1&high_quality=1&danmaku=0" allowfullscreen="allowfullscreen" width="100%" height="500" scrolling="no" frameborder="0" sandbox="allow-top-navigation allow-same-origin allow-forms allow-scripts"></iframe>
The retargeted motions for the Unitree G1 robot are generated based on the official open-source model provided by Unitree.
https://github.com/unitreerobotics/unitree_ros/blob/master/robots/g1_description/g1_29dof_rev_1_0.xml
The joint positions comply with the constraints defined in the XML file.
data shape: `[-1, 36]`
- `0:3` root world position
- `3:7` root quaternion rotation, order: xyzw
- `7:36` joint positions
joint order:
```txt
left_hip_pitch_joint
left_hip_roll_joint
left_hip_yaw_joint
left_knee_joint
left_ankle_pitch_joint
left_ankle_roll_joint
right_hip_pitch_joint
right_hip_roll_joint
right_hip_yaw_joint
right_knee_joint
right_ankle_pitch_joint
right_ankle_roll_joint
waist_yaw_joint
waist_roll_joint
waist_pitch_joint
left_shoulder_pitch_joint
left_shoulder_roll_joint
left_shoulder_yaw_joint
left_elbow_joint
left_wrist_roll_joint
left_wrist_pitch_joint
left_wrist_yaw_joint
right_shoulder_pitch_joint
right_shoulder_roll_joint
right_shoulder_yaw_joint
right_elbow_joint
right_wrist_roll_joint
right_wrist_pitch_joint
right_wrist_yaw_joint
```
- **Others**: Future Updates
2. **Usage Examples**: Code examples on how to use the retargeted data.
./g1/visualize.py
3. **License Files**: Original license information for each sub-dataset within AMASS.
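The `[-1, 36]` frame layout described above can be split into its components with simple slicing — a sketch assuming the motion data is loaded as a NumPy array (the actual file format and loader are not shown here):

```python
import numpy as np

# Placeholder motion clip: 100 frames of zeros in the documented layout.
frames = np.zeros((100, 36), dtype=np.float32)

root_pos = frames[:, 0:3]    # root world position (x, y, z)
root_quat = frames[:, 3:7]   # root rotation quaternion, order xyzw
joint_pos = frames[:, 7:36]  # 29 joint positions, in the joint order listed above

print(root_pos.shape, root_quat.shape, joint_pos.shape)
# → (100, 3) (100, 4) (100, 29)
```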
## License
The retargeted data in this project is derived from the AMASS dataset and therefore adheres to the original license terms of AMASS. Each sub-dataset within AMASS may have different licenses, so please ensure compliance with the following requirements when using the data:
- **Propagate Original Licenses**: When using or distributing the retargeted data, you must include and comply with the original licenses of the sub-datasets within AMASS.
- **Attribution Requirements**: Properly cite this work and the original authors and sources of the AMASS dataset and its sub-datasets.
For detailed license information, please refer to the `LICENSE` file in this project.
## Acknowledgments
This project is built on the AMASS dataset and the SMPL-X model. Special thanks to the research team at the Max Planck Institute for Intelligent Systems for providing this valuable resource.
## Citation
If you use the data or code from this project, please cite this work and relevant papers for AMASS and SMPL-X:
```bibtex
@misc{Retargeted_AMASS_R,
title={Retargeted AMASS for Robotics},
author={Kun Zhao},
url={https://huggingface.co/datasets/fleaven/Retargeted_AMASS_for_robotics}
}
@inproceedings{AMASS2019,
title={AMASS: Archive of Motion Capture as Surface Shapes},
author={Mahmood, Naureen and Ghorbani, Nima and Troje, Nikolaus F. and Pons-Moll, Gerard and Black, Michael J.},
booktitle={International Conference on Computer Vision (ICCV)},
year={2019}
}
@inproceedings{SMPL-X2019,
title={Expressive Body Capture: 3D Hands, Face, and Body from a Single Image},
author={Pavlakos, Georgios and Choutas, Vasileios and Ghorbani, Nima and Bolkart, Timo and Osman, Ahmed A. A. and Tzionas, Dimitrios and Black, Michael J.},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2019}
}
```
## Contact
For any questions or suggestions, please contact:
- **Kun Zhao**: [email protected]
For more information, follow my Xiaohongshu and Bilibili:
https://www.xiaohongshu.com/user/profile/60cdc5360000000001007e33
https://space.bilibili.com/678369952 |
haonan-li/cmmlu | haonan-li | "2023-07-13T10:19:29Z" | 12,054 | 66 | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"language:zh",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2306.09212",
"region:us",
"chinese",
"llm",
"evaluation"
] | [
"multiple-choice",
"question-answering"
] | "2023-06-25T16:37:44Z" | ---
license: cc-by-nc-4.0
task_categories:
- multiple-choice
- question-answering
language:
- zh
tags:
- chinese
- llm
- evaluation
pretty_name: CMMLU
size_categories:
- 10K<n<100K
---
# CMMLU: Measuring massive multitask language understanding in Chinese
- **Homepage:** [https://github.com/haonan-li/CMMLU](https://github.com/haonan-li/CMMLU)
- **Repository:** [https://huggingface.co/datasets/haonan-li/cmmlu](https://huggingface.co/datasets/haonan-li/cmmlu)
- **Paper:** [CMMLU: Measuring Chinese Massive Multitask Language Understanding](https://arxiv.org/abs/2306.09212).
## Table of Contents
- [Introduction](#introduction)
- [Leaderboard](#leaderboard)
- [Data](#data)
- [Citation](#citation)
- [License](#license)
## Introduction
CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context.
CMMLU covers a wide range of subjects, comprising 67 topics that span from elementary to advanced professional levels. It includes subjects that require computational expertise, such as physics and mathematics, as well as disciplines within humanities and social sciences.
Many of these tasks are not easily translatable from other languages due to their specific contextual nuances and wording.
Furthermore, numerous tasks within CMMLU have answers that are specific to China and may not be universally applicable or considered correct in other regions or languages.
## Leaderboard
Latest leaderboard is in our [github](https://github.com/haonan-li/CMMLU).
## Data
We provide a development set and a test set for each of the 67 subjects, with 5 questions in the development set and 100+ questions in the test set.
Each question in the dataset is a multiple-choice question with 4 choices, of which exactly one is correct.
Here are two examples:
```
题目:同一物种的两类细胞各产生一种分泌蛋白,组成这两种蛋白质的各种氨基酸含量相同,但排列顺序不同。其原因是参与这两种蛋白质合成的:
A. tRNA种类不同
B. 同一密码子所决定的氨基酸不同
C. mRNA碱基序列不同
D. 核糖体成分不同
答案是:C
```
```
题目:某种植物病毒V是通过稻飞虱吸食水稻汁液在水稻间传播的。稻田中青蛙数量的增加可减少该病毒在水稻间的传播。下列叙述正确的是:
A. 青蛙与稻飞虱是捕食关系
B. 水稻和病毒V是互利共生关系
C. 病毒V与青蛙是寄生关系
D. 水稻与青蛙是竞争关系
答案是:
```
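Questions in this format are typically rendered into a single prompt string for an LLM, with the answer letter appended for in-context examples and left blank for the item under evaluation. A minimal sketch of such a formatter — the field names `Question`, `A`–`D`, and `Answer` are assumptions about the dataset schema, not taken from this card:

```python
def format_example(item: dict, include_answer: bool = True) -> str:
    """Render one CMMLU-style item as a prompt segment.

    Assumed field names: Question, A, B, C, D, Answer.
    """
    prompt = f"题目:{item['Question']}\n"
    for choice in ("A", "B", "C", "D"):
        prompt += f"{choice}. {item[choice]}\n"
    prompt += "答案是:"  # leave the answer blank for the item being evaluated
    if include_answer:
        prompt += item["Answer"]
    return prompt


# Hypothetical item mirroring the first example above
example = {
    "Question": "同一物种的两类细胞各产生一种分泌蛋白,其原因是:",
    "A": "tRNA种类不同",
    "B": "同一密码子所决定的氨基酸不同",
    "C": "mRNA碱基序列不同",
    "D": "核糖体成分不同",
    "Answer": "C",
}
print(format_example(example))
```

With `include_answer=False` the same function produces the open-ended form shown in the second example, ending at `答案是:`.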
#### Load data
```python
from datasets import load_dataset
cmmlu = load_dataset("haonan-li/cmmlu", "agronomy")
print(cmmlu['test'][0])
```
#### Load all data at once
```python
task_list = ['agronomy', 'anatomy', 'ancient_chinese', 'arts', 'astronomy', 'business_ethics', 'chinese_civil_service_exam', 'chinese_driving_rule', 'chinese_food_culture', 'chinese_foreign_policy', 'chinese_history', 'chinese_literature',
'chinese_teacher_qualification', 'clinical_knowledge', 'college_actuarial_science', 'college_education', 'college_engineering_hydrology', 'college_law', 'college_mathematics', 'college_medical_statistics', 'college_medicine', 'computer_science',
'computer_security', 'conceptual_physics', 'construction_project_management', 'economics', 'education', 'electrical_engineering', 'elementary_chinese', 'elementary_commonsense', 'elementary_information_and_technology', 'elementary_mathematics',
'ethnology', 'food_science', 'genetics', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_geography', 'high_school_mathematics', 'high_school_physics', 'high_school_politics', 'human_sexuality',
'international_law', 'journalism', 'jurisprudence', 'legal_and_moral_basis', 'logical', 'machine_learning', 'management', 'marketing', 'marxist_theory', 'modern_chinese', 'nutrition', 'philosophy', 'professional_accounting', 'professional_law',
'professional_medicine', 'professional_psychology', 'public_relations', 'security_study', 'sociology', 'sports_science', 'traditional_chinese_medicine', 'virology', 'world_history', 'world_religions']
from datasets import load_dataset
cmmlu = {k: load_dataset("haonan-li/cmmlu", k) for k in task_list}
```
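A common way to use the two splits together is few-shot evaluation: the 5 development questions are concatenated (with answers) as in-context examples before each unanswered test question. A minimal, self-contained sketch of that prompt construction — field names `Question`, `A`–`D`, and `Answer` are assumed, and the items below are placeholders rather than real dataset rows:

```python
def item_to_text(item: dict, with_answer: bool) -> str:
    # Assumed schema: Question, A, B, C, D, Answer.
    lines = [f"题目:{item['Question']}"]
    lines += [f"{c}. {item[c]}" for c in "ABCD"]
    lines.append("答案是:" + (item["Answer"] if with_answer else ""))
    return "\n".join(lines)


def build_few_shot_prompt(dev_items: list, test_item: dict, k: int = 5) -> str:
    """Concatenate k answered dev questions, then the unanswered test question."""
    shots = [item_to_text(it, with_answer=True) for it in dev_items[:k]]
    return "\n\n".join(shots + [item_to_text(test_item, with_answer=False)])


# Placeholder items standing in for cmmlu[task]['dev'] / cmmlu[task]['test'] rows
dev = [
    {"Question": f"示例问题{i}", "A": "甲", "B": "乙", "C": "丙", "D": "丁", "Answer": "A"}
    for i in range(5)
]
test = {"Question": "待预测问题", "A": "甲", "B": "乙", "C": "丙", "D": "丁", "Answer": "B"}
print(build_few_shot_prompt(dev, test))
```

The prompt ends at `答案是:`, so the model's next token(s) can be compared against the gold answer letter.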
## Citation
```
@misc{li2023cmmlu,
title={CMMLU: Measuring massive multitask language understanding in Chinese},
author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},
year={2023},
eprint={2306.09212},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The CMMLU dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
|
japanese-asr/whisper_transcriptions.reazon_speech_all | japanese-asr | "2024-09-14T08:02:36Z" | 12,025 | 2 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-07T13:00:19Z" | ---
dataset_info:
- config_name: subset_0
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12059096252.0
num_examples: 82105
download_size: 11943682535
dataset_size: 12059096252.0
- config_name: subset_1
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12030017758.0
num_examples: 82105
download_size: 11915679367
dataset_size: 12030017758.0
- config_name: subset_2
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12050113720.0
num_examples: 82105
download_size: 11935583171
dataset_size: 12050113720.0
- config_name: subset_3
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12080501389.0
num_examples: 82105
download_size: 11965552797
dataset_size: 12080501389.0
- config_name: subset_4
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12018838498.0
num_examples: 82105
download_size: 11904983256
dataset_size: 12018838498.0
- config_name: subset_5
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 554868.0
num_examples: 3
download_size: 556602
dataset_size: 554868.0
- config_name: subset_6
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12018309045.0
num_examples: 82105
download_size: 11905167118
dataset_size: 12018309045.0
- config_name: subset_7
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12021045031.0
num_examples: 82105
download_size: 11907133113
dataset_size: 12021045031.0
- config_name: subset_8
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12011675437.0
num_examples: 82105
download_size: 11899346300
dataset_size: 12011675437.0
- config_name: subset_9
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12105522224.0
num_examples: 82105
download_size: 11991289103
dataset_size: 12105522224.0
- config_name: subset_10
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12073607251.0
num_examples: 82105
download_size: 11958751264
dataset_size: 12073607251.0
- config_name: subset_11
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12078826656.0
num_examples: 82105
download_size: 11963743949
dataset_size: 12078826656.0
- config_name: subset_12
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12015147935.0
num_examples: 82105
download_size: 11901777926
dataset_size: 12015147935.0
- config_name: subset_13
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11998772302.0
num_examples: 82105
download_size: 11886522676
dataset_size: 11998772302.0
- config_name: subset_14
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11960939347.0
num_examples: 81918
download_size: 11849174493
dataset_size: 11960939347.0
- config_name: subset_15
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11950791393.0
num_examples: 81918
download_size: 11836876665
dataset_size: 11950791393.0
- config_name: subset_16
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11962613928.0
num_examples: 81918
download_size: 11849189670
dataset_size: 11962613928.0
- config_name: subset_17
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12034059135.0
num_examples: 81918
download_size: 11919720586
dataset_size: 12034059135.0
- config_name: subset_18
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12029508607.0
num_examples: 81918
download_size: 11915103251
dataset_size: 12029508607.0
- config_name: subset_19
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12020029808.0
num_examples: 81918
download_size: 11905671804
dataset_size: 12020029808.0
- config_name: subset_20
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12034071380.0
num_examples: 81918
download_size: 11918830216
dataset_size: 12034071380.0
- config_name: subset_21
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 421316.0
num_examples: 5
download_size: 424446
dataset_size: 421316.0
- config_name: subset_22
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11974458372.0
num_examples: 81918
download_size: 11859955735
dataset_size: 11974458372.0
- config_name: subset_23
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11974247512.0
num_examples: 81918
download_size: 11859862875
dataset_size: 11974247512.0
- config_name: subset_24
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12011667188.0
num_examples: 81918
download_size: 11896740878
dataset_size: 12011667188.0
- config_name: subset_25
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11902955096.0
num_examples: 81918
download_size: 11790805681
dataset_size: 11902955096.0
- config_name: subset_26
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11922214736.0
num_examples: 81918
download_size: 11809945499
dataset_size: 11922214736.0
- config_name: subset_27
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12026454481.0
num_examples: 81918
download_size: 11911856866
dataset_size: 12026454481.0
- config_name: subset_28
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12004954475.0
num_examples: 81918
download_size: 11891318814
dataset_size: 12004954475.0
- config_name: subset_29
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11978477351.0
num_examples: 81918
download_size: 11865338992
dataset_size: 11978477351.0
- config_name: subset_30
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11996780266.0
num_examples: 81685
download_size: 11868820371
dataset_size: 11996780266.0
- config_name: subset_31
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11999141370.0
num_examples: 81685
download_size: 11870630596
dataset_size: 11999141370.0
- config_name: subset_32
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11996118410.0
num_examples: 81685
download_size: 11868183558
dataset_size: 11996118410.0
- config_name: subset_33
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11958173618.0
num_examples: 81685
download_size: 11831397658
dataset_size: 11958173618.0
- config_name: subset_34
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12038055648.0
num_examples: 81685
download_size: 11909042004
dataset_size: 12038055648.0
- config_name: subset_35
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11979060046.0
num_examples: 81685
download_size: 11851650521
dataset_size: 11979060046.0
- config_name: subset_36
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11991059043.0
num_examples: 81685
download_size: 11864921569
dataset_size: 11991059043.0
- config_name: subset_37
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 851642.0
num_examples: 7
download_size: 854183
dataset_size: 851642.0
- config_name: subset_38
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11966229677.0
num_examples: 81685
download_size: 11838873093
dataset_size: 11966229677.0
- config_name: subset_39
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12032636737.0
num_examples: 81685
download_size: 11905710689
dataset_size: 12032636737.0
- config_name: subset_40
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11995996455.0
num_examples: 81685
download_size: 11869187668
dataset_size: 11995996455.0
- config_name: subset_41
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11948487161.0
num_examples: 81685
download_size: 11821928829
dataset_size: 11948487161.0
- config_name: subset_42
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11982759158.0
num_examples: 81685
download_size: 11854662772
dataset_size: 11982759158.0
- config_name: subset_43
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11995616891.0
num_examples: 81685
download_size: 11868391212
dataset_size: 11995616891.0
- config_name: subset_44
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12035405363.0
num_examples: 81685
download_size: 11903634378
dataset_size: 12035405363.0
- config_name: subset_45
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11961098318.0
num_examples: 81685
download_size: 11833542414
dataset_size: 11961098318.0
- config_name: subset_46
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11934466855.0
num_examples: 81703
download_size: 11809138656
dataset_size: 11934466855.0
- config_name: subset_47
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11986272774.0
num_examples: 81703
download_size: 11859253674
dataset_size: 11986272774.0
- config_name: subset_48
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11992937240.0
num_examples: 81703
download_size: 11866551108
dataset_size: 11992937240.0
- config_name: subset_49
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11956586943.0
num_examples: 81703
download_size: 11827950943
dataset_size: 11956586943.0
- config_name: subset_50
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11988154953.0
num_examples: 81703
download_size: 11861327411
dataset_size: 11988154953.0
- config_name: subset_51
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11990008267.0
num_examples: 81703
download_size: 11862649653
dataset_size: 11990008267.0
- config_name: subset_52
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11985705717.0
num_examples: 81703
download_size: 11857895339
dataset_size: 11985705717.0
- config_name: subset_54
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12033489657.0
num_examples: 81703
download_size: 11905433712
dataset_size: 12033489657.0
- config_name: subset_55
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12039244242.0
num_examples: 81703
download_size: 11911020817
dataset_size: 12039244242.0
- config_name: subset_56
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11990488593.0
num_examples: 81703
download_size: 11863408781
dataset_size: 11990488593.0
- config_name: subset_57
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11988643508.0
num_examples: 81703
download_size: 11861389954
dataset_size: 11988643508.0
- config_name: subset_58
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12006435445.0
num_examples: 81703
download_size: 11878008470
dataset_size: 12006435445.0
- config_name: subset_59
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12035103147.0
num_examples: 81703
download_size: 11906359796
dataset_size: 12035103147.0
- config_name: subset_60
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12042085027.0
num_examples: 81703
download_size: 11914595452
dataset_size: 12042085027.0
- config_name: subset_61
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12046218095.0
num_examples: 81703
download_size: 11918648337
dataset_size: 12046218095.0
- config_name: subset_62
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11962822951.0
num_examples: 81604
download_size: 11847949851
dataset_size: 11962822951.0
- config_name: subset_63
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11965649034.0
num_examples: 81604
download_size: 11851137108
dataset_size: 11965649034.0
- config_name: subset_64
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11924591439.0
num_examples: 81604
download_size: 11811859429
dataset_size: 11924591439.0
- config_name: subset_65
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11896531489.0
num_examples: 81604
download_size: 11786518951
dataset_size: 11896531489.0
- config_name: subset_66
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11929245606.0
num_examples: 81604
download_size: 11815481215
dataset_size: 11929245606.0
- config_name: subset_67
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11973018789.0
num_examples: 81604
download_size: 11860697926
dataset_size: 11973018789.0
- config_name: subset_68
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11977150753.0
num_examples: 81604
download_size: 11862596778
dataset_size: 11977150753.0
- config_name: subset_69
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11996446097.0
num_examples: 81604
download_size: 11882661071
dataset_size: 11996446097.0
- config_name: subset_70
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11960347379.0
num_examples: 81604
download_size: 11847789672
dataset_size: 11960347379.0
- config_name: subset_71
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12013965843.0
num_examples: 81604
download_size: 11898275701
dataset_size: 12013965843.0
- config_name: subset_72
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11980221143.0
num_examples: 81604
download_size: 11866352969
dataset_size: 11980221143.0
- config_name: subset_73
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11932213658.0
num_examples: 81604
download_size: 11818231915
dataset_size: 11932213658.0
- config_name: subset_74
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11998916031.0
num_examples: 81604
download_size: 11884713138
dataset_size: 11998916031.0
- config_name: subset_75
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11954754232.0
num_examples: 81604
download_size: 11839406033
dataset_size: 11954754232.0
- config_name: subset_76
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12003171710.0
num_examples: 81604
download_size: 11886896374
dataset_size: 12003171710.0
- config_name: subset_77
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12003591234.0
num_examples: 81762
download_size: 11890769206
dataset_size: 12003591234.0
- config_name: subset_78
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12045871441.0
num_examples: 81762
download_size: 11930968474
dataset_size: 12045871441.0
- config_name: subset_79
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11985061582.0
num_examples: 81762
download_size: 11871585994
dataset_size: 11985061582.0
- config_name: subset_80
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12003173683.0
num_examples: 81762
download_size: 11888906511
dataset_size: 12003173683.0
- config_name: subset_81
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11957319246.0
num_examples: 81762
download_size: 11842961808
dataset_size: 11957319246.0
- config_name: subset_82
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11961906864.0
num_examples: 81762
download_size: 11848437494
dataset_size: 11961906864.0
- config_name: subset_83
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11955432191.0
num_examples: 81762
download_size: 11842887032
dataset_size: 11955432191.0
- config_name: subset_84
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 1012537.0
num_examples: 8
download_size: 1014082
dataset_size: 1012537.0
- config_name: subset_85
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11999771523.0
num_examples: 81762
download_size: 11884551517
dataset_size: 11999771523.0
- config_name: subset_86
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12055273390.0
num_examples: 81762
download_size: 11939857521
dataset_size: 12055273390.0
- config_name: subset_87
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12027970582.0
num_examples: 81762
download_size: 11914449678
dataset_size: 12027970582.0
- config_name: subset_88
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12007135667.0
num_examples: 81762
download_size: 11892836400
dataset_size: 12007135667.0
- config_name: subset_89
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11987705401.0
num_examples: 81762
download_size: 11872517800
dataset_size: 11987705401.0
- config_name: subset_90
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11983514345.0
num_examples: 81762
download_size: 11869224576
dataset_size: 11983514345.0
- config_name: subset_91
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12007490987.0
num_examples: 81762
download_size: 11892713035
dataset_size: 12007490987.0
- config_name: subset_92
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12064841644.0
num_examples: 81762
download_size: 11950430751
dataset_size: 12064841644.0
- config_name: subset_93
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12048217596.0
num_examples: 81695
download_size: 11919393719
dataset_size: 12048217596.0
- config_name: subset_94
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12004431417.0
num_examples: 81695
download_size: 11877168463
dataset_size: 12004431417.0
- config_name: subset_95
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12022940507.0
num_examples: 81695
download_size: 11895831303
dataset_size: 12022940507.0
- config_name: subset_96
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11990185203.0
num_examples: 81695
download_size: 11862771545
dataset_size: 11990185203.0
- config_name: subset_97
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12066249873.0
num_examples: 81695
download_size: 11938118119
dataset_size: 12066249873.0
- config_name: subset_98
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11982787054.0
num_examples: 81979
download_size: 11868699966
dataset_size: 11982787054.0
- config_name: subset_99
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12048546668.0
num_examples: 81979
download_size: 11934517378
dataset_size: 12048546668.0
- config_name: subset_100
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11983746067.0
num_examples: 81979
download_size: 11869211119
dataset_size: 11983746067.0
- config_name: subset_101
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11975603504.0
num_examples: 81979
download_size: 11862747029
dataset_size: 11975603504.0
- config_name: subset_102
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12019909533.0
num_examples: 81979
download_size: 11905992555
dataset_size: 12019909533.0
- config_name: subset_103
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12028438504.0
num_examples: 81979
download_size: 11915108928
dataset_size: 12028438504.0
- config_name: subset_104
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12016924063.0
num_examples: 81979
download_size: 11901831094
dataset_size: 12016924063.0
- config_name: subset_106
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12032208461.0
num_examples: 81979
download_size: 11917640551
dataset_size: 12032208461.0
- config_name: subset_107
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12068731847.0
num_examples: 81979
download_size: 11954708613
dataset_size: 12068731847.0
- config_name: subset_108
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12038997422.0
num_examples: 81979
download_size: 11924000308
dataset_size: 12038997422.0
- config_name: subset_109
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12003132757.0
num_examples: 81979
download_size: 11888557920
dataset_size: 12003132757.0
- config_name: subset_110
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12020332280.0
num_examples: 81979
download_size: 11905790173
dataset_size: 12020332280.0
- config_name: subset_111
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12063662766.0
num_examples: 81979
download_size: 11949459113
dataset_size: 12063662766.0
- config_name: subset_112
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12032462213.0
num_examples: 81979
download_size: 11918361668
dataset_size: 12032462213.0
- config_name: subset_113
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12047815582.0
num_examples: 81979
download_size: 11932493558
dataset_size: 12047815582.0
- config_name: subset_114
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11983227127.0
num_examples: 81952
download_size: 11868924753
dataset_size: 11983227127.0
- config_name: subset_115
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12014562653.0
num_examples: 81952
download_size: 11899165029
dataset_size: 12014562653.0
- config_name: subset_116
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11990526863.0
num_examples: 81952
download_size: 11877264839
dataset_size: 11990526863.0
- config_name: subset_117
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12041526166.0
num_examples: 81952
download_size: 11925934917
dataset_size: 12041526166.0
- config_name: subset_118
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12023640570.0
num_examples: 81952
download_size: 11908183919
dataset_size: 12023640570.0
- config_name: subset_119
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11936278695.0
num_examples: 81952
download_size: 11821317391
dataset_size: 11936278695.0
- config_name: subset_120
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12000415655.0
num_examples: 81952
download_size: 11886804931
dataset_size: 12000415655.0
- config_name: subset_121
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 1425279.0
num_examples: 9
download_size: 1423721
dataset_size: 1425279.0
- config_name: subset_122
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12008683499.0
num_examples: 81952
download_size: 11893641175
dataset_size: 12008683499.0
- config_name: subset_123
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12027935264.0
num_examples: 81952
download_size: 11913034713
dataset_size: 12027935264.0
- config_name: subset_124
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12064544791.0
num_examples: 81952
download_size: 11950515030
dataset_size: 12064544791.0
- config_name: subset_125
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12000955279.0
num_examples: 81952
download_size: 11886598799
dataset_size: 12000955279.0
- config_name: subset_126
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11973907455.0
num_examples: 81952
download_size: 11860217861
dataset_size: 11973907455.0
- config_name: subset_127
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11986051901.0
num_examples: 81952
download_size: 11874256148
dataset_size: 11986051901.0
- config_name: subset_128
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12053574216.0
num_examples: 81952
download_size: 11939246084
dataset_size: 12053574216.0
- config_name: subset_129
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12027187873.0
num_examples: 81952
download_size: 11913333522
dataset_size: 12027187873.0
- config_name: subset_130
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11971640995.0
num_examples: 81741
download_size: 11858044214
dataset_size: 11971640995.0
- config_name: subset_131
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11958889898.0
num_examples: 81741
download_size: 11845676916
dataset_size: 11958889898.0
- config_name: subset_132
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11988251612.0
num_examples: 81741
download_size: 11874568757
dataset_size: 11988251612.0
- config_name: subset_133
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11967584023.0
num_examples: 81741
download_size: 11854988955
dataset_size: 11967584023.0
- config_name: subset_134
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12016915751.0
num_examples: 81741
download_size: 11902365351
dataset_size: 12016915751.0
- config_name: subset_135
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11964572785.0
num_examples: 81741
download_size: 11851658432
dataset_size: 11964572785.0
- config_name: subset_136
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11908099989.0
num_examples: 81741
download_size: 11795885264
dataset_size: 11908099989.0
- config_name: subset_137
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 1546156.0
num_examples: 9
download_size: 1543948
dataset_size: 1546156.0
- config_name: subset_138
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11991331753.0
num_examples: 81741
download_size: 11876525753
dataset_size: 11991331753.0
- config_name: subset_139
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12031286443.0
num_examples: 81741
download_size: 11915924175
dataset_size: 12031286443.0
- config_name: subset_140
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11916993205.0
num_examples: 81741
download_size: 11803105651
dataset_size: 11916993205.0
- config_name: subset_141
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11961506084.0
num_examples: 81741
download_size: 11847475435
dataset_size: 11961506084.0
- config_name: subset_142
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11984433864.0
num_examples: 81741
download_size: 11871103390
dataset_size: 11984433864.0
- config_name: subset_143
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11955971478.0
num_examples: 81741
download_size: 11843292567
dataset_size: 11955971478.0
- config_name: subset_144
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12006636934.0
num_examples: 81741
download_size: 11893269630
dataset_size: 12006636934.0
- config_name: subset_145
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11915913167.0
num_examples: 81741
download_size: 11803039015
dataset_size: 11915913167.0
- config_name: subset_146
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12047555951.0
num_examples: 81901
download_size: 11932278734
dataset_size: 12047555951.0
- config_name: subset_147
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11965164707.0
num_examples: 81901
download_size: 11851459743
dataset_size: 11965164707.0
- config_name: subset_148
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12033491266.0
num_examples: 81901
download_size: 11918004223
dataset_size: 12033491266.0
- config_name: subset_149
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12060383917.0
num_examples: 81901
download_size: 11945155815
dataset_size: 12060383917.0
- config_name: subset_150
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12038697018.0
num_examples: 81901
download_size: 11923803783
dataset_size: 12038697018.0
- config_name: subset_151
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12052223314.0
num_examples: 81901
download_size: 11937511709
dataset_size: 12052223314.0
- config_name: subset_152
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12028134782.0
num_examples: 81901
download_size: 11913400674
dataset_size: 12028134782.0
- config_name: subset_153
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 2787812.0
num_examples: 14
download_size: 2776851
dataset_size: 2787812.0
- config_name: subset_154
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12050964598.0
num_examples: 81901
download_size: 11937220095
dataset_size: 12050964598.0
- config_name: subset_155
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11991349766.0
num_examples: 81901
download_size: 11875844204
dataset_size: 11991349766.0
- config_name: subset_156
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12047716059.0
num_examples: 81901
download_size: 11933661170
dataset_size: 12047716059.0
- config_name: subset_157
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12061494511.0
num_examples: 81901
download_size: 11946583084
dataset_size: 12061494511.0
- config_name: subset_158
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12012854373.0
num_examples: 81901
download_size: 11898934310
dataset_size: 12012854373.0
- config_name: subset_159
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12007222147.0
num_examples: 81901
download_size: 11892759297
dataset_size: 12007222147.0
- config_name: subset_160
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12020275879.0
num_examples: 81901
download_size: 11906908956
dataset_size: 12020275879.0
- config_name: subset_161
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12053470194.0
num_examples: 81901
download_size: 11938701675
dataset_size: 12053470194.0
- config_name: subset_162
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12062555681.0
num_examples: 81963
download_size: 11946555623
dataset_size: 12062555681.0
- config_name: subset_163
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11980017806.0
num_examples: 81963
download_size: 11866897757
dataset_size: 11980017806.0
- config_name: subset_164
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12009108898.0
num_examples: 81963
download_size: 11895618431
dataset_size: 12009108898.0
- config_name: subset_165
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12064588797.0
num_examples: 81963
download_size: 11950749845
dataset_size: 12064588797.0
- config_name: subset_166
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12036686644.0
num_examples: 81963
download_size: 11921967566
dataset_size: 12036686644.0
- config_name: subset_167
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11980889346.0
num_examples: 81963
download_size: 11867179665
dataset_size: 11980889346.0
- config_name: subset_168
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12040643426.0
num_examples: 81963
download_size: 11926328924
dataset_size: 12040643426.0
- config_name: subset_169
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 1827473.0
num_examples: 13
download_size: 1816925
dataset_size: 1827473.0
- config_name: subset_170
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12023191860.0
num_examples: 81963
download_size: 11908287703
dataset_size: 12023191860.0
- config_name: subset_171
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12033055599.0
num_examples: 81963
download_size: 11918944861
dataset_size: 12033055599.0
- config_name: subset_172
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12021885567.0
num_examples: 81963
download_size: 11907385383
dataset_size: 12021885567.0
- config_name: subset_173
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12037742556.0
num_examples: 81963
download_size: 11924043502
dataset_size: 12037742556.0
- config_name: subset_174
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12009768657.0
num_examples: 81963
download_size: 11896233666
dataset_size: 12009768657.0
- config_name: subset_175
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12007325114.0
num_examples: 81963
download_size: 11892935610
dataset_size: 12007325114.0
- config_name: subset_176
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12054343520.0
num_examples: 81963
download_size: 11939165877
dataset_size: 12054343520.0
- config_name: subset_177
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 12056137434.0
num_examples: 81963
download_size: 11941110173
dataset_size: 12056137434.0
- config_name: subset_178
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11672090748.0
num_examples: 79280
download_size: 11561952817
dataset_size: 11672090748.0
- config_name: subset_179
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11628243674.0
num_examples: 79280
download_size: 11519098045
dataset_size: 11628243674.0
- config_name: subset_180
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11671922794.0
num_examples: 79280
download_size: 11562009587
dataset_size: 11671922794.0
- config_name: subset_181
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11612117435.0
num_examples: 79280
download_size: 11500768309
dataset_size: 11612117435.0
- config_name: subset_182
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11636557962.0
num_examples: 79280
download_size: 11527382825
dataset_size: 11636557962.0
- config_name: subset_183
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11564958421.0
num_examples: 79280
download_size: 11454997914
dataset_size: 11564958421.0
- config_name: subset_184
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11641437714.0
num_examples: 79280
download_size: 11531228193
dataset_size: 11641437714.0
- config_name: subset_185
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 1543746.0
num_examples: 11
download_size: 1535626
dataset_size: 1543746.0
- config_name: subset_186
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11642000166.0
num_examples: 79280
download_size: 11533563100
dataset_size: 11642000166.0
- config_name: subset_187
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11599685758.0
num_examples: 79280
download_size: 11489328462
dataset_size: 11599685758.0
- config_name: subset_188
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11635745430.0
num_examples: 79280
download_size: 11525131451
dataset_size: 11635745430.0
- config_name: subset_189
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11642377162.0
num_examples: 79280
download_size: 11531027306
dataset_size: 11642377162.0
- config_name: subset_190
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11639191967.0
num_examples: 79280
download_size: 11528586793
dataset_size: 11639191967.0
- config_name: subset_191
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11600558225.0
num_examples: 79280
download_size: 11489537551
dataset_size: 11600558225.0
- config_name: subset_192
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11648567535.0
num_examples: 79280
download_size: 11535666442
dataset_size: 11648567535.0
- config_name: subset_193
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11586691166.0
num_examples: 79280
download_size: 11477645494
dataset_size: 11586691166.0
- config_name: subset_194
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11760323556.0
num_examples: 80119
download_size: 11648131279
dataset_size: 11760323556.0
- config_name: subset_195
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11704748536.0
num_examples: 80119
download_size: 11593766537
dataset_size: 11704748536.0
- config_name: subset_196
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11751491321.0
num_examples: 80119
download_size: 11639014203
dataset_size: 11751491321.0
- config_name: subset_197
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11745664748.0
num_examples: 80119
download_size: 11634419766
dataset_size: 11745664748.0
- config_name: subset_198
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11732279517.0
num_examples: 80119
download_size: 11620907204
dataset_size: 11732279517.0
- config_name: subset_199
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11732778399.0
num_examples: 80119
download_size: 11622575608
dataset_size: 11732778399.0
- config_name: subset_200
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11777571703.0
num_examples: 80119
download_size: 11665522914
dataset_size: 11777571703.0
- config_name: subset_201
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 962696.0
num_examples: 6
download_size: 942592
dataset_size: 962696.0
- config_name: subset_202
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11814808192.0
num_examples: 80119
download_size: 11701917423
dataset_size: 11814808192.0
- config_name: subset_203
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11747918370.0
num_examples: 80119
download_size: 11636562996
dataset_size: 11747918370.0
- config_name: subset_204
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11745558470.0
num_examples: 80119
download_size: 11632789986
dataset_size: 11745558470.0
- config_name: subset_205
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11722101464.0
num_examples: 80119
download_size: 11611136014
dataset_size: 11722101464.0
- config_name: subset_206
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11717618829.0
num_examples: 80119
download_size: 11607061437
dataset_size: 11717618829.0
- config_name: subset_207
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11825075373.0
num_examples: 80119
download_size: 11712301579
dataset_size: 11825075373.0
- config_name: subset_208
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11774958249.0
num_examples: 80119
download_size: 11663907545
dataset_size: 11774958249.0
- config_name: subset_209
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11730840008.0
num_examples: 80119
download_size: 11619770280
dataset_size: 11730840008.0
- config_name: subset_210
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11803205947.0
num_examples: 80466
download_size: 11691011230
dataset_size: 11803205947.0
- config_name: subset_211
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11779770854.0
num_examples: 80466
download_size: 11667889234
dataset_size: 11779770854.0
- config_name: subset_212
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11826469528.0
num_examples: 80466
download_size: 11714161419
dataset_size: 11826469528.0
- config_name: subset_213
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11822441330.0
num_examples: 80466
download_size: 11710614473
dataset_size: 11822441330.0
- config_name: subset_214
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11811295365.0
num_examples: 80466
download_size: 11697956346
dataset_size: 11811295365.0
- config_name: subset_215
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11821414917.0
num_examples: 80466
download_size: 11708355923
dataset_size: 11821414917.0
- config_name: subset_216
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11759868029.0
num_examples: 80466
download_size: 11647442619
dataset_size: 11759868029.0
- config_name: subset_217
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 1816296.0
num_examples: 13
download_size: 1807232
dataset_size: 1816296.0
- config_name: subset_218
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11835945702.0
num_examples: 80466
download_size: 11722347751
dataset_size: 11835945702.0
- config_name: subset_219
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11769332926.0
num_examples: 80466
download_size: 11656218246
dataset_size: 11769332926.0
- config_name: subset_220
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11807021581.0
num_examples: 80466
download_size: 11693528639
dataset_size: 11807021581.0
- config_name: subset_221
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11782235045.0
num_examples: 80466
download_size: 11670254055
dataset_size: 11782235045.0
- config_name: subset_222
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11804528783.0
num_examples: 80466
download_size: 11691856544
dataset_size: 11804528783.0
- config_name: subset_223
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11805637192.0
num_examples: 80466
download_size: 11695041688
dataset_size: 11805637192.0
- config_name: subset_53
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11779553286.0
num_examples: 80466
download_size: 11667047339
dataset_size: 11779553286.0
- config_name: subset_105
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transcription/en_gpt3.5
dtype: string
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
splits:
- name: train
num_bytes: 11715069450.0
num_examples: 80466
download_size: 11603387637
dataset_size: 11715069450.0
configs:
- config_name: subset_0
data_files:
- split: train
path: subset_1.0/train-*
- config_name: subset_1
data_files:
- split: train
path: subset_1.1/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_1.10/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_1.11/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_1.14/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_1.15/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_1.2/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_1.3/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_1.4/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_1.5/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_1.6/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_1.7/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_1.8/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_1.9/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_10.0/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_10.1/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_10.10/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_10.11/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_10.12/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_10.13/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_10.14/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_10.15/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_10.2/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_10.3/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_10.4/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_10.5/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_10.6/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_10.7/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_10.8/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_10.9/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_11.0/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_11.1/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_11.10/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_11.11/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_11.12/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_11.13/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_11.14/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_11.15/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_11.2/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_11.3/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_11.4/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_11.5/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_11.6/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_11.7/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_11.8/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_11.9/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_12.0/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_12.1/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_12.10/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_12.11/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_12.12/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_12.13/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_12.14/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_12.2/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_12.3/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_12.4/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_12.5/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_12.6/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_12.7/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_12.8/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_12.9/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_13.0/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_13.1/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_13.10/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_13.11/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_13.12/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_13.13/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_13.14/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_13.2/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_13.3/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_13.4/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_13.5/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_13.6/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_13.7/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_13.8/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_13.9/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_14.0/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_14.1/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_14.10/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_14.11/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_14.12/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_14.13/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_14.14/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_14.15/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_14.2/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_14.3/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_14.4/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_14.5/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_14.6/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_14.7/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_14.8/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_14.9/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_15.10/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_15.11/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_15.12/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_15.13/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_15.14/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_2.0/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_2.1/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_2.10/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_2.11/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_2.12/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_2.13/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_2.14/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_2.2/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_2.3/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_2.4/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_2.5/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_2.6/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_2.7/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_2.8/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_2.9/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_3.0/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_3.1/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_3.10/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_3.11/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_3.12/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_3.13/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_3.14/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_3.15/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_3.2/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_3.3/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_3.4/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_3.5/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_3.6/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_3.7/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_3.8/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_3.9/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_4.0/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_4.1/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_4.10/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_4.11/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_4.12/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_4.13/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_4.14/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_4.15/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_4.2/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_4.3/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_4.4/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_4.5/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_4.6/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_4.7/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_4.8/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_4.9/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_5.0/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_5.1/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_5.10/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_5.11/train-*
- config_name: subset_150
data_files:
- split: train
path: subset_5.12/train-*
- config_name: subset_151
data_files:
- split: train
path: subset_5.13/train-*
- config_name: subset_152
data_files:
- split: train
path: subset_5.14/train-*
- config_name: subset_153
data_files:
- split: train
path: subset_5.15/train-*
- config_name: subset_154
data_files:
- split: train
path: subset_5.2/train-*
- config_name: subset_155
data_files:
- split: train
path: subset_5.3/train-*
- config_name: subset_156
data_files:
- split: train
path: subset_5.4/train-*
- config_name: subset_157
data_files:
- split: train
path: subset_5.5/train-*
- config_name: subset_158
data_files:
- split: train
path: subset_5.6/train-*
- config_name: subset_159
data_files:
- split: train
path: subset_5.7/train-*
- config_name: subset_160
data_files:
- split: train
path: subset_5.8/train-*
- config_name: subset_161
data_files:
- split: train
path: subset_5.9/train-*
- config_name: subset_162
data_files:
- split: train
path: subset_6.0/train-*
- config_name: subset_163
data_files:
- split: train
path: subset_6.1/train-*
- config_name: subset_164
data_files:
- split: train
path: subset_6.10/train-*
- config_name: subset_165
data_files:
- split: train
path: subset_6.11/train-*
- config_name: subset_166
data_files:
- split: train
path: subset_6.12/train-*
- config_name: subset_167
data_files:
- split: train
path: subset_6.13/train-*
- config_name: subset_168
data_files:
- split: train
path: subset_6.14/train-*
- config_name: subset_169
data_files:
- split: train
path: subset_6.15/train-*
- config_name: subset_170
data_files:
- split: train
path: subset_6.2/train-*
- config_name: subset_171
data_files:
- split: train
path: subset_6.3/train-*
- config_name: subset_172
data_files:
- split: train
path: subset_6.4/train-*
- config_name: subset_173
data_files:
- split: train
path: subset_6.5/train-*
- config_name: subset_174
data_files:
- split: train
path: subset_6.6/train-*
- config_name: subset_175
data_files:
- split: train
path: subset_6.7/train-*
- config_name: subset_176
data_files:
- split: train
path: subset_6.8/train-*
- config_name: subset_177
data_files:
- split: train
path: subset_6.9/train-*
- config_name: subset_178
data_files:
- split: train
path: subset_7.0/train-*
- config_name: subset_179
data_files:
- split: train
path: subset_7.1/train-*
- config_name: subset_180
data_files:
- split: train
path: subset_7.10/train-*
- config_name: subset_181
data_files:
- split: train
path: subset_7.11/train-*
- config_name: subset_182
data_files:
- split: train
path: subset_7.12/train-*
- config_name: subset_183
data_files:
- split: train
path: subset_7.13/train-*
- config_name: subset_184
data_files:
- split: train
path: subset_7.14/train-*
- config_name: subset_185
data_files:
- split: train
path: subset_7.15/train-*
- config_name: subset_186
data_files:
- split: train
path: subset_7.2/train-*
- config_name: subset_187
data_files:
- split: train
path: subset_7.3/train-*
- config_name: subset_188
data_files:
- split: train
path: subset_7.4/train-*
- config_name: subset_189
data_files:
- split: train
path: subset_7.5/train-*
- config_name: subset_190
data_files:
- split: train
path: subset_7.6/train-*
- config_name: subset_191
data_files:
- split: train
path: subset_7.7/train-*
- config_name: subset_192
data_files:
- split: train
path: subset_7.8/train-*
- config_name: subset_193
data_files:
- split: train
path: subset_7.9/train-*
- config_name: subset_194
data_files:
- split: train
path: subset_8.0/train-*
- config_name: subset_195
data_files:
- split: train
path: subset_8.1/train-*
- config_name: subset_196
data_files:
- split: train
path: subset_8.10/train-*
- config_name: subset_197
data_files:
- split: train
path: subset_8.11/train-*
- config_name: subset_198
data_files:
- split: train
path: subset_8.12/train-*
- config_name: subset_199
data_files:
- split: train
path: subset_8.13/train-*
- config_name: subset_200
data_files:
- split: train
path: subset_8.14/train-*
- config_name: subset_201
data_files:
- split: train
path: subset_8.15/train-*
- config_name: subset_202
data_files:
- split: train
path: subset_8.2/train-*
- config_name: subset_203
data_files:
- split: train
path: subset_8.3/train-*
- config_name: subset_204
data_files:
- split: train
path: subset_8.4/train-*
- config_name: subset_205
data_files:
- split: train
path: subset_8.5/train-*
- config_name: subset_206
data_files:
- split: train
path: subset_8.6/train-*
- config_name: subset_207
data_files:
- split: train
path: subset_8.7/train-*
- config_name: subset_208
data_files:
- split: train
path: subset_8.8/train-*
- config_name: subset_209
data_files:
- split: train
path: subset_8.9/train-*
- config_name: subset_210
data_files:
- split: train
path: subset_9.0/train-*
- config_name: subset_211
data_files:
- split: train
path: subset_9.1/train-*
- config_name: subset_212
data_files:
- split: train
path: subset_9.10/train-*
- config_name: subset_213
data_files:
- split: train
path: subset_9.11/train-*
- config_name: subset_214
data_files:
- split: train
path: subset_9.12/train-*
- config_name: subset_215
data_files:
- split: train
path: subset_9.13/train-*
- config_name: subset_216
data_files:
- split: train
path: subset_9.14/train-*
- config_name: subset_217
data_files:
- split: train
path: subset_9.15/train-*
- config_name: subset_218
data_files:
- split: train
path: subset_9.2/train-*
- config_name: subset_219
data_files:
- split: train
path: subset_9.3/train-*
- config_name: subset_220
data_files:
- split: train
path: subset_9.4/train-*
- config_name: subset_221
data_files:
- split: train
path: subset_9.5/train-*
- config_name: subset_222
data_files:
- split: train
path: subset_9.6/train-*
- config_name: subset_223
data_files:
- split: train
path: subset_9.7/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_9.8/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_9.9/train-*
---
|
criteo/CriteoPrivateAd | criteo | "2025-02-26T15:18:35Z" | 11,943 | 2 | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"license:cc-by-sa-4.0",
"size_categories:10M<n<100M",
"arxiv:2502.12103",
"arxiv:2201.13123",
"region:us",
"criteo",
"advertising"
] | [
"tabular-classification",
"tabular-regression"
] | "2025-02-18T14:35:40Z" | ---
license: cc-by-sa-4.0
size_categories:
- 10M<n<100M
task_categories:
- tabular-classification
- tabular-regression
tags:
- criteo
- advertising
---
# Dataset Documentation
## Private Bidding Optimisation {#private-conversion-optimisation}
The advertising industry lacks a common benchmark to assess the privacy
/ utility trade-off in private advertising systems. To fill this gap, we
are open-sourcing CriteoPrivateAd, the largest real-world anonymised
bidding dataset, in terms of number of features. This dataset enables
engineers and researchers to:
- assess the impact of removing cross-domain user signals,
highlighting the effects of third-party cookie deprecation;
- design and test private bidding optimisation approaches using
contextual signals and user features;
- evaluate the relevancy of answers provided by aggregation APIs for
bidding model learning.
## Summary
This dataset is released by Criteo to foster research and industrial
innovation on privacy-preserving machine learning applied to a major
advertising use-case, namely bid optimisation under user signal loss /
obfuscation.
This use-case is inspired by challenges both browser vendors and AdTech
companies are facing due to third-party cookie deprecation, such as
ensuring a viable cookie-less advertising business via a pragmatic
performance / privacy trade-off. In particular, we expect to see
improvements to the Google Chrome Privacy Sandbox and Microsoft Ad Selection
APIs via offline benchmarks based on this dataset.
The dataset contains an anonymised log aiming to mimic production
performance of AdTech bidding engines, so that offline results based on
this dataset could be taken as ground truth to improve online
advertising performance under privacy constraints. Features are grouped
into several groups depending on their nature, envisioned privacy
constraints and availability at inference time.
Based on this dataset, the intended objective is to implement privacy
constraints (e.g. by aggregating labels or by adding differential
privacy to features and/or labels) and then learn click and conversion
(e.g. sales) prediction models.
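As a toy illustration of one such constraint, label aggregation with differential privacy can be simulated by adding Laplace noise to per-group label counts. This is a sketch under simple assumptions (unit sensitivity, one contribution per user per group); the function and parameter names are ours, not those of any browser API:

```python
import numpy as np

def dp_aggregate(labels, group_ids, epsilon=1.0, sensitivity=1.0, seed=0):
    """Aggregate binary labels per group and add Laplace noise for
    epsilon-differential privacy. Assumes each record contributes at most
    `sensitivity` to a single group."""
    rng = np.random.default_rng(seed)
    counts = {}
    for g, y in zip(group_ids, labels):
        counts[g] = counts.get(g, 0.0) + y
    scale = sensitivity / epsilon  # Laplace mechanism noise scale
    return {g: c + rng.laplace(0.0, scale) for g, c in counts.items()}
```

A model can then be trained on the noisy aggregates instead of the raw event-level labels, trading label fidelity for a formal privacy guarantee.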
The associated paper is available [here](https://arxiv.org/abs/2502.12103).
As a leading AdTech company that drives commerce outcomes for media
owners and marketers, Criteo is committed to evaluating proposals that
might affect the way we will perform attribution, reporting and campaign
optimisation in the future. Criteo has already participated in testing
and providing feedback on browser proposals such as the Privacy Sandbox
one; see all our [Medium articles](https://techblog.criteo.com). Back in 2021, we also
organised a public challenge aiming to assess bidding performance when
learning on aggregated data: our learnings are available [here](https://arxiv.org/abs/2201.13123).
## Dataset Description
A precise description of the dataset and each column is available in [the
companion paper](https://arxiv.org/abs/2502.12103).
This dataset is an anonymised sample of 100M impressions drawn from 30 days of Criteo
live data retrieved from third-party cookie traffic on Chrome. Each line corresponds to one impression (a banner)
that was displayed to a user. It is partitioned by day (`day_int`) to facilitate exploration, model seeding and the train/validation/test split.
For each impression, we are providing:
- campaign x publisher x (user x day) granularity with respective ids, to match Chrome Privacy Sandbox scenarios and both
  display-level and user-level privacy.
- 4 labels (click, click leading to a landing on an advertiser
website, click leading to a visit on an advertiser website -
i.e. landing followed by one advertiser event, number of sales
attributed to the clicked display).
- more than 100 features grouped in 5 buckets with respect to their
logging and inference constraints in Protected Audience API from
Chrome Privacy Sandbox (note that these buckets are generic enough
to cover other private advertising frameworks as we are mainly
providing a split between ad campaign features, single-domain &
cross-domain user features, and contextual features) :
- Features available in the key-value server with 12-bit logging
constraint (i.e. derived from current version of modelingSignals
and standing for single-domain user features).
- Features available in the key-value server with no logging
constraint (i.e. derived from Interest Group name / renderURL).
- Features available in browser with 12-bit constraint
(i.e. cross-domain features available in generateBid).
- Features from contextual call with no logging constraint
(i.e. contextual features).
- Features not available (i.e. cross-device and cross-domain
ones).
- `day_int` enabling (1) splitting the log into training, validation
and testing sets; (2) performing relevant model seeding.
- Information about conversion delay to simulate the way Privacy Sandbox APIs are working.
- `time_between_request_timestamp_and_post_display_event` (column name
in clear): time delta (in minutes) between the request timestamp and the
click or sale event. All displays are considered starting the day of
the event at 00:00 to avoid providing complete timelines.
- We include a display order from 1 to K for displays shown to the same
  user on the same day.
The displays-per-user histograms can be deduced from `event_per_user_contribution.csv`. They are useful for building importance sampling ratios and user-level DP, as detailed in the companion paper.
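The day-based partitioning described above can be sketched as a minimal split helper. This is plain Python over dict-like records so it applies to any loader; with the `datasets` library the same filter would be `ds.filter(lambda r: r["day_int"] <= 25)`. The day boundaries follow the Baselines section below:

```python
def day_split(rows, last_train_day=25):
    """Split records on the `day_int` column: days 1..last_train_day go to
    training, later days to validation. `rows` is any iterable of
    dict-like records exposing a `day_int` field."""
    train = [r for r in rows if r["day_int"] <= last_train_day]
    valid = [r for r in rows if r["day_int"] > last_train_day]
    return train, valid
```

The same helper also supports model seeding experiments, by varying `last_train_day` to shrink or grow the training window.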
## Metrics
The metrics best suited to the click and conversion estimation problems
are:
- the log-likelihood (LLH), and preferably a rescaled version named LLH-CompVN, defined
as the relative log-likelihood uplift compared to the naive model
always predicting the average label in the training dataset;
- calibration, defined as the ratio between the sum of the predictions
and the sum of the validation labels. It must be close to 1 for a
bidding application;
We would like to point out that conventional classification measures
such as area under the curve (AUC) are less relevant for comparing
auction models.
The click-through rate is higher than the one encountered in real-world
advertising systems on the open internet. To design realistic bidding
applications, one must use a weighted loss for validation. We refer
interested readers to the [associated companion paper](https://arxiv.org/abs/2502.12103) for more details.
## Baselines
The training period is fixed to days 1 to 25 and the validation period to days 26 to 30. The chosen metric is the LLH-CompVN with weighting as defined above. The Sales | Display probability is the product of the Landed Click | Display and the Sales | Landed Click probabilities.
| Task/CTR | 0.1% | 0.5% | 1% |
|-------------------------|-------|-------|-------|
| Landed Click \| Display | 0.170 | 0.186 | 0.234 |
| Sales \| Landed Click | 0.218 | 0.218 | 0.218 |
| Sales \| Display | 0.171 | 0.187 | 0.237 |
Note that our baseline results might be difficult to achieve because of the anonymisation of the dataset.
## License
The data is released under the CC BY-SA 4.0 license. You are free to
Share and Adapt this data provided that you respect the Attribution and
ShareAlike conditions. Please read the full license carefully before
using the data.
## Citation
If you use the dataset in your research please cite it using the
following Bibtex excerpt:
```bibtex
@misc{sebbar2025criteoprivateadrealworldbiddingdataset,
title={CriteoPrivateAd: A Real-World Bidding Dataset to Design Private Advertising Systems},
author={Mehdi Sebbar and Corentin Odic and Mathieu Léchine and Aloïs Bissuel and
Nicolas Chrysanthos and Anthony D'Amato and Alexandre Gilotte and
Fabian Höring and Sarah Nogueira and Maxime Vono},
year={2025},
eprint={2502.12103},
archivePrefix={arXiv},
primaryClass={cs.CR},
url={https://arxiv.org/abs/2502.12103},
}
```
## Acknowledgment
We would like to thank:
- Google Chrome Privacy Sandbox team, especially Charlie Harrison,
for feedbacks on the usefulness of this dataset.
- W3C PATCG group, notably for their public data requests to foster
work on the future of attribution and reporting.
- Criteo stakeholders who took part in this dataset release: Anthony
D'Amato, Mathieu Léchine, Mehdi Sebbar, Corentin Odic, Maxime Vono,
Camille Jandot, Fatma Moalla, Nicolas Chrysanthos, Romain Lerallut,
Alexandre Gilotte, Aloïs Bissuel, Lionel Basdevant, Henry Jantet. |
huggingface/release-assets | huggingface | "2024-09-26T12:48:50Z" | 11,936 | 1 | [
"license:mit",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-09-25T10:32:15Z" | ---
license: mit
---
|
distil-whisper/librispeech_long | distil-whisper | "2023-11-02T14:22:54Z" | 11,930 | 2 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-11-02T14:22:51Z" | ---
dataset_info:
config_name: clean
features:
- name: audio
dtype: audio
splits:
- name: validation
num_bytes: 1998609.0
num_examples: 1
download_size: 1984721
dataset_size: 1998609.0
configs:
- config_name: clean
data_files:
- split: validation
path: clean/validation-*
---
# Dataset Card for "librispeech_long"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ericphann/video-game-super-resolution | ericphann | "2025-03-14T13:36:19Z" | 11,925 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2025-03-09T02:59:54Z" | ---
license: apache-2.0
---
|
GEM/xwikis | GEM | "2023-02-22T13:05:19Z" | 11,911 | 3 | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:unknown",
"multilinguality:unknown",
"source_datasets:original",
"language:de",
"language:en",
"language:fr",
"language:cs",
"license:cc-by-sa-4.0",
"arxiv:2202.09583",
"region:us"
] | [
"summarization"
] | "2022-03-14T15:31:48Z" | ---
annotations_creators:
- found
language_creators:
- unknown
language:
- de
- en
- fr
- cs
license:
- cc-by-sa-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: xwikis
---
# Dataset Card for GEM/xwikis
## Dataset Description
- **Homepage:** https://github.com/lauhaide/clads
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2202.09583
- **Leaderboard:** N/A
- **Point of Contact:** Laura Perez-Beltrachini
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xwikis).
### Dataset Summary
The XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/xwikis')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/xwikis).
#### website
[Github](https://github.com/lauhaide/clads)
#### paper
https://arxiv.org/abs/2202.09583
#### authors
Laura Perez-Beltrachini (University of Edinburgh)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/lauhaide/clads)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://arxiv.org/abs/2202.09583
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@InProceedings{clads-emnlp,
author = "Laura Perez-Beltrachini and Mirella Lapata",
title = "Models and Datasets for Cross-Lingual Summarisation",
booktitle = "Proceedings of The 2021 Conference on Empirical Methods in Natural Language Processing ",
year = "2021",
address = "Punta Cana, Dominican Republic",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Laura Perez-Beltrachini
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
[email protected]
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`German`, `English`, `French`, `Czech`, `Chinese`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Cross-lingual and Multi-lingual single long input document abstractive summarisation.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Entity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)
### Dataset Structure
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
For each language pair and direction there exists a train/valid/test split.
The test split is a sample of size 7k from the intersection of titles existing in the four languages (cs, fr, en, de).
Train/valid are randomly split.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
- identification of entity salient information
- translation
- multi-linguality
- cross-lingual transfer, zero-shot, few-shot
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
ROUGE-1/2/L
## Dataset Curation
### Original Curation
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
other
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
found
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The input documents have section structure information.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by another rater
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Bilingual annotators assessed the content overlap of source document and target summaries.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`public domain`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
|
asahi417/seamless-align-enA-zhA.speaker-embedding.hubert-xl | asahi417 | "2024-06-16T12:04:50Z" | 11,899 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-12T09:01:20Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9475358331
num_examples: 1962
download_size: 9504134241
dataset_size: 9475358331
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9052265145
num_examples: 2031
download_size: 9081911906
dataset_size: 9052265145
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8818322637
num_examples: 1891
download_size: 8846394382
dataset_size: 8818322637
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8694449499
num_examples: 1885
download_size: 8722422676
dataset_size: 8694449499
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8490046178
num_examples: 1863
download_size: 8516889176
dataset_size: 8490046178
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8647106885
num_examples: 1861
download_size: 8674999999
dataset_size: 8647106885
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8534733566
num_examples: 1875
download_size: 8562882733
dataset_size: 8534733566
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8753738114
num_examples: 1871
download_size: 8781689050
dataset_size: 8753738114
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8286741426
num_examples: 1865
download_size: 8313205426
dataset_size: 8286741426
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8325399814
num_examples: 1838
download_size: 8352141658
dataset_size: 8325399814
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8681207053
num_examples: 1860
download_size: 8709094371
dataset_size: 8681207053
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8660101738
num_examples: 1866
download_size: 8687993587
dataset_size: 8660101738
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8694663339
num_examples: 1994
download_size: 8723170105
dataset_size: 8694663339
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8360228006
num_examples: 1843
download_size: 8386976872
dataset_size: 8360228006
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8615987524
num_examples: 1845
download_size: 8643384072
dataset_size: 8615987524
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8446424009
num_examples: 1844
download_size: 8472296093
dataset_size: 8446424009
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8383634114
num_examples: 1839
download_size: 8410374433
dataset_size: 8383634114
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8614573817
num_examples: 1851
download_size: 8638973146
dataset_size: 8614573817
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8556998085
num_examples: 1821
download_size: 8584222337
dataset_size: 8556998085
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8539024391
num_examples: 1837
download_size: 8566368357
dataset_size: 8539024391
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8611832701
num_examples: 1854
download_size: 8638047288
dataset_size: 8611832701
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8359784999
num_examples: 1814
download_size: 8386285327
dataset_size: 8359784999
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8444813376
num_examples: 1823
download_size: 8468814399
dataset_size: 8444813376
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8899375434
num_examples: 2034
download_size: 8927972812
dataset_size: 8899375434
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8362871341
num_examples: 1835
download_size: 8389607062
dataset_size: 8362871341
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8589859129
num_examples: 1832
download_size: 8617131569
dataset_size: 8589859129
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8464412732
num_examples: 1824
download_size: 8491036692
dataset_size: 8464412732
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8510916250
num_examples: 1800
download_size: 8534899452
dataset_size: 8510916250
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8670085496
num_examples: 1830
download_size: 8697299075
dataset_size: 8670085496
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8664922802
num_examples: 1858
download_size: 8692184470
dataset_size: 8664922802
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8867646043
num_examples: 1888
download_size: 8895593882
dataset_size: 8867646043
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8502961354
num_examples: 1833
download_size: 8530282215
dataset_size: 8502961354
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8763542534
num_examples: 1835
download_size: 8790716937
dataset_size: 8763542534
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8871775745
num_examples: 1885
download_size: 8899774981
dataset_size: 8871775745
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8866596251
num_examples: 2021
download_size: 8895083757
dataset_size: 8866596251
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8611195204
num_examples: 1828
download_size: 8638413770
dataset_size: 8611195204
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8576540366
num_examples: 1813
download_size: 8603653731
dataset_size: 8576540366
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8825468427
num_examples: 1864
download_size: 8851947976
dataset_size: 8825468427
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8643703114
num_examples: 1844
download_size: 8669882835
dataset_size: 8643703114
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8497985387
num_examples: 1826
download_size: 8524599022
dataset_size: 8497985387
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8779961281
num_examples: 1853
download_size: 8807683690
dataset_size: 8779961281
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8735451358
num_examples: 1881
download_size: 8763418066
dataset_size: 8735451358
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8496628389
num_examples: 1837
download_size: 8523294536
dataset_size: 8496628389
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8805700503
num_examples: 1869
download_size: 8833660035
dataset_size: 8805700503
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8667175042
num_examples: 1830
download_size: 8694399902
dataset_size: 8667175042
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8767553703
num_examples: 2012
download_size: 8796172224
dataset_size: 8767553703
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8619038990
num_examples: 1815
download_size: 8646191994
dataset_size: 8619038990
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8482704610
num_examples: 1814
download_size: 8509263477
dataset_size: 8482704610
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8714363668
num_examples: 1851
download_size: 8739273599
dataset_size: 8714363668
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8473945886
num_examples: 1792
download_size: 8500357397
dataset_size: 8473945886
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8690865457
num_examples: 1856
download_size: 8718776996
dataset_size: 8690865457
- config_name: subset_145
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8752834592
num_examples: 1850
download_size: 8778357285
dataset_size: 8752834592
- config_name: subset_146
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8847402860
num_examples: 1847
download_size: 8874983766
dataset_size: 8847402860
- config_name: subset_147
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8687503715
num_examples: 1851
download_size: 8715248015
dataset_size: 8687503715
- config_name: subset_148
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8692581322
num_examples: 1848
download_size: 8720249389
dataset_size: 8692581322
- config_name: subset_149
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8651457862
num_examples: 1869
download_size: 8679402957
dataset_size: 8651457862
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8728129529
num_examples: 2010
download_size: 8756714762
dataset_size: 8728129529
- config_name: subset_150
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8991506685
num_examples: 1866
download_size: 9018018483
dataset_size: 8991506685
- config_name: subset_151
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8789789887
num_examples: 1862
download_size: 8816336634
dataset_size: 8789789887
- config_name: subset_152
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8629953911
num_examples: 1825
download_size: 8657143451
dataset_size: 8629953911
- config_name: subset_153
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8914512879
num_examples: 1859
download_size: 8942145755
dataset_size: 8914512879
- config_name: subset_154
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8867684934
num_examples: 1862
download_size: 8895459177
dataset_size: 8867684934
- config_name: subset_155
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8705118023
num_examples: 1827
download_size: 8730928615
dataset_size: 8705118023
- config_name: subset_156
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8710816462
num_examples: 1834
download_size: 8737994414
dataset_size: 8710816462
- config_name: subset_157
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8927102727
num_examples: 1861
download_size: 8954718940
dataset_size: 8927102727
- config_name: subset_158
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8588989210
num_examples: 1783
download_size: 8615987975
dataset_size: 8588989210
- config_name: subset_159
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7922920167
num_examples: 1654
download_size: 7945605098
dataset_size: 7922920167
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8310633218
num_examples: 1974
download_size: 8337943866
dataset_size: 8310633218
- config_name: subset_160
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8828202715
num_examples: 1841
download_size: 8854872651
dataset_size: 8828202715
- config_name: subset_161
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8682588671
num_examples: 1838
download_size: 8709901755
dataset_size: 8682588671
- config_name: subset_162
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8758680851
num_examples: 1847
download_size: 8786270340
dataset_size: 8758680851
- config_name: subset_163
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8869078668
num_examples: 1863
download_size: 8893729843
dataset_size: 8869078668
- config_name: subset_164
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8905436759
num_examples: 1840
download_size: 8932703540
dataset_size: 8905436759
- config_name: subset_165
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8720596618
num_examples: 1815
download_size: 8747653037
dataset_size: 8720596618
- config_name: subset_166
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8828168886
num_examples: 1865
download_size: 8856037284
dataset_size: 8828168886
- config_name: subset_167
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8746829806
num_examples: 1794
download_size: 8773777521
dataset_size: 8746829806
- config_name: subset_168
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9007693805
num_examples: 1871
download_size: 9036081249
dataset_size: 9007693805
- config_name: subset_169
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8856887265
num_examples: 1845
download_size: 8884385250
dataset_size: 8856887265
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8355660396
num_examples: 2005
download_size: 8382470836
dataset_size: 8355660396
- config_name: subset_170
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9022215695
num_examples: 1877
download_size: 9050569892
dataset_size: 9022215695
- config_name: subset_171
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8961008358
num_examples: 1863
download_size: 8988688450
dataset_size: 8961008358
- config_name: subset_172
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9003346212
num_examples: 1841
download_size: 9031649491
dataset_size: 9003346212
- config_name: subset_173
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8769456476
num_examples: 1846
download_size: 8796914362
dataset_size: 8769456476
- config_name: subset_174
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8786262179
num_examples: 1833
download_size: 8809674639
dataset_size: 8786262179
- config_name: subset_175
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8828852577
num_examples: 1830
download_size: 8855927492
dataset_size: 8828852577
- config_name: subset_176
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8744750709
num_examples: 1809
download_size: 8770082253
dataset_size: 8744750709
- config_name: subset_177
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8724897822
num_examples: 1841
download_size: 8752309117
dataset_size: 8724897822
- config_name: subset_178
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9036485977
num_examples: 1876
download_size: 9061131354
dataset_size: 9036485977
- config_name: subset_179
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9012776164
num_examples: 1863
download_size: 9041223308
dataset_size: 9012776164
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8135435228
num_examples: 1933
download_size: 8162742157
dataset_size: 8135435228
- config_name: subset_180
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9112317856
num_examples: 1898
download_size: 9140872991
dataset_size: 9112317856
- config_name: subset_181
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8977973491
num_examples: 1857
download_size: 9005672498
dataset_size: 8977973491
- config_name: subset_182
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9029541417
num_examples: 1857
download_size: 9055403492
dataset_size: 9029541417
- config_name: subset_183
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8937958358
num_examples: 1835
download_size: 8963926731
dataset_size: 8937958358
- config_name: subset_184
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8892712298
num_examples: 1821
download_size: 8917174851
dataset_size: 8892712298
- config_name: subset_185
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8794741421
num_examples: 1821
download_size: 8821786783
dataset_size: 8794741421
- config_name: subset_186
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8896567690
num_examples: 1847
download_size: 8924064224
dataset_size: 8896567690
- config_name: subset_187
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8819419197
num_examples: 1828
download_size: 8846417894
dataset_size: 8819419197
- config_name: subset_188
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8788158341
num_examples: 1837
download_size: 8813559843
dataset_size: 8788158341
- config_name: subset_189
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9061567837
num_examples: 1875
download_size: 9089485222
dataset_size: 9061567837
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8378675532
num_examples: 1983
download_size: 8403564755
dataset_size: 8378675532
- config_name: subset_190
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8924683797
num_examples: 1856
download_size: 8952409416
dataset_size: 8924683797
- config_name: subset_191
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9133946536
num_examples: 1839
download_size: 9162193046
dataset_size: 9133946536
- config_name: subset_192
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9101174444
num_examples: 1851
download_size: 9129419826
dataset_size: 9101174444
- config_name: subset_193
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8884454974
num_examples: 1852
download_size: 8912112700
dataset_size: 8884454974
- config_name: subset_194
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8857398226
num_examples: 1828
download_size: 8884439171
dataset_size: 8857398226
- config_name: subset_195
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8922832984
num_examples: 1843
download_size: 8950116208
dataset_size: 8922832984
- config_name: subset_196
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9061039389
num_examples: 1866
download_size: 9089473215
dataset_size: 9061039389
- config_name: subset_197
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8970053913
num_examples: 1883
download_size: 8997972157
dataset_size: 8970053913
- config_name: subset_198
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9000493470
num_examples: 1857
download_size: 9028827236
dataset_size: 9000493470
- config_name: subset_199
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8933568246
num_examples: 1849
download_size: 8961019935
dataset_size: 8933568246
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9885890256
num_examples: 2052
download_size: 9912654391
dataset_size: 9885890256
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8315007920
num_examples: 1959
download_size: 8342174701
dataset_size: 8315007920
- config_name: subset_200
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9024971513
num_examples: 1866
download_size: 9053340556
dataset_size: 9024971513
- config_name: subset_201
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8677502163
num_examples: 1795
download_size: 8704481674
dataset_size: 8677502163
- config_name: subset_202
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9337885526
num_examples: 1885
download_size: 9366246045
dataset_size: 9337885526
- config_name: subset_203
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9031663525
num_examples: 1866
download_size: 9060044558
dataset_size: 9031663525
- config_name: subset_204
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8897418049
num_examples: 1844
download_size: 8924830713
dataset_size: 8897418049
- config_name: subset_205
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9093683290
num_examples: 1853
download_size: 9121998582
dataset_size: 9093683290
- config_name: subset_206
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8978602151
num_examples: 1838
download_size: 9005756379
dataset_size: 8978602151
- config_name: subset_207
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8831165392
num_examples: 1853
download_size: 8857734965
dataset_size: 8831165392
- config_name: subset_208
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9000418573
num_examples: 1818
download_size: 9028589709
dataset_size: 9000418573
- config_name: subset_209
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8723511270
num_examples: 1804
download_size: 8750549225
dataset_size: 8723511270
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8059056980
num_examples: 1934
download_size: 8085539765
dataset_size: 8059056980
- config_name: subset_210
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9096025175
num_examples: 1855
download_size: 9123260398
dataset_size: 9096025175
- config_name: subset_211
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9031934072
num_examples: 1854
download_size: 9060243751
dataset_size: 9031934072
- config_name: subset_212
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9061133197
num_examples: 1823
download_size: 9089334966
dataset_size: 9061133197
- config_name: subset_213
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8953498392
num_examples: 1822
download_size: 8980485266
dataset_size: 8953498392
- config_name: subset_214
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8877870669
num_examples: 1853
download_size: 8902908861
dataset_size: 8877870669
- config_name: subset_215
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8949187009
num_examples: 1859
download_size: 8976806725
dataset_size: 8949187009
- config_name: subset_216
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8923212072
num_examples: 1842
download_size: 8950552437
dataset_size: 8923212072
- config_name: subset_217
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8947443772
num_examples: 1846
download_size: 8973363421
dataset_size: 8947443772
- config_name: subset_218
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9058685594
num_examples: 1843
download_size: 9084894473
dataset_size: 9058685594
- config_name: subset_219
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8999800092
num_examples: 1837
download_size: 9026891284
dataset_size: 8999800092
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8047911069
num_examples: 1919
download_size: 8075225725
dataset_size: 8047911069
- config_name: subset_220
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8997535084
num_examples: 1835
download_size: 9024583841
dataset_size: 8997535084
- config_name: subset_221
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8882243874
num_examples: 1820
download_size: 8909209779
dataset_size: 8882243874
- config_name: subset_222
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8784489070
num_examples: 1830
download_size: 8811608641
dataset_size: 8784489070
- config_name: subset_223
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8778902707
num_examples: 1815
download_size: 8805944777
dataset_size: 8778902707
- config_name: subset_224
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9180771480
num_examples: 1861
download_size: 9208994724
dataset_size: 9180771480
- config_name: subset_225
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8833747038
num_examples: 1842
download_size: 8861140849
dataset_size: 8833747038
- config_name: subset_226
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9233565355
num_examples: 1877
download_size: 9261886165
dataset_size: 9233565355
- config_name: subset_227
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9147385655
num_examples: 1825
download_size: 9175485832
dataset_size: 9147385655
- config_name: subset_228
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8984555930
num_examples: 1834
download_size: 9011587198
dataset_size: 8984555930
- config_name: subset_229
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8961786784
num_examples: 1795
download_size: 8988632263
dataset_size: 8961786784
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8236420835
num_examples: 1955
download_size: 8263755518
dataset_size: 8236420835
- config_name: subset_230
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9301913298
num_examples: 1864
download_size: 9330171662
dataset_size: 9301913298
- config_name: subset_231
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9210091627
num_examples: 1869
download_size: 9238436898
dataset_size: 9210091627
- config_name: subset_232
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9079365925
num_examples: 1842
download_size: 9104447099
dataset_size: 9079365925
- config_name: subset_233
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9009598774
num_examples: 1857
download_size: 9037982826
dataset_size: 9009598774
- config_name: subset_234
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9249316772
num_examples: 1880
download_size: 9277722472
dataset_size: 9249316772
- config_name: subset_235
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9101031297
num_examples: 1855
download_size: 9129341601
dataset_size: 9101031297
- config_name: subset_236
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9080914144
num_examples: 1851
download_size: 9109239494
dataset_size: 9080914144
- config_name: subset_237
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9113040428
num_examples: 1836
download_size: 9141291712
dataset_size: 9113040428
- config_name: subset_238
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9124079706
num_examples: 1863
download_size: 9149049227
dataset_size: 9124079706
- config_name: subset_239
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9134276208
num_examples: 1860
download_size: 9162644068
dataset_size: 9134276208
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8046101702
num_examples: 1912
download_size: 8073175342
dataset_size: 8046101702
- config_name: subset_240
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8863171418
num_examples: 1809
download_size: 8890104274
dataset_size: 8863171418
- config_name: subset_241
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9186957446
num_examples: 1877
download_size: 9214584579
dataset_size: 9186957446
- config_name: subset_242
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8941914175
num_examples: 1837
download_size: 8969019658
dataset_size: 8941914175
- config_name: subset_243
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9134694848
num_examples: 1841
download_size: 9162697243
dataset_size: 9134694848
- config_name: subset_244
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8830142400
num_examples: 1812
download_size: 8857167423
dataset_size: 8830142400
- config_name: subset_245
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9136692180
num_examples: 1829
download_size: 9164415517
dataset_size: 9136692180
- config_name: subset_246
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9059469118
num_examples: 1838
download_size: 9087731852
dataset_size: 9059469118
- config_name: subset_247
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9086879391
num_examples: 1862
download_size: 9115255349
dataset_size: 9086879391
- config_name: subset_248
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8777108971
num_examples: 1812
download_size: 8801615968
dataset_size: 8777108971
- config_name: subset_249
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9098350313
num_examples: 1841
download_size: 9126616632
dataset_size: 9098350313
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8206120179
num_examples: 1925
download_size: 8233315177
dataset_size: 8206120179
- config_name: subset_250
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9009281033
num_examples: 1856
download_size: 9037660742
dataset_size: 9009281033
- config_name: subset_251
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9288456165
num_examples: 1845
download_size: 9314169300
dataset_size: 9288456165
- config_name: subset_252
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9183280440
num_examples: 1839
download_size: 9211452445
dataset_size: 9183280440
- config_name: subset_253
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9312642478
num_examples: 1858
download_size: 9339443982
dataset_size: 9312642478
- config_name: subset_254
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9018219671
num_examples: 1822
download_size: 9045235110
dataset_size: 9018219671
- config_name: subset_255
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9116066518
num_examples: 1810
download_size: 9143281437
dataset_size: 9116066518
- config_name: subset_256
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8901098880
num_examples: 1804
download_size: 8928038494
dataset_size: 8901098880
- config_name: subset_257
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9345891067
num_examples: 1845
download_size: 9374049931
dataset_size: 9345891067
- config_name: subset_258
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7625098216
num_examples: 1532
download_size: 7648779700
dataset_size: 7625098216
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7563957517
num_examples: 1832
download_size: 7589771684
dataset_size: 7563957517
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7737304836
num_examples: 1862
download_size: 7760806380
dataset_size: 7737304836
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7617871992
num_examples: 1829
download_size: 7643682046
dataset_size: 7617871992
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7547299079
num_examples: 1828
download_size: 7573105243
dataset_size: 7547299079
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10008709848
num_examples: 2081
download_size: 10040007437
dataset_size: 10008709848
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7562310218
num_examples: 1801
download_size: 7588015184
dataset_size: 7562310218
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7725599958
num_examples: 1904
download_size: 7748467878
dataset_size: 7725599958
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7894866198
num_examples: 1904
download_size: 7920986831
dataset_size: 7894866198
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7780624170
num_examples: 1874
download_size: 7806496423
dataset_size: 7780624170
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8070626776
num_examples: 1932
download_size: 8097985950
dataset_size: 8070626776
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8083198658
num_examples: 1902
download_size: 8110387472
dataset_size: 8083198658
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7877529449
num_examples: 1877
download_size: 7900919627
dataset_size: 7877529449
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8072578511
num_examples: 1862
download_size: 8099603763
dataset_size: 8072578511
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8039350633
num_examples: 1878
download_size: 8066435093
dataset_size: 8039350633
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8020607069
num_examples: 1879
download_size: 8047778935
dataset_size: 8020607069
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9917181650
num_examples: 2102
download_size: 9948144339
dataset_size: 9917181650
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8200391070
num_examples: 1919
download_size: 8227683645
dataset_size: 8200391070
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7784150803
num_examples: 1828
download_size: 7809784931
dataset_size: 7784150803
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8078503611
num_examples: 1884
download_size: 8105586321
dataset_size: 8078503611
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8050407873
num_examples: 1874
download_size: 8077564634
dataset_size: 8050407873
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8156460925
num_examples: 1894
download_size: 8182952732
dataset_size: 8156460925
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8094791543
num_examples: 1869
download_size: 8121682748
dataset_size: 8094791543
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8227782331
num_examples: 1899
download_size: 8254914752
dataset_size: 8227782331
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8399555319
num_examples: 1913
download_size: 8425124341
dataset_size: 8399555319
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8196630239
num_examples: 1922
download_size: 8223972526
dataset_size: 8196630239
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8148508744
num_examples: 1897
download_size: 8175064674
dataset_size: 8148508744
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9645148979
num_examples: 2045
download_size: 9674962824
dataset_size: 9645148979
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8136630247
num_examples: 1884
download_size: 8163785523
dataset_size: 8136630247
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8138995219
num_examples: 1918
download_size: 8164731522
dataset_size: 8138995219
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8128749190
num_examples: 1880
download_size: 8155875709
dataset_size: 8128749190
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8379662630
num_examples: 1897
download_size: 8403530501
dataset_size: 8379662630
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8299902769
num_examples: 1901
download_size: 8327037425
dataset_size: 8299902769
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8300788357
num_examples: 1890
download_size: 8327851660
dataset_size: 8300788357
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7926236132
num_examples: 1851
download_size: 7952036619
dataset_size: 7926236132
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8327915963
num_examples: 1904
download_size: 8354747293
dataset_size: 8327915963
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8043744745
num_examples: 1850
download_size: 8070724090
dataset_size: 8043744745
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8250910100
num_examples: 1875
download_size: 8277929180
dataset_size: 8250910100
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9670398598
num_examples: 2090
download_size: 9700785792
dataset_size: 9670398598
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8122682492
num_examples: 1881
download_size: 8149812038
dataset_size: 8122682492
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8034198660
num_examples: 1849
download_size: 8061180755
dataset_size: 8034198660
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8440786663
num_examples: 1895
download_size: 8467083662
dataset_size: 8440786663
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8263971045
num_examples: 1874
download_size: 8287768696
dataset_size: 8263971045
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8235784601
num_examples: 1882
download_size: 8262440479
dataset_size: 8235784601
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8250950769
num_examples: 1871
download_size: 8274093255
dataset_size: 8250950769
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8187524131
num_examples: 1850
download_size: 8214359208
dataset_size: 8187524131
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8249533465
num_examples: 1896
download_size: 8276711814
dataset_size: 8249533465
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8205821935
num_examples: 1850
download_size: 8232721572
dataset_size: 8205821935
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8276702571
num_examples: 1854
download_size: 8303511635
dataset_size: 8276702571
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9674633181
num_examples: 2116
download_size: 9705630355
dataset_size: 9674633181
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8235710340
num_examples: 1851
download_size: 8262621010
dataset_size: 8235710340
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8360568918
num_examples: 1876
download_size: 8387527148
dataset_size: 8360568918
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8329253276
num_examples: 1841
download_size: 8356040243
dataset_size: 8329253276
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8347815740
num_examples: 1882
download_size: 8371933044
dataset_size: 8347815740
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8333728458
num_examples: 1865
download_size: 8360610219
dataset_size: 8333728458
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8206185586
num_examples: 1848
download_size: 8233126670
dataset_size: 8206185586
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8502817710
num_examples: 1894
download_size: 8531044038
dataset_size: 8502817710
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8450151693
num_examples: 1875
download_size: 8475237695
dataset_size: 8450151693
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8583037114
num_examples: 1933
download_size: 8609449717
dataset_size: 8583037114
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8502147032
num_examples: 1900
download_size: 8528838885
dataset_size: 8502147032
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9476143031
num_examples: 2095
download_size: 9506090058
dataset_size: 9476143031
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8329261875
num_examples: 1821
download_size: 8355943592
dataset_size: 8329261875
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8446452024
num_examples: 1873
download_size: 8473368804
dataset_size: 8446452024
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8528681053
num_examples: 1862
download_size: 8556632456
dataset_size: 8528681053
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8324966920
num_examples: 1870
download_size: 8351960546
dataset_size: 8324966920
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8457573712
num_examples: 1842
download_size: 8484245612
dataset_size: 8457573712
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8469786488
num_examples: 1870
download_size: 8496563613
dataset_size: 8469786488
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8227059949
num_examples: 1838
download_size: 8251007970
dataset_size: 8227059949
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8346178744
num_examples: 1852
download_size: 8372995748
dataset_size: 8346178744
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8326524328
num_examples: 1867
download_size: 8353496252
dataset_size: 8326524328
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8672330637
num_examples: 1903
download_size: 8700528508
dataset_size: 8672330637
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9312096400
num_examples: 2073
download_size: 9341833633
dataset_size: 9312096400
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8341293421
num_examples: 1826
download_size: 8368026531
dataset_size: 8341293421
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8513518345
num_examples: 1852
download_size: 8541408866
dataset_size: 8513518345
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8325303530
num_examples: 1852
download_size: 8352157334
dataset_size: 8325303530
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8375701504
num_examples: 1830
download_size: 8399144019
dataset_size: 8375701504
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8377757657
num_examples: 1848
download_size: 8404549010
dataset_size: 8377757657
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8561853435
num_examples: 1857
download_size: 8588866050
dataset_size: 8561853435
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8607602812
num_examples: 1885
download_size: 8635735064
dataset_size: 8607602812
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8597758267
num_examples: 1869
download_size: 8622651077
dataset_size: 8597758267
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8371655940
num_examples: 1860
download_size: 8398411289
dataset_size: 8371655940
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9003798033
num_examples: 1915
download_size: 9032455176
dataset_size: 9003798033
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_145/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_146/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_147/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_148/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_149/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_150
data_files:
- split: train
path: subset_150/train-*
- config_name: subset_151
data_files:
- split: train
path: subset_151/train-*
- config_name: subset_152
data_files:
- split: train
path: subset_152/train-*
- config_name: subset_153
data_files:
- split: train
path: subset_153/train-*
- config_name: subset_154
data_files:
- split: train
path: subset_154/train-*
- config_name: subset_155
data_files:
- split: train
path: subset_155/train-*
- config_name: subset_156
data_files:
- split: train
path: subset_156/train-*
- config_name: subset_157
data_files:
- split: train
path: subset_157/train-*
- config_name: subset_158
data_files:
- split: train
path: subset_158/train-*
- config_name: subset_159
data_files:
- split: train
path: subset_159/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_160
data_files:
- split: train
path: subset_160/train-*
- config_name: subset_161
data_files:
- split: train
path: subset_161/train-*
- config_name: subset_162
data_files:
- split: train
path: subset_162/train-*
- config_name: subset_163
data_files:
- split: train
path: subset_163/train-*
- config_name: subset_164
data_files:
- split: train
path: subset_164/train-*
- config_name: subset_165
data_files:
- split: train
path: subset_165/train-*
- config_name: subset_166
data_files:
- split: train
path: subset_166/train-*
- config_name: subset_167
data_files:
- split: train
path: subset_167/train-*
- config_name: subset_168
data_files:
- split: train
path: subset_168/train-*
- config_name: subset_169
data_files:
- split: train
path: subset_169/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_170
data_files:
- split: train
path: subset_170/train-*
- config_name: subset_171
data_files:
- split: train
path: subset_171/train-*
- config_name: subset_172
data_files:
- split: train
path: subset_172/train-*
- config_name: subset_173
data_files:
- split: train
path: subset_173/train-*
- config_name: subset_174
data_files:
- split: train
path: subset_174/train-*
- config_name: subset_175
data_files:
- split: train
path: subset_175/train-*
- config_name: subset_176
data_files:
- split: train
path: subset_176/train-*
- config_name: subset_177
data_files:
- split: train
path: subset_177/train-*
- config_name: subset_178
data_files:
- split: train
path: subset_178/train-*
- config_name: subset_179
data_files:
- split: train
path: subset_179/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_180
data_files:
- split: train
path: subset_180/train-*
- config_name: subset_181
data_files:
- split: train
path: subset_181/train-*
- config_name: subset_182
data_files:
- split: train
path: subset_182/train-*
- config_name: subset_183
data_files:
- split: train
path: subset_183/train-*
- config_name: subset_184
data_files:
- split: train
path: subset_184/train-*
- config_name: subset_185
data_files:
- split: train
path: subset_185/train-*
- config_name: subset_186
data_files:
- split: train
path: subset_186/train-*
- config_name: subset_187
data_files:
- split: train
path: subset_187/train-*
- config_name: subset_188
data_files:
- split: train
path: subset_188/train-*
- config_name: subset_189
data_files:
- split: train
path: subset_189/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_190
data_files:
- split: train
path: subset_190/train-*
- config_name: subset_191
data_files:
- split: train
path: subset_191/train-*
- config_name: subset_192
data_files:
- split: train
path: subset_192/train-*
- config_name: subset_193
data_files:
- split: train
path: subset_193/train-*
- config_name: subset_194
data_files:
- split: train
path: subset_194/train-*
- config_name: subset_195
data_files:
- split: train
path: subset_195/train-*
- config_name: subset_196
data_files:
- split: train
path: subset_196/train-*
- config_name: subset_197
data_files:
- split: train
path: subset_197/train-*
- config_name: subset_198
data_files:
- split: train
path: subset_198/train-*
- config_name: subset_199
data_files:
- split: train
path: subset_199/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_200
data_files:
- split: train
path: subset_200/train-*
- config_name: subset_201
data_files:
- split: train
path: subset_201/train-*
- config_name: subset_202
data_files:
- split: train
path: subset_202/train-*
- config_name: subset_203
data_files:
- split: train
path: subset_203/train-*
- config_name: subset_204
data_files:
- split: train
path: subset_204/train-*
- config_name: subset_205
data_files:
- split: train
path: subset_205/train-*
- config_name: subset_206
data_files:
- split: train
path: subset_206/train-*
- config_name: subset_207
data_files:
- split: train
path: subset_207/train-*
- config_name: subset_208
data_files:
- split: train
path: subset_208/train-*
- config_name: subset_209
data_files:
- split: train
path: subset_209/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_210
data_files:
- split: train
path: subset_210/train-*
- config_name: subset_211
data_files:
- split: train
path: subset_211/train-*
- config_name: subset_212
data_files:
- split: train
path: subset_212/train-*
- config_name: subset_213
data_files:
- split: train
path: subset_213/train-*
- config_name: subset_214
data_files:
- split: train
path: subset_214/train-*
- config_name: subset_215
data_files:
- split: train
path: subset_215/train-*
- config_name: subset_216
data_files:
- split: train
path: subset_216/train-*
- config_name: subset_217
data_files:
- split: train
path: subset_217/train-*
- config_name: subset_218
data_files:
- split: train
path: subset_218/train-*
- config_name: subset_219
data_files:
- split: train
path: subset_219/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_220
data_files:
- split: train
path: subset_220/train-*
- config_name: subset_221
data_files:
- split: train
path: subset_221/train-*
- config_name: subset_222
data_files:
- split: train
path: subset_222/train-*
- config_name: subset_223
data_files:
- split: train
path: subset_223/train-*
- config_name: subset_224
data_files:
- split: train
path: subset_224/train-*
- config_name: subset_225
data_files:
- split: train
path: subset_225/train-*
- config_name: subset_226
data_files:
- split: train
path: subset_226/train-*
- config_name: subset_227
data_files:
- split: train
path: subset_227/train-*
- config_name: subset_228
data_files:
- split: train
path: subset_228/train-*
- config_name: subset_229
data_files:
- split: train
path: subset_229/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_230
data_files:
- split: train
path: subset_230/train-*
- config_name: subset_231
data_files:
- split: train
path: subset_231/train-*
- config_name: subset_232
data_files:
- split: train
path: subset_232/train-*
- config_name: subset_233
data_files:
- split: train
path: subset_233/train-*
- config_name: subset_234
data_files:
- split: train
path: subset_234/train-*
- config_name: subset_235
data_files:
- split: train
path: subset_235/train-*
- config_name: subset_236
data_files:
- split: train
path: subset_236/train-*
- config_name: subset_237
data_files:
- split: train
path: subset_237/train-*
- config_name: subset_238
data_files:
- split: train
path: subset_238/train-*
- config_name: subset_239
data_files:
- split: train
path: subset_239/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_240
data_files:
- split: train
path: subset_240/train-*
- config_name: subset_241
data_files:
- split: train
path: subset_241/train-*
- config_name: subset_242
data_files:
- split: train
path: subset_242/train-*
- config_name: subset_243
data_files:
- split: train
path: subset_243/train-*
- config_name: subset_244
data_files:
- split: train
path: subset_244/train-*
- config_name: subset_245
data_files:
- split: train
path: subset_245/train-*
- config_name: subset_246
data_files:
- split: train
path: subset_246/train-*
- config_name: subset_247
data_files:
- split: train
path: subset_247/train-*
- config_name: subset_248
data_files:
- split: train
path: subset_248/train-*
- config_name: subset_249
data_files:
- split: train
path: subset_249/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_250
data_files:
- split: train
path: subset_250/train-*
- config_name: subset_251
data_files:
- split: train
path: subset_251/train-*
- config_name: subset_252
data_files:
- split: train
path: subset_252/train-*
- config_name: subset_253
data_files:
- split: train
path: subset_253/train-*
- config_name: subset_254
data_files:
- split: train
path: subset_254/train-*
- config_name: subset_255
data_files:
- split: train
path: subset_255/train-*
- config_name: subset_256
data_files:
- split: train
path: subset_256/train-*
- config_name: subset_257
data_files:
- split: train
path: subset_257/train-*
- config_name: subset_258
data_files:
- split: train
path: subset_258/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
vikhyatk/docmatix-single | vikhyatk | "2024-07-19T02:31:20Z" | 11,890 | 6 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-18T23:35:08Z" | ---
dataset_info:
features:
- name: images
sequence: image
- name: texts
list:
- name: user
dtype: string
- name: assistant
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 244951255658.16818
num_examples: 565009
download_size: 145422811605
dataset_size: 244951255658.16818
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
[Docmatix](https://huggingface.co/datasets/HuggingFaceM4/Docmatix), but with multi-image samples filtered out. |
MMInstruction/M3IT | MMInstruction | "2023-11-24T08:23:25Z" | 11,887 | 125 | [
"task_categories:image-to-text",
"task_categories:image-classification",
"language:en",
"language:zh",
"license:other",
"size_categories:1M<n<10M",
"region:us"
] | [
"image-to-text",
"image-classification"
] | "2023-05-04T01:43:31Z" | ---
license: other
task_categories:
- image-to-text
- image-classification
size_categories:
- 1M<n<10M
language:
- en
- zh
---
# Dataset Card for M3IT
Project Page: [M3IT](https://m3-it.github.io/)
## Dataset Description
- **Homepage: https://huggingface.co/datasets/MMInstruction/M3IT**
- **Repository: https://huggingface.co/datasets/MMInstruction/M3IT**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Languages
English and Chinese. Translated versions in 80 languages can be found at [M3IT-80](https://huggingface.co/datasets/MMInstruction/M3IT-80).
## Dataset Statistics
Our dataset compiles a diverse set of classical vision-language tasks, including captioning,
visual question answering (VQA), visually conditioned generation, reasoning, and classification.
### Instruction Statistics
| Task | #Instructions |
|---------------------------|---------------|
| Image Captioning | 52 |
| Classification | 113 |
| Visual Question Answering | 95 |
| Knowledgeable Visual QA | 40 |
| Reasoning | 60 |
| Generation | 40 |
| Total | 400 |
### Task Statistics
| Task | Description | #Train | #Val | #Test |
|---------------------------|-----------------------------------------------------------------|---------|---------|---------|
| Image Captioning | Given an image, write a description for the image. | 679,087 | 41,462 | 27,499 |
| Classification | Given an image, classify the image into pre-defined categories. | 238,303 | 100,069 | 21,206 |
| Visual Question Answering | Given an image, answer a question relevant to the image. | 177,633 | 46,314 | 10,828 |
| Knowledgeable Visual QA | Given an image, answer the question requires outside knowledge. | 39,981 | 11,682 | 5,477 |
| Reasoning | Given an image, conduct reasoning over the images. | 99,372 | 11,500 | 10,000 |
| Generation | Given an image, make compositions with certain requirements. | 145,000 | 11,315 | 17,350 |
| Chinese | CAP, CLS, VQA, and GEN tasks in Chinese. | 192,076 | 77,306 | 4,100 |
| Video | CAP, CLS, and VQA tasks on video-language datasets. | 20,868 | 7,542 | 9,294 |
| Multi-lingual | Translated tasks in 80 languages | 0 | 240,000 | 184,000 |
### Detailed Dataset Statistics
| Task | Dataset | #Train | #Val | #Test |
|---------------------------|------------------------------|---------|--------|--------|
| Image Captioning | `coco` | 566,747 | 25,010 | 25,010 |
| | `textcap` | 97,765 | 13,965 | 0 |
| | `image-paragraph-captioning` | 14,575 | 2,487 | 2,489 |
| Classification | `coco-goi` | 30,000 | 2,000 | 0 |
| | `coco-text` | 118,312 | 27,550 | 0 |
| | `imagenet` | 30,000 | 50,000 | 0 |
| | `coco-itm` | 30,000 | 5,000 | 5,000 |
| | `snli-ve` | 20,000 | 14,339 | 14,740 |
| | `mocheg` | 4,991 | 180 | 466 |
| | `iqa` | 5,000 | 1,000 | 1,000 |
| Visual Question Answering | `vqa-v2` | 30,000 | 30,000 | 0 |
| | `shapes` | 13,568 | 1,024 | 1,024 |
| | `docvqa` | 39,463 | 5,349 | 0 |
| | `ocr-vqa` | 11,414 | 4,940 | 0 |
| | `st-vqa` | 26,074 | 0 | 4,070 |
| | `text-vqa` | 27,113 | 0 | 5,734 |
| | `gqa` | 30,001 | 5,001 | 0 |
| Knowledgeable Visual QA | `okvqa` | 9,009 | 5,046 | 0 |
| | `a-okvqa` | 17,056 | 1,145 | 0 |
| | `science-qa` | 12,726 | 4,241 | 4,241 |
| | `viquae` | 1,190 | 1,250 | 1,236 |
| Reasoning | `clevr` | 30,000 | 2,000 | 0 |
| | `nlvr` | 29,372 | 2,000 | 0 |
| | `vcr` | 25,000 | 5,000 | 5,000 |
| | `visual-mrc` | 15,000 | 2,500 | 5,000 |
| | `winoground` | 0 | 0 | 800 |
| Generation | `vist` | 5,000 | 4,315 | 4,350 |
| | `visual-dialog` | 50,000 | 1,000 | 1,000 |
| | `multi30k` | 90,000 | 6,000 | 12,000 |
| Chinese | `fm-iqa` | 164,735 | 75,206 | 0 |
| | `coco-cn` | 18,341 | 1,000 | 1,000 |
| | `flickr8k-cn` | 6,000 | 1,000 | 1,000 |
| | `chinese-food` | 0 | 0 | 1,100 |
| | `mmchat` | 3,000 | 1,000 | 1,000 |
| Video | `ss` | 2,000 | 2,000 | 2,000 |
| | `ivqa` | 5,994 | 2,000 | 2,000 |
| | `msvd-qa` | 1,161 | 245 | 504 |
| | `activitynet-qa` | 3,200 | 1,800 | 800 |
| | `msrvtt` | 6,513 | 497 | 2,990 |
| | `msrvtt-qa` | 2,000 | 1,000 | 1,000 |
## Dataset Structure
### HuggingFace Login (Optional)
```python
# OR run huggingface-cli login
from huggingface_hub import login
hf_token = "hf_xxx" # TODO: set a valid HuggingFace access token for loading datasets/models
login(token=hf_token)
```
### Data Loading
```python
from datasets import load_dataset
ds_name = "coco" # change the dataset name here
dataset = load_dataset("MMInstruction/M3IT", ds_name)
```
### Data Splits
```python
from datasets import load_dataset
ds_name = "coco" # change the dataset name here
dataset = load_dataset("MMInstruction/M3IT", ds_name)
train_set = dataset["train"]
validation_set = dataset["validation"]
test_set = dataset["test"]
```
### Data Instances
```python
from datasets import load_dataset
from io import BytesIO
from base64 import b64decode
from PIL import Image
ds_name = "coco" # change the dataset name here
dataset = load_dataset("MMInstruction/M3IT", ds_name)
train_set = dataset["train"]
for train_instance in train_set:
instruction = train_instance["instruction"] # str
inputs = train_instance["inputs"] # str
outputs = train_instance["outputs"] # str
    image_base64_str_list = train_instance["image_base64_str"]  # list[str], base64-encoded images
image_0 = Image.open(BytesIO(b64decode(image_base64_str_list[0])))
```
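Since each instance stores its images as a list of base64-encoded strings, a small helper can decode the whole list at once. This is an illustrative sketch — `decode_images` is not part of the dataset's API; the returned bytes can be opened with `Image.open(BytesIO(raw))` as shown above:

```python
from base64 import b64decode

def decode_images(image_base64_str_list):
    """Decode every base64-encoded image of an instance into raw bytes."""
    return [b64decode(s) for s in image_base64_str_list]
```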
### Data Fields
```python
import datasets
features = datasets.Features(
{
"instruction": datasets.Value("string"),
"inputs": datasets.Value("string"),
"image_base64_str": [datasets.Value("string")],
"outputs": datasets.Value("string"),
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
| Task | Dataset [Citation] | Source |
|---------------------------|----------------------------------|------------------------------------------------------------------------------------|
| Image Captioning | `coco` [1] | [Source](https://cocodataset.org/#home) |
| | `textcap` [2] | [Source](https://textvqa.org/textcaps/) |
| | `image-paragraph-captioning` [3] | [Source](https://cs.stanford.edu/people/ranjaykrishna/im2p/index.html) |
| Classification | `coco-goi` [1] | [Source](https://cocodataset.org/#home) |
| | `coco-text` [4] | [Source](https://bgshih.github.io/cocotext/) |
| | `imagenet` [5] | [Source](https://www.image-net.org/) |
| | `coco-itm` [1] | [Source](https://cocodataset.org/#home) |
| | `snli-ve` [6] | [Source](https://github.com/necla-ml/SNLI-VE) |
| | `mocheg` [7] | [Source](https://github.com/VT-NLP/Mocheg) |
| | `iqa` [8] | [Source](https://github.com/icbcbicc/IQA-Dataset) |
| Visual Question Answering | `vqa-v2` [9] | [Source](https://visualqa.org/) |
| | `shapes` [10] | [Source](https://github.com/ronghanghu/n2nmn) |
| | `docvqa` [11] | [Source](https://www.docvqa.org/) |
| | `ocr-vqa` [12] | [Source](https://ocr-vqa.github.io/) |
| | `st-vqa` [13] | [Source](https://rrc.cvc.uab.es/?ch=11) |
| | `text-vqa` [14] | [Source](https://textvqa.org/) |
| | `gqa` [15] | [Source](https://cs.stanford.edu/people/dorarad/gqa/about.html) |
| Knowledgeable Visual QA | `okvqa` [16] | [Source](https://okvqa.allenai.org/) |
| | `a-okvqa` [17] | [Source](https://allenai.org/project/a-okvqa/home) |
| | `science-qa` [18] | [Source](https://scienceqa.github.io/) |
| | `viquae` [19] | [Source](https://github.com/PaulLerner/ViQuAE) |
| Reasoning | `clevr` [20] | [Source](https://cs.stanford.edu/people/jcjohns/clevr/) |
| | `nlvr` [21] | [Source](https://lil.nlp.cornell.edu/nlvr/) |
| | `vcr` [22] | [Source](https://visualcommonsense.com/) |
| | `visual-mrc` [23] | [Source](https://github.com/nttmdlab-nlp/VisualMRC) |
| | `winoground` [24] | [Source](https://huggingface.co/datasets/facebook/winoground) |
| Generation | `vist` [25] | [Source](https://visionandlanguage.net/VIST/) |
| | `visual-dialog` [26] | [Source](https://visualdialog.org/) |
| | `multi30k` [27] | [Source](https://github.com/multi30k/dataset) |
| Chinese | `fm-iqa` [28] | [Source](https://paperswithcode.com/dataset/fm-iqa) |
| | `coco-cn` [29] | [Source](https://github.com/li-xirong/coco-cn) |
| | `flickr8k-cn` [30] | [Source](https://github.com/li-xirong/flickr8kcn) |
| | `chinese-food` [31] | [Source](https://sites.google.com/view/chinesefoodnet) |
| | `mmchat` [32] | [Source](https://github.com/silverriver/MMChat) |
| Video | `ss` [33] | [Source](https://developer.qualcomm.com/software/ai-datasets/something-something) |
| | `ivqa` [34] | [Source](https://antoyang.github.io/just-ask.html) |
| | `msvd-qa` [35] | [Source](https://paperswithcode.com/dataset/msvd) |
| | `activitynet-qa` [36] | [Source](https://github.com/MILVLG/activitynet-qa) |
| | `msrvtt` [35] | [Source](https://paperswithcode.com/dataset/msr-vtt) |
| | `msrvtt-qa` [37] | [Source](https://paperswithcode.com/sota/visual-question-answering-on-msrvtt-qa-1) |
### Annotations
#### Annotation process
To build high-quality multimodal instruction datasets,
we rewrite various datasets into multimodal-to-text dialog format.
The annotation process includes four steps:
- (1) **Stage I: Instruction Writing**: writing instructions for each task;
- (2) **Stage II: Data Format Unification**: structuring images and texts into a unified schema;
- (3) **Stage III: Quality Check**: checking the overall dataset quality;
- (4) **Stage IV: Key Datasets Translation**: building multilingual sets.
#### Who are the annotators?
Eight authors of this work are employed as human annotators,
each of whom is a graduate student familiar with the relevant literature.
## Additional Information
### Licensing Information
The content of original dataset follows their original license.
For tasks with an Unknown/Custom license, we suggest checking the original project or contacting the dataset owner for detailed license information.
Our annotated instruction data is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bibtex
@article{li2023m3it,
title={M$^3$IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning},
author={Lei Li and Yuwei Yin and Shicheng Li and Liang Chen and Peiyi Wang and Shuhuai Ren and Mukai Li and Yazheng Yang and Jingjing Xu and Xu Sun and Lingpeng Kong and Qi Liu},
journal={arXiv preprint arXiv:2306.04387},
year={2023}
}
```
### Contributions
M3IT is an open-source, large-scale Multi-modal, Multilingual Instruction Tuning dataset,
designed to enable the development of general-purpose multi-modal agents.
## References
- [1] Microsoft COCO: Common Objects in Context
- [2] TextCaps: a dataset for image captioning with reading comprehension
- [3] A Hierarchical Approach for Generating Descriptive Image Paragraphs
- [4] COCO-Text: Dataset and benchmark for text detection and recognition in natural images
- [5] Imagenet large scale visual recognition challenge
- [6] E-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
- [7] End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models
- [8] Quantifying visual image quality: A Bayesian view
- [9] Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
- [10] Neural Module Networks
- [11] DocVQA: A dataset for vqa on document images
- [12] OCR-VQA: Visual Question Answering by Reading Text in Images
- [13] Scene Text Visual Question Answering
- [14] Towards VQA Models That Can Read
- [15] GQA: A new dataset for real-world visual reasoning and compositional question answering
- [16] OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge
- [17] A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge
- [18] Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
- [19] ViQuAE: a dataset for knowledge-based visual question answering about named entities
- [20] CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning
- [21] A Corpus of Natural Language for Visual Reasoning
- [22] From recognition to cognition: Visual Commonsense Reasoning
- [23] VisualMRC: Machine reading comprehension on document images
- [24] WinoGround: Probing vision and language models for visio-linguistic compositionality
- [25] Visual Storytelling
- [26] Visual Dialog
- [27] Multi30k: Multilingual english-german image descriptions
- [28] Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question
- [29] COCO-CN for cross-lingual image tagging, captioning, and retrieval
- [30] Adding Chinese Captions to Images
- [31] ChineseFoodNet: A large-scale image dataset for chinese food recognition
- [32] MMChat: Multi-Modal Chat Dataset on Social Media
- [33] The "Something Something" Video Database for Learning and Evaluating Visual Common Sense
- [34] Just Ask: Learning to answer questions from millions of narrated videos
- [35] Video Question Answering via Gradually Refined Attention over Appearance and Motion
- [36] ActivityNet-qa: A dataset for understanding complex web videos via question answering
- [37] MSR-VTT: A large video description dataset for bridging video and language |
laion/LAION-Audio-300M | laion | "2025-01-10T21:33:57Z" | 11,881 | 26 | [
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:webdataset",
"modality:audio",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | null | "2024-12-29T09:50:41Z" | ---
license: apache-2.0
---
|
mii-llm/requests | mii-llm | "2025-03-20T06:29:43Z" | 11,851 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-13T18:05:34Z" | ---
license: apache-2.0
---
|
AI4Math/MathVista | AI4Math | "2024-02-11T23:09:05Z" | 11,755 | 141 | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:visual-question-answering",
"task_categories:text-classification",
"task_ids:multiple-choice-qa",
"task_ids:closed-domain-qa",
"task_ids:open-domain-qa",
"task_ids:visual-question-answering",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"language:zh",
"language:fa",
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.02255",
"region:us",
"multi-modal-qa",
"math-qa",
"figure-qa",
"geometry-qa",
"math-word-problem",
"textbook-qa",
"vqa",
"arithmetic-reasoning",
"statistical-reasoning",
"algebraic-reasoning",
"geometry-reasoning",
"numeric-common-sense",
"scientific-reasoning",
"logical-reasoning",
"geometry-diagram",
"synthetic-scene",
"chart",
"plot",
"scientific-figure",
"table",
"function-plot",
"abstract-scene",
"puzzle-test",
"document-image",
"medical-image",
"mathematics",
"science",
"chemistry",
"biology",
"physics",
"engineering",
"natural-science"
] | [
"multiple-choice",
"question-answering",
"visual-question-answering",
"text-classification"
] | "2023-10-15T17:49:10Z" | ---
annotations_creators:
- expert-generated
- found
language_creators:
- expert-generated
- found
language:
- en
- zh
- fa
license: cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
- text-classification
task_ids:
- multiple-choice-qa
- closed-domain-qa
- open-domain-qa
- visual-question-answering
- multi-class-classification
paperswithcode_id: mathvista
pretty_name: MathVista
tags:
- multi-modal-qa
- math-qa
- figure-qa
- geometry-qa
- math-word-problem
- textbook-qa
- vqa
- arithmetic-reasoning
- statistical-reasoning
- algebraic-reasoning
- geometry-reasoning
- numeric-common-sense
- scientific-reasoning
- logical-reasoning
- geometry-diagram
- synthetic-scene
- chart
- plot
- scientific-figure
- table
- function-plot
- abstract-scene
- puzzle-test
- document-image
- medical-image
- mathematics
- science
- chemistry
- biology
- physics
- engineering
- natural-science
configs:
- config_name: default
data_files:
- split: testmini
path: data/testmini-*
- split: test
path: data/test-*
dataset_info:
features:
- name: pid
dtype: string
- name: question
dtype: string
- name: image
dtype: string
- name: decoded_image
dtype: image
- name: choices
sequence: string
- name: unit
dtype: string
- name: precision
dtype: float64
- name: answer
dtype: string
- name: question_type
dtype: string
- name: answer_type
dtype: string
- name: metadata
struct:
- name: category
dtype: string
- name: context
dtype: string
- name: grade
dtype: string
- name: img_height
dtype: int64
- name: img_width
dtype: int64
- name: language
dtype: string
- name: skills
sequence: string
- name: source
dtype: string
- name: split
dtype: string
- name: task
dtype: string
- name: query
dtype: string
splits:
- name: testmini
num_bytes: 142635198.0
num_examples: 1000
- name: test
num_bytes: 648291350.22
num_examples: 5141
download_size: 885819490
dataset_size: 790926548.22
---
# Dataset Card for MathVista
- [Dataset Description](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#leaderboard)
- [Dataset Usage](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#dataset-usage)
- [Data Downloading](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#data-downloading)
- [Data Format](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#data-format)
- [Data Visualization](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#data-visualization)
- [Data Source](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#data-source)
- [Automatic Evaluation](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#automatic-evaluation)
- [License](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#license)
- [Citation](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#citation)
## Dataset Description
**MathVista** is a consolidated Mathematical reasoning benchmark within Visual contexts. It consists of **three newly created datasets, IQTest, FunctionQA, and PaperQA**, which address the missing visual domains and are tailored to evaluate logical reasoning on puzzle test figures, algebraic reasoning over functional plots, and scientific reasoning with academic paper figures, respectively. It also incorporates **9 MathQA datasets** and **19 VQA datasets** from the literature, which significantly enrich the diversity and complexity of visual perception and mathematical reasoning challenges within our benchmark. In total, **MathVista** includes **6,141 examples** collected from **31 different datasets**.
## Paper Information
- Paper: https://arxiv.org/abs/2310.02255
- Code: https://github.com/lupantech/MathVista
- Project: https://mathvista.github.io/
- Visualization: https://mathvista.github.io/#visualization
- Leaderboard: https://mathvista.github.io/#leaderboard
## Dataset Examples
Examples of our newly annotated datasets: IQTest, FunctionQA, and PaperQA:
<img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/our_new_3_datasets.png" style="zoom:40%;" />
<details>
<summary>🔍 Click to expand/collapse more examples</summary>
Examples of seven mathematical reasoning skills:
1. Arithmetic Reasoning
<img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/ari.png" style="zoom:40%;" />
2. Statistical Reasoning
<img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/sta.png" style="zoom:40%;" />
3. Algebraic Reasoning
<img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/alg.png" style="zoom:40%;" />
4. Geometry Reasoning
<img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/geo.png" style="zoom:40%;" />
5. Numeric common sense
<img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/num.png" style="zoom:40%;" />
6. Scientific Reasoning
<img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/sci.png" style="zoom:40%;" />
7. Logical Reasoning
<img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/log.png" style="zoom:40%;" />
</details>
## Leaderboard
🏆 The leaderboard for the *testmini* set (1,000 examples) is available [here](https://mathvista.github.io/#leaderboard).
🏆 The leaderboard for the *test* set (5,141 examples) and the automatic evaluation on [CodaLab](https://codalab.org/) are under construction.
## Dataset Usage
### Data Downloading
All the data examples were divided into two subsets: *testmini* and *test*.
- **testmini**: 1,000 examples used for model development, validation, or for those with limited computing resources.
- **test**: 5,141 examples for standard evaluation. Notably, the answer labels for test will NOT be publicly released.
You can download this dataset with the following command (make sure you have installed [Huggingface Datasets](https://huggingface.co/docs/datasets/quickstart)):
```python
from datasets import load_dataset
dataset = load_dataset("AI4Math/MathVista")
```
Here are some examples of how to access the downloaded dataset:
```python
# print the first example on the testmini set
print(dataset["testmini"][0])
print(dataset["testmini"][0]['pid']) # print the problem id
print(dataset["testmini"][0]['question']) # print the question text
print(dataset["testmini"][0]['query']) # print the query text
print(dataset["testmini"][0]['image']) # print the image path
print(dataset["testmini"][0]['answer']) # print the answer
dataset["testmini"][0]['decoded_image'] # display the image
# print the first example on the test set
print(dataset["test"][0])
```
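Since every example carries a `question_type` field, it is straightforward to separate multiple-choice from free-form problems. The sketch below uses plain Python over a few hypothetical records; with the `datasets` library you could equivalently call `dataset["testmini"].filter(...)`:

```python
def split_by_question_type(examples):
    """Partition MathVista-style records into multiple-choice and free-form."""
    multi_choice = [ex for ex in examples if ex["question_type"] == "multi_choice"]
    free_form = [ex for ex in examples if ex["question_type"] == "free_form"]
    return multi_choice, free_form

# Hypothetical records carrying only the field needed here.
records = [
    {"pid": "1", "question_type": "multi_choice"},
    {"pid": "2", "question_type": "free_form"},
    {"pid": "3", "question_type": "multi_choice"},
]
mc, ff = split_by_question_type(records)  # 2 multiple-choice, 1 free-form
```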
### Data Format
The dataset is provided in json format and contains the following attributes:
```json
{
"question": [string] The question text,
"image": [string] A file path pointing to the associated image,
"choices": [list] Choice options for multiple-choice problems. For free-form problems, this could be a 'none' value,
"unit": [string] The unit associated with the answer, e.g., "m^2", "years". If no unit is relevant, it can be a 'none' value,
"precision": [integer] The number of decimal places the answer should be rounded to,
"answer": [string] The correct answer for the problem,
"question_type": [string] The type of question: "multi_choice" or "free_form",
"answer_type": [string] The format of the answer: "text", "integer", "float", or "list",
"pid": [string] Problem ID, e.g., "1",
"metadata": {
"split": [string] Data split: "testmini" or "test",
"language": [string] Question language: "English", "Chinese", or "Persian",
"img_width": [integer] The width of the associated image in pixels,
"img_height": [integer] The height of the associated image in pixels,
"source": [string] The source dataset from which the problem was taken,
"category": [string] The category of the problem: "math-targeted-vqa" or "general-vqa",
"task": [string] The task of the problem, e.g., "geometry problem solving",
"context": [string] The visual context type of the associated image,
"grade": [string] The grade level of the problem, e.g., "high school",
"skills": [list] A list of mathematical reasoning skills that the problem tests
},
"query": [string] the query text used as input (prompt) for the evaluation model
}
```
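The `precision` field indicates how many decimal places a float answer should be rounded to before comparison. A minimal correctness check might look like the following — an illustrative sketch only (use the official evaluation scripts in the GitHub repository for reported results; falling back to 2 decimal places when `precision` is absent is our assumption):

```python
def is_correct(prediction, answer, answer_type, precision=None):
    """Compare a model prediction against the gold answer.

    Floats are rounded to the dataset-specified number of decimal places
    before comparison; other answer types use trimmed string equality.
    """
    if answer_type == "float":
        ndigits = int(precision) if precision is not None else 2  # assumed fallback
        return round(float(prediction), ndigits) == round(float(answer), ndigits)
    return str(prediction).strip() == str(answer).strip()
```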
### Data Visualization
🎰 You can explore the dataset in an interactive way [here](https://mathvista.github.io/#visualization).
<details>
<summary>Click to expand/collapse the visualization page screenshot.</summary>
<img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/data_visualizer.png" style="zoom:40%;" />
</details>
### Data Source
The **MathVista** dataset is derived from three newly collected datasets: IQTest, FunctionQA, and PaperQA, as well as 28 other source datasets. Details can be found in the [source.json](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/source.json) file. All these source datasets have been preprocessed and labeled for evaluation purposes.
### Automatic Evaluation
🔔 To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/lupantech/MathVista/tree/main).
## License
The new contributions to our dataset are distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license, including
- The creation of three datasets: IQTest, FunctionQA, and PaperQA;
- The filtering and cleaning of source datasets;
- The standard formalization of instances for evaluation purposes;
- The annotations of metadata.
The copyright of the images and the questions belongs to the original authors, and the source of every image and original question can be found in the `metadata` field and in the [source.json](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/source.json) file. Alongside this license, the following conditions apply:
- **Purpose:** The dataset was primarily designed for use as a test set.
- **Commercial Use:** The dataset can be used commercially as a test set, but using it as a training set is prohibited. By accessing or using this dataset, you acknowledge and agree to abide by these terms in conjunction with the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license.
## Citation
If you use the **MathVista** dataset in your work, please kindly cite the paper using this BibTeX:
```
@inproceedings{lu2024mathvista,
author = {Lu, Pan and Bansal, Hritik and Xia, Tony and Liu, Jiacheng and Li, Chunyuan and Hajishirzi, Hannaneh and Cheng, Hao and Chang, Kai-Wei and Galley, Michel and Gao, Jianfeng},
title = {MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2024}
}
``` |
nyu-visionx/Cambrian-10M | nyu-visionx | "2024-07-08T04:34:51Z" | 11,755 | 108 | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"arxiv:2406.16860",
"region:us"
] | [
"visual-question-answering",
"question-answering"
] | "2024-05-30T03:27:31Z" | ---
task_categories:
- visual-question-answering
- question-answering
language:
- en
size_categories:
- 1M<n<10M
license: apache-2.0
---
# Cambrian-10M Dataset
**Please see paper & website for more information:**
- https://cambrian-mllm.github.io/
- https://arxiv.org/abs/2406.16860
## Overview
Cambrian-10M is a comprehensive dataset designed for instruction tuning, particularly in multimodal settings involving visual interaction data. The dataset is crafted to address the scarcity of high-quality multimodal instruction-tuning data and to maintain the language abilities of multimodal large language models (LLMs).
## Data Collection
### Multimodal Data Sources
Unlike language data, multimodal instruction-tuning data is much rarer and harder to collect. To address this, we leverage existing multimodal benchmarks and datasets involving visual interaction data, such as Visual Question Answering (VQA) and Optical Character Recognition (OCR) data. This approach helps mitigate the catastrophic forgetting commonly observed when fine-tuning multimodal LLMs.
### Language-Only Instruction-Following Data
To ensure the preservation of language capabilities, we also collect a small volume of high-quality language-only instruction-following data from the community.
### Targeted Internet Data Collection Engine
We introduce a data engine designed to create large-scale, reliable, high-quality knowledge-based multimodal instruction tuning data. The engine works as follows:
1. **Field and Subfield Selection**: The engine selects a target field and subfield, such as “Physics”.
2. **Topic Identification**: An LLM like GPT-4 identifies topics within the field (e.g., “Newton’s Laws”).
3. **Reliable Source Search**: The engine searches reliable sources like Wikipedia for each topic.
4. **Text-Image Association Extraction**: The parser extracts image-caption-text tuples from the sources.
5. **Q&A Pair Generation**: The caption-text is fed to an LLM, such as GPT-3.5, to generate instruction-type Q&A pairs about the image.
These Q&A pairs, along with the images, form our VQA dataset.
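The prompt-assembly step of this pipeline can be sketched as a small helper. This is an illustrative assumption: the function name and prompt wording below are ours, not the engine's actual implementation.

```python
# Hypothetical sketch of step 5: assemble the instruction sent to the LLM from
# the field/topic selection (steps 1-2) and the parsed caption-text (step 4).
# Names and prompt wording are assumptions, not the engine's real code.

def build_qa_prompt(field: str, topic: str, caption: str, context: str) -> str:
    return (
        f"You are writing instruction-tuning data for {field} ({topic}).\n"
        f"Image caption: {caption}\n"
        f"Surrounding text: {context}\n"
        "Write one question about the image and a concise answer."
    )

prompt = build_qa_prompt(
    field="Physics",
    topic="Newton's Laws",
    caption="A block sliding down a frictionless incline.",
    context="The net force determines the block's acceleration.",
)
print(prompt.splitlines()[0])
```

In the real engine, the returned string would be sent to an LLM such as GPT-3.5 and the response parsed into a Q&A pair.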
### GPT Rewriting
We also incorporate recent MLLMs such as GPT-4V and GPT-4o to generate extended responses and free-form instruction tuning data. To experiment with the GPT-generated data, use
[gpt4v_77k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4v_77k.jsonl) or the curated [gpt4o_60k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4o_60k.jsonl):
- [gpt4v_77k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4v_77k.jsonl) contains more extended responses from Cambrian-10M.
- [gpt4o_60k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4o_60k.jsonl) contains more creative data in visual interactions.
## Cambrian-10M Composition
The Cambrian-10M dataset consists of approximately 9.784 million data points, offering a diverse range of data for various research applications. The composition of the dataset is visualized in Fig. 9.
## Cambrian-7M
We make an initial effort to study data curation. In particular, we find that the following data ratio performs best:
- **Language**: 21.00%
- **General**: 34.52%
- **OCR**: 27.22%
- **Counting**: 8.71%
- **Math**: 7.20%
- **Code**: 0.87%
- **Science**: 0.88%

## Getting Started with Cambrian Data
Before you start, ensure you have sufficient storage space to download and process the data.
Cambrian-10M contains a total of 10 million images collected from previous datasets, an internet data engine, and GPT-generated instruction tuning data. Follow these steps to get started:
1. **Download the Data Repository**
Download the data repository. Note that due to Hugging Face policy constraints, the data folder is archived into tar files. We also split the `allava` and `data_engine` data into smaller tar files because they exceed the 50 GB size limit.
2. **Merge Tar Files**
To explore the Cambrian-10M dataset, first merge the different parts of `allava` and `data_engine` together:
```bash
python merge_tars.py
```
3. **Extract Tar Files**
Then, extract all the tar files into the current directory:
```bash
python extract.py
```
4. **Training with Cambrian**
You can train with the raw [Cambrian10M](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/Cambrian10M.jsonl) or the curated [Cambrian7M](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/Cambrian7M.jsonl). We recommend using
the curated [Cambrian7M with system prompt](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/blob/main/jsons/Cambrian7M_withsystemprompt.jsonl), which also alleviates the 'answer machine' problem. |
ArmelR/the-pile-splitted | ArmelR | "2023-09-06T09:53:16Z" | 11,739 | 22 | [
"size_categories:10M<n<100M",
"format:arrow",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2101.00027",
"arxiv:2201.07311",
"region:us"
] | null | "2023-07-30T14:21:26Z" | ---
configs:
- config_name: all
data_files:
- split: train
path:
- "data/ArXiv/train/*.arrow"
- "data/BookCorpus2/train/*.arrow"
- "data/Books3/train/*.arrow"
- "data/DM Mathematics/train/*.arrow"
- "data/Enron Emails/train/*.arrow"
- "data/EuroParl/train/*.arrow"
- "data/FreeLaw/train/*.arrow"
- "data/Github/train/*.arrow"
- "data/Gutenberg (PG-19)/train/*.arrow"
- "data/HackerNews/train/*.arrow"
- "data/NIH ExPorter/train/*.arrow"
- "data/OpenSubtitles/train/*.arrow"
- "data/OpenWebText2/train/*.arrow"
- "data/PhilPapers/train/*.arrow"
- "data/Pile-CC/train/*.arrow"
- "data/PubMed Abstracts/train/*.arrow"
- "data/PubMed Central/train/*.arrow"
- "data/StackExchange/train/*.arrow"
- "data/UPSTO Backgrounds/train/*.arrow"
- "data/Ubuntu IRC/train/*.arrow"
- "data/Wikipedia (en)/train/*.arrow"
- "data/YoutubeSubtitles/train/*.arrow"
- split: test
path:
- "data/ArXiv/test/*.arrow"
- "data/BookCorpus2/test/*.arrow"
- "data/Books3/test/*.arrow"
- "data/DM Mathematics/test/*.arrow"
- "data/Enron Emails/test/*.arrow"
- "data/EuroParl/test/*.arrow"
- "data/FreeLaw/test/*.arrow"
- "data/Github/test/*.arrow"
- "data/Gutenberg (PG-19)/test/*.arrow"
- "data/HackerNews/test/*.arrow"
- "data/NIH ExPorter/test/*.arrow"
- "data/OpenSubtitles/test/*.arrow"
- "data/OpenWebText2/test/*.arrow"
- "data/PhilPapers/test/*.arrow"
- "data/Pile-CC/test/*.arrow"
- "data/PubMed Abstracts/test/*.arrow"
- "data/PubMed Central/test/*.arrow"
- "data/StackExchange/test/*.arrow"
- "data/UPSTO Backgrounds/test/*.arrow"
- "data/Ubuntu IRC/test/*.arrow"
- "data/Wikipedia (en)/test/*.arrow"
- "data/YoutubeSubtitles/test/*.arrow"
default: true
- config_name: ArXiv
data_files:
- split: train
path: "data/ArXiv/train/*.arrow"
- split: test
path: "data/ArXiv/test/*.arrow"
- config_name: BookCorpus2
data_files:
- split: train
path: "data/BookCorpus2/train/*.arrow"
- split: test
path: "data/BookCorpus2/test/*.arrow"
- config_name: Books3
data_files:
- split: train
path: "data/Books3/train/*.arrow"
- split: test
path: "data/Books3/test/*.arrow"
- config_name: DM Mathematics
data_files:
- split: train
path: "data/DM Mathematics/train/*.arrow"
- split: test
path: "data/DM Mathematics/test/*.arrow"
- config_name: Enron Emails
data_files:
- split: train
path: "data/Enron Emails/train/*.arrow"
- split: test
path: "data/Enron Emails/test/*.arrow"
- config_name: EuroParl
data_files:
- split: train
path: "data/EuroParl/train/*.arrow"
- split: test
path: "data/EuroParl/test/*.arrow"
- config_name: FreeLaw
data_files:
- split: train
path: "data/FreeLaw/train/*.arrow"
- split: test
path: "data/FreeLaw/test/*.arrow"
- config_name: Github
data_files:
- split: train
path: "data/Github/train/*.arrow"
- split: test
path: "data/Github/test/*.arrow"
- config_name: Gutenberg (PG-19)
data_files:
- split: train
path: "data/Gutenberg (PG-19)/train/*.arrow"
- split: test
path: "data/Gutenberg (PG-19)/test/*.arrow"
- config_name: HackerNews
data_files:
- split: train
path: "data/HackerNews/train/*.arrow"
- split: test
path: "data/HackerNews/test/*.arrow"
- config_name: NIH ExPorter
data_files:
- split: train
path: "data/NIH ExPorter/train/*.arrow"
- split: test
path: "data/NIH ExPorter/test/*.arrow"
- config_name: OpenSubtitles
data_files:
- split: train
path: "data/OpenSubtitles/train/*.arrow"
- split: test
path: "data/OpenSubtitles/test/*.arrow"
- config_name: OpenWebText2
data_files:
- split: train
path: "data/OpenWebText2/train/*.arrow"
- split: test
path: "data/OpenWebText2/test/*.arrow"
- config_name: PhilPapers
data_files:
- split: train
path: "data/PhilPapers/train/*.arrow"
- split: test
path: "data/PhilPapers/test/*.arrow"
- config_name: Pile-CC
data_files:
- split: train
path: "data/Pile-CC/train/*.arrow"
- split: test
path: "data/Pile-CC/test/*.arrow"
- config_name: PubMed Abstracts
data_files:
- split: train
path: "data/PubMed Abstracts/train/*.arrow"
- split: test
path: "data/PubMed Abstracts/test/*.arrow"
- config_name: PubMed Central
data_files:
- split: train
path: "data/PubMed Central/train/*.arrow"
- split: test
path: "data/PubMed Central/test/*.arrow"
- config_name: StackExchange
data_files:
- split: train
path: "data/StackExchange/train/*.arrow"
- split: test
path: "data/StackExchange/test/*.arrow"
- config_name: UPSTO Backgrounds
data_files:
- split: train
path: "data/UPSTO Backgrounds/train/*.arrow"
- split: test
path: "data/UPSTO Backgrounds/test/*.arrow"
- config_name: Ubuntu IRC
data_files:
- split: train
path: "data/Ubuntu IRC/train/*.arrow"
- split: test
path: "data/Ubuntu IRC/test/*.arrow"
- config_name: Wikipedia (en)
data_files:
- split: train
path: "data/Wikipedia (en)/train/*.arrow"
- split: test
path: "data/Wikipedia (en)/test/*.arrow"
- config_name: YoutubeSubtitles
data_files:
- split: train
path: "data/YoutubeSubtitles/train/*.arrow"
- split: test
path: "data/YoutubeSubtitles/test/*.arrow"
---
# Dataset description
[The Pile](https://arxiv.org/abs/2101.00027) is an 800GB dataset of English text
designed by EleutherAI to train large-scale language models. The original version of
the dataset can be found [here](https://huggingface.co/datasets/EleutherAI/pile).
The dataset is divided into 22 smaller high-quality datasets. For more information on
each of them, please refer to [the datasheet for the Pile](https://arxiv.org/abs/2201.07311).
However, the current version of the dataset, available on the Hub, is not split accordingly.
We addressed this in order to improve the user experience when working with the Pile via the Hub.
Here is an instance of the Pile:
```
{
'meta': {'pile_set_name': 'Pile-CC'},
'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```
We used the `meta` column to properly divide the dataset into subsets. Each instance `example` belongs to the subset
`domain`, where `domain = example['meta']['pile_set_name']`. By doing this, we were able to create a [new version of the Pile](https://huggingface.co/datasets/ArmelR/sharded-pile)
that is properly divided, each instance having a new column `domain`.
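The grouping step described above can be sketched in plain Python (a minimal illustration over toy instances, not the actual processing script):

```python
from collections import defaultdict

# Group Pile instances by meta['pile_set_name'] and expose the subset name
# as a new `domain` column, mirroring the description above.
examples = [
    {"meta": {"pile_set_name": "Pile-CC"}, "text": "It is done, and submitted."},
    {"meta": {"pile_set_name": "ArXiv"}, "text": "We prove the following lemma."},
    {"meta": {"pile_set_name": "Pile-CC"}, "text": "Another web page."},
]

subsets = defaultdict(list)
for example in examples:
    domain = example["meta"]["pile_set_name"]
    subsets[domain].append({**example, "domain": domain})

print(sorted(subsets), len(subsets["Pile-CC"]))
```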
We further split each subset into train/test (97%/3%) to build the current dataset, which has the following structure:
```
data
ArXiv
train
test
BookCorpus2
train
test
Books3
train
test
```
# Usage
```python
from datasets import load_dataset
dataset = load_dataset(
"ArmelR/the-pile-splitted",
subset_of_interest,
num_proc=8
)
```
Using `subset_of_interest = "default"` will load the whole dataset.
|
dominguesm/CC-MAIN-2023-23 | dominguesm | "2023-09-17T00:02:06Z" | 11,727 | 3 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:pt",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2023-09-16T20:32:49Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: url
dtype: string
- name: crawl_timestamp
dtype: timestamp[ns, tz=UTC]
splits:
- name: train
num_bytes: 97584560119
num_examples: 16899389
download_size: 18490153155
dataset_size: 97584560119
license: cc-by-4.0
task_categories:
- text-generation
- fill-mask
language:
- pt
pretty_name: CC-MAIN-2023-23-PT
size_categories:
- 10B<n<100B
---
# Dataset Card for "CC-MAIN-2023-23"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wecover/OPUS_Tatoeba | wecover | "2024-02-03T10:13:01Z" | 11,701 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-31T07:16:25Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: '*/*/train.parquet'
- split: valid
path: '*/*/valid.parquet'
- config_name: af
data_files:
- split: train
path: '*/*af*/train.parquet'
- split: valid
path: '*/*af*/valid.parquet'
- config_name: ar
data_files:
- split: train
path: '*/*ar*/train.parquet'
- split: valid
path: '*/*ar*/valid.parquet'
- config_name: ca
data_files:
- split: train
path: '*/*ca*/train.parquet'
- split: valid
path: '*/*ca*/valid.parquet'
- config_name: cs
data_files:
- split: train
path: '*/*cs*/train.parquet'
- split: valid
path: '*/*cs*/valid.parquet'
- config_name: de
data_files:
- split: train
path: '*/*de*/train.parquet'
- split: valid
path: '*/*de*/valid.parquet'
- config_name: en
data_files:
- split: train
path: '*/*en*/train.parquet'
- split: valid
path: '*/*en*/valid.parquet'
- config_name: eo
data_files:
- split: train
path: '*/*eo*/train.parquet'
- split: valid
path: '*/*eo*/valid.parquet'
- config_name: es
data_files:
- split: train
path: '*/*es*/train.parquet'
- split: valid
path: '*/*es*/valid.parquet'
- config_name: fi
data_files:
- split: train
path: '*/*fi*/train.parquet'
- split: valid
path: '*/*fi*/valid.parquet'
- config_name: fr
data_files:
- split: train
path: '*/*fr*/train.parquet'
- split: valid
path: '*/*fr*/valid.parquet'
- config_name: ga
data_files:
- split: train
path: '*/*ga*/train.parquet'
- split: valid
path: '*/*ga*/valid.parquet'
- config_name: it
data_files:
- split: train
path: '*/*it*/train.parquet'
- split: valid
path: '*/*it*/valid.parquet'
- config_name: ja
data_files:
- split: train
path: '*/*ja*/train.parquet'
- split: valid
path: '*/*ja*/valid.parquet'
- config_name: la
data_files:
- split: train
path: '*/*la*/train.parquet'
- split: valid
path: '*/*la*/valid.parquet'
- config_name: nl
data_files:
- split: train
path: '*/*nl*/train.parquet'
- split: valid
path: '*/*nl*/valid.parquet'
- config_name: pl
data_files:
- split: train
path: '*/*pl*/train.parquet'
- split: valid
path: '*/*pl*/valid.parquet'
- config_name: pt
data_files:
- split: train
path: '*/*pt*/train.parquet'
- split: valid
path: '*/*pt*/valid.parquet'
- config_name: ro
data_files:
- split: train
path: '*/*ro*/train.parquet'
- split: valid
path: '*/*ro*/valid.parquet'
- config_name: ru
data_files:
- split: train
path: '*/*ru*/train.parquet'
- split: valid
path: '*/*ru*/valid.parquet'
- config_name: sv
data_files:
- split: train
path: '*/*sv*/train.parquet'
- split: valid
path: '*/*sv*/valid.parquet'
- config_name: tr
data_files:
- split: train
path: '*/*tr*/train.parquet'
- split: valid
path: '*/*tr*/valid.parquet'
- config_name: uk
data_files:
- split: train
path: '*/*uk*/train.parquet'
- split: valid
path: '*/*uk*/valid.parquet'
- config_name: xh
data_files:
- split: train
path: '*/*xh*/train.parquet'
- split: valid
path: '*/*xh*/valid.parquet'
- config_name: yi
data_files:
- split: train
path: '*/*yi*/train.parquet'
- split: valid
path: '*/*yi*/valid.parquet'
- config_name: am
data_files:
- split: train
path: '*/*am*/train.parquet'
- split: valid
path: '*/*am*/valid.parquet'
- config_name: bg
data_files:
- split: train
path: '*/*bg*/train.parquet'
- split: valid
path: '*/*bg*/valid.parquet'
- config_name: da
data_files:
- split: train
path: '*/*da*/train.parquet'
- split: valid
path: '*/*da*/valid.parquet'
- config_name: el
data_files:
- split: train
path: '*/*el*/train.parquet'
- split: valid
path: '*/*el*/valid.parquet'
- config_name: he
data_files:
- split: train
path: '*/*he*/train.parquet'
- split: valid
path: '*/*he*/valid.parquet'
- config_name: hu
data_files:
- split: train
path: '*/*hu*/train.parquet'
- split: valid
path: '*/*hu*/valid.parquet'
- config_name: ko
data_files:
- split: train
path: '*/*ko*/train.parquet'
- split: valid
path: '*/*ko*/valid.parquet'
- config_name: ku
data_files:
- split: train
path: '*/*ku*/train.parquet'
- split: valid
path: '*/*ku*/valid.parquet'
- config_name: lt
data_files:
- split: train
path: '*/*lt*/train.parquet'
- split: valid
path: '*/*lt*/valid.parquet'
- config_name: mk
data_files:
- split: train
path: '*/*mk*/train.parquet'
- split: valid
path: '*/*mk*/valid.parquet'
- config_name: ug
data_files:
- split: train
path: '*/*ug*/train.parquet'
- split: valid
path: '*/*ug*/valid.parquet'
- config_name: ur
data_files:
- split: train
path: '*/*ur*/train.parquet'
- split: valid
path: '*/*ur*/valid.parquet'
- config_name: as
data_files:
- split: train
path: '*/*as*/train.parquet'
- split: valid
path: '*/*as*/valid.parquet'
- config_name: bn
data_files:
- split: train
path: '*/*bn*/train.parquet'
- split: valid
path: '*/*bn*/valid.parquet'
- config_name: hi
data_files:
- split: train
path: '*/*hi*/train.parquet'
- split: valid
path: '*/*hi*/valid.parquet'
- config_name: az
data_files:
- split: train
path: '*/*az*/train.parquet'
- split: valid
path: '*/*az*/valid.parquet'
- config_name: kk
data_files:
- split: train
path: '*/*kk*/train.parquet'
- split: valid
path: '*/*kk*/valid.parquet'
- config_name: be
data_files:
- split: train
path: '*/*be*/train.parquet'
- split: valid
path: '*/*be*/valid.parquet'
- config_name: et
data_files:
- split: train
path: '*/*et*/train.parquet'
- split: valid
path: '*/*et*/valid.parquet'
- config_name: sl
data_files:
- split: train
path: '*/*sl*/train.parquet'
- split: valid
path: '*/*sl*/valid.parquet'
- config_name: sr
data_files:
- split: train
path: '*/*sr*/train.parquet'
- split: valid
path: '*/*sr*/valid.parquet'
- config_name: vi
data_files:
- split: train
path: '*/*vi*/train.parquet'
- split: valid
path: '*/*vi*/valid.parquet'
- config_name: id
data_files:
- split: train
path: '*/*id*/train.parquet'
- split: valid
path: '*/*id*/valid.parquet'
- config_name: br
data_files:
- split: train
path: '*/*br*/train.parquet'
- split: valid
path: '*/*br*/valid.parquet'
- config_name: bs
data_files:
- split: train
path: '*/*bs*/train.parquet'
- split: valid
path: '*/*bs*/valid.parquet'
- config_name: hr
data_files:
- split: train
path: '*/*hr*/train.parquet'
- split: valid
path: '*/*hr*/valid.parquet'
- config_name: gl
data_files:
- split: train
path: '*/*gl*/train.parquet'
- split: valid
path: '*/*gl*/valid.parquet'
- config_name: fy
data_files:
- split: train
path: '*/*fy*/train.parquet'
- split: valid
path: '*/*fy*/valid.parquet'
- config_name: ka
data_files:
- split: train
path: '*/*ka*/train.parquet'
- split: valid
path: '*/*ka*/valid.parquet'
- config_name: tl
data_files:
- split: train
path: '*/*tl*/train.parquet'
- split: valid
path: '*/*tl*/valid.parquet'
- config_name: cy
data_files:
- split: train
path: '*/*cy*/train.parquet'
- split: valid
path: '*/*cy*/valid.parquet'
- config_name: is
data_files:
- split: train
path: '*/*is*/train.parquet'
- split: valid
path: '*/*is*/valid.parquet'
- config_name: eu
data_files:
- split: train
path: '*/*eu*/train.parquet'
- split: valid
path: '*/*eu*/valid.parquet'
- config_name: gd
data_files:
- split: train
path: '*/*gd*/train.parquet'
- split: valid
path: '*/*gd*/valid.parquet'
- config_name: ha
data_files:
- split: train
path: '*/*ha*/train.parquet'
- split: valid
path: '*/*ha*/valid.parquet'
- config_name: hy
data_files:
- split: train
path: '*/*hy*/train.parquet'
- split: valid
path: '*/*hy*/valid.parquet'
- config_name: km
data_files:
- split: train
path: '*/*km*/train.parquet'
- split: valid
path: '*/*km*/valid.parquet'
- config_name: ky
data_files:
- split: train
path: '*/*ky*/train.parquet'
- split: valid
path: '*/*ky*/valid.parquet'
- config_name: mn
data_files:
- split: train
path: '*/*mn*/train.parquet'
- split: valid
path: '*/*mn*/valid.parquet'
- config_name: mr
data_files:
- split: train
path: '*/*mr*/train.parquet'
- split: valid
path: '*/*mr*/valid.parquet'
- config_name: my
data_files:
- split: train
path: '*/*my*/train.parquet'
- split: valid
path: '*/*my*/valid.parquet'
- config_name: th
data_files:
- split: train
path: '*/*th*/train.parquet'
- split: valid
path: '*/*th*/valid.parquet'
- config_name: uz
data_files:
- split: train
path: '*/*uz*/train.parquet'
- split: valid
path: '*/*uz*/valid.parquet'
- config_name: jv
data_files:
- split: train
path: '*/*jv*/train.parquet'
- split: valid
path: '*/*jv*/valid.parquet'
- config_name: kn
data_files:
- split: train
path: '*/*kn*/train.parquet'
- split: valid
path: '*/*kn*/valid.parquet'
- config_name: lo
data_files:
- split: train
path: '*/*lo*/train.parquet'
- split: valid
path: '*/*lo*/valid.parquet'
- config_name: mg
data_files:
- split: train
path: '*/*mg*/train.parquet'
- split: valid
path: '*/*mg*/valid.parquet'
- config_name: ml
data_files:
- split: train
path: '*/*ml*/train.parquet'
- split: valid
path: '*/*ml*/valid.parquet'
- config_name: or
data_files:
- split: train
path: '*/*or*/train.parquet'
- split: valid
path: '*/*or*/valid.parquet'
- config_name: pa
data_files:
- split: train
path: '*/*pa*/train.parquet'
- split: valid
path: '*/*pa*/valid.parquet'
- config_name: ps
data_files:
- split: train
path: '*/*ps*/train.parquet'
- split: valid
path: '*/*ps*/valid.parquet'
- config_name: sa
data_files:
- split: train
path: '*/*sa*/train.parquet'
- split: valid
path: '*/*sa*/valid.parquet'
- config_name: sd
data_files:
- split: train
path: '*/*sd*/train.parquet'
- config_name: si
data_files:
- split: train
path: '*/*si*/train.parquet'
- split: valid
path: '*/*si*/valid.parquet'
- config_name: so
data_files:
- split: train
path: '*/*so*/train.parquet'
- split: valid
path: '*/*so*/valid.parquet'
- config_name: sq
data_files:
- split: train
path: '*/*sq*/train.parquet'
- split: valid
path: '*/*sq*/valid.parquet'
- config_name: su
data_files:
- split: train
path: '*/*su*/train.parquet'
- split: valid
path: '*/*su*/valid.parquet'
- config_name: ta
data_files:
- split: train
path: '*/*ta*/train.parquet'
- split: valid
path: '*/*ta*/valid.parquet'
- config_name: te
data_files:
- split: train
path: '*/*te*/train.parquet'
- split: valid
path: '*/*te*/valid.parquet'
---
|
picollect/danbooru | picollect | "2024-11-15T02:46:27Z" | 11,699 | 4 | [
"language:en",
"license:other",
"size_categories:10M<n<100M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us",
"danbooru",
"anime"
] | null | "2024-11-06T07:12:33Z" | ---
license: other
language:
- en
tags:
- danbooru
- anime
pretty_name: Danbooru 2024 Dataset
size_categories:
- 1M<n<10M
---
# Danbooru 2024 Dataset
# Danbooru 2024 数据集
A collection of images from the Danbooru website, organized and packaged by ID sequence. This dataset is for research and learning purposes only.
本数据集收集了来自 Danbooru 网站的图像,按 ID 顺序组织打包。该数据集仅用于研究和学习目的。
## Dataset Description
## 数据集描述
This dataset contains image resources from the Danbooru website, updated to ID 8380648 (update time: 2024-11-03).
本数据集包含来自 Danbooru 网站的图像资源,更新至 ID 8380648(更新时间:2024-11-03)。
### Data Organization
### 数据组织
- Images are packaged into compressed files, 1000 images per archive
- File naming format: `{start_id}.tar`
- Example: `2000.tar` contains images with IDs from 2000 to 2999
- 图像打包为压缩文件,每个存档包含 1000 张图像
- 文件命名格式:`{start_id}.tar`
- 示例:`2000.tar` 包含 ID 从 2000 到 2999 的图像
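The naming scheme above maps any image ID to its archive with simple arithmetic; a minimal sketch (the helper name is ours, not part of the dataset):

```python
# Each tar holds 1000 consecutive IDs and is named after the first ID it covers.

def tar_for_id(image_id: int) -> str:
    start = (image_id // 1000) * 1000
    return f"{start}.tar"

print(tar_for_id(2999), tar_for_id(3000))  # both sides of an archive boundary
```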
### Technical Details
### 技术细节
- Image Format: Original format
- File Organization: Sequential TAR packaging
- ID Range: 1 ~ 8380648
- 图像格式:原始格式
- 文件组织:顺序 TAR 打包
- ID 范围:1 ~ 8380648
## Usage Instructions
## 使用说明
1. Images within each archive are named by their IDs
2. Metadata can be queried from the Danbooru database using the corresponding IDs
1. 存档中的图像以其 ID 命名
2. 可使用相应的 ID 从 Danbooru 数据库查询元数据
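To illustrate the layout, here is a hedged sketch that builds a tiny in-memory archive with ID-named members and lists them back; the `.jpg` extension is an assumption, since the dataset keeps images in their original formats:

```python
import io
import tarfile

# Build a small in-memory tar whose members are named by image ID,
# then read the member names back, as you would from e.g. 2000.tar.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for image_id in (2000, 2001):
        data = b"fake-image-bytes"
        info = tarfile.TarInfo(name=f"{image_id}.jpg")  # extension is an assumption
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    names = tar.getnames()
print(names)
```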
## License
## 许可证
This dataset is released under the following terms:
本数据集在以下条款下发布:
1. Academic and Research Use
学术和研究使用
- This dataset may only be used for academic research, learning, and non-commercial purposes
- 本数据集仅可用于学术研究、学习和非商业目的
2. Restrictions
限制条款
- Commercial use is strictly prohibited
- Redistribution or resale of the dataset is not permitted
- Any derivative works must be shared under the same terms
- 严格禁止商业使用
- 不允许重新分发或转售数据集
- 任何衍生作品必须在相同条款下共享
3. Attribution
署名要求
- Users must cite this dataset when used in research or publications
- Any derivative works must acknowledge the original source
- 在研究或出版物中使用时必须引用本数据集
- 任何衍生作品必须注明原始来源
4. Disclaimer
免责声明
- The dataset is provided "as is" without any warranty
- The creators are not liable for any damages or losses arising from its use
- Users are solely responsible for ensuring compliance with local laws and regulations
- 数据集按"原样"提供,不提供任何保证
- 创建者不对使用过程中产生的任何损害或损失负责
- 用户需自行负责确保符合当地法律法规
5. Termination
终止条款
- This license automatically terminates if you violate any of these terms
- Upon termination, you must cease all use of the dataset
- 如果违反任何这些条款,本许可证将自动终止
- 终止后,您必须停止使用本数据集
By using this dataset, you agree to be bound by these terms.
使用本数据集即表示您同意受这些条款的约束。
## Important Notes
## 重要提示
- Ensure legal compliance when using the dataset
- Review relevant data usage policies and guidelines before use
- Consult legal professionals if you have questions about usage rights
- 使用数据集时确保遵守法律
- 使用前请查看相关数据使用政策和指南
- 如对使用权有疑问,请咨询法律专业人士
---
**Notice:** Users must strictly comply with local laws and regulations when using this dataset. Users bear full responsibility for any issues arising from improper use.
**注意:** 用户在使用本数据集时必须严格遵守当地法律法规。用户对因不当使用而产生的任何问题承担全部责任。 |
lmms-lab/DocVQA | lmms-lab | "2024-04-18T05:14:35Z" | 11,652 | 35 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2007.00398",
"region:us"
] | null | "2024-01-22T16:29:32Z" | ---
license: apache-2.0
dataset_info:
- config_name: DocVQA
features:
- name: questionId
dtype: string
- name: question
dtype: string
- name: question_types
sequence: string
- name: image
dtype: image
- name: docId
dtype: int64
- name: ucsf_document_id
dtype: string
- name: ucsf_document_page_no
dtype: string
- name: answers
sequence: string
- name: data_split
dtype: string
splits:
# - name: train
# num_bytes: 5659006943.631
# num_examples: 39463
- name: validation
num_bytes: 2532447207.066
num_examples: 5349
- name: test
num_bytes: 2500408525.732
num_examples: 5188
download_size: 9555791945
dataset_size: 10691862676.428999
- config_name: InfographicVQA
features:
- name: questionId
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: answer_type
sequence: string
- name: image
dtype: image
- name: image_url
dtype: string
- name: operation/reasoning
sequence: string
- name: ocr
dtype: string
- name: data_split
dtype: string
splits:
# - name: train
# num_bytes: 11559694546.32
# num_examples: 23946
- name: validation
num_bytes: 1863177404.253
num_examples: 2801
- name: test
num_bytes: 1851304047.712
num_examples: 3288
download_size: 2544892079
dataset_size: 15274175998.285
configs:
- config_name: DocVQA
data_files:
# - split: train
# path: DocVQA/train-*
- split: validation
path: DocVQA/validation-*
- split: test
path: DocVQA/test-*
- config_name: InfographicVQA
data_files:
# - split: train
# path: InfographicVQA/train-*
- split: validation
path: InfographicVQA/validation-*
- split: test
path: InfographicVQA/test-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [DocVQA](https://arxiv.org/abs/2007.00398). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
```
@article{mathew2020docvqa,
title={DocVQA: A Dataset for VQA on Document Images. CoRR abs/2007.00398 (2020)},
author={Mathew, Minesh and Karatzas, Dimosthenis and Manmatha, R and Jawahar, CV},
journal={arXiv preprint arXiv:2007.00398},
year={2020}
}
```
|
M-A-D/Mixed-Arabic-Datasets-Repo | M-A-D | "2023-10-16T21:25:35Z" | 11,646 | 32 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:translation",
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:fill-mask",
"language:ar",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"question-answering",
"translation",
"summarization",
"conversational",
"text-generation",
"text2text-generation",
"fill-mask"
] | "2023-08-27T01:19:21Z" | ---
language:
- ar
size_categories:
- 1B<n<10B
task_categories:
- text-classification
- question-answering
- translation
- summarization
- conversational
- text-generation
- text2text-generation
- fill-mask
pretty_name: Mixed Arabic Datasets (MAD) Corpus
dataset_info:
- config_name: Ara--Ali-C137--Hindawi-Books-dataset
features:
- name: BookLink
dtype: string
- name: BookName
dtype: string
- name: AuthorName
dtype: string
- name: AboutBook
dtype: string
- name: ChapterLink
dtype: string
- name: ChapterName
dtype: string
- name: ChapterText
dtype: string
- name: AboutAuthor
dtype: string
splits:
- name: train
num_bytes: 1364854259
num_examples: 49821
download_size: 494678002
dataset_size: 1364854259
- config_name: Ara--Goud--Goud-sum
features:
- name: article
dtype: string
- name: headline
dtype: string
- name: categories
dtype: string
splits:
- name: train
num_bytes: 288296544
num_examples: 139288
download_size: 147735776
dataset_size: 288296544
- config_name: Ara--J-Mourad--MNAD.v1
features:
- name: Title
dtype: string
- name: Body
dtype: string
- name: Category
dtype: string
splits:
- name: train
num_bytes: 1101921980
num_examples: 418563
download_size: 527154122
dataset_size: 1101921980
- config_name: Ara--JihadZa--IADD
features:
- name: Sentence
dtype: string
- name: Region
dtype: string
- name: DataSource
dtype: string
- name: Country
dtype: string
splits:
- name: train
num_bytes: 19167070
num_examples: 135804
download_size: 8644491
dataset_size: 19167070
- config_name: Ara--LeMGarouani--MAC-corpus
features:
- name: tweets
dtype: string
- name: type
dtype: string
- name: class
dtype: string
splits:
- name: train
num_bytes: 1945646
num_examples: 18087
download_size: 866198
dataset_size: 1945646
- config_name: Ara--MBZUAI--Bactrian-X
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: id
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 66093524
num_examples: 67017
download_size: 33063779
dataset_size: 66093524
- config_name: Ara--OpenAssistant--oasst1
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int32
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
dtype: 'null'
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 58168
num_examples: 56
download_size: 30984
dataset_size: 58168
- config_name: Ara--Wikipedia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3052201469
num_examples: 1205403
download_size: 1316212231
dataset_size: 3052201469
- config_name: Ara--bigscience--xP3
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4727881680
num_examples: 2148955
download_size: 2805060725
dataset_size: 4727881680
- config_name: Ara--cardiffnlp--tweet_sentiment_multilingual
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 306108
num_examples: 1839
- name: validation
num_bytes: 53276
num_examples: 324
- name: test
num_bytes: 141536
num_examples: 870
download_size: 279900
dataset_size: 500920
- config_name: Ara--miracl--miracl
features:
- name: query_id
dtype: string
- name: query
dtype: string
- name: positive_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: negative_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 32012083
num_examples: 3495
download_size: 15798509
dataset_size: 32012083
- config_name: Ara--mustapha--QuranExe
features:
- name: text
dtype: string
- name: resource_name
dtype: string
- name: verses_keys
dtype: string
splits:
- name: train
num_bytes: 133108687
num_examples: 49888
download_size: 58769417
dataset_size: 133108687
- config_name: Ara--pain--Arabic-Tweets
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 41639770853
num_examples: 202700438
download_size: 22561651700
dataset_size: 41639770853
- config_name: Ara--saudinewsnet
features:
- name: source
dtype: string
- name: url
dtype: string
- name: date_extracted
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 103654009
num_examples: 31030
download_size: 49117164
dataset_size: 103654009
- config_name: Ary--AbderrahmanSkiredj1--Darija-Wikipedia
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8104410
num_examples: 4862
download_size: 3229966
dataset_size: 8104410
- config_name: Ary--Ali-C137--Darija-Stories-Dataset
features:
- name: ChapterName
dtype: string
- name: ChapterLink
dtype: string
- name: Author
dtype: string
- name: Text
dtype: string
- name: Tags
dtype: int64
splits:
- name: train
num_bytes: 476926644
num_examples: 6142
download_size: 241528641
dataset_size: 476926644
- config_name: Ary--Wikipedia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10007364
num_examples: 6703
download_size: 4094377
dataset_size: 10007364
- config_name: Arz--Wikipedia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1364641408
num_examples: 1617770
download_size: 306420318
dataset_size: 1364641408
configs:
- config_name: Ara--Ali-C137--Hindawi-Books-dataset
data_files:
- split: train
path: Ara--Ali-C137--Hindawi-Books-dataset/train-*
- config_name: Ara--Goud--Goud-sum
data_files:
- split: train
path: Ara--Goud--Goud-sum/train-*
- config_name: Ara--J-Mourad--MNAD.v1
data_files:
- split: train
path: Ara--J-Mourad--MNAD.v1/train-*
- config_name: Ara--JihadZa--IADD
data_files:
- split: train
path: Ara--JihadZa--IADD/train-*
- config_name: Ara--LeMGarouani--MAC-corpus
data_files:
- split: train
path: Ara--LeMGarouani--MAC-corpus/train-*
- config_name: Ara--MBZUAI--Bactrian-X
data_files:
- split: train
path: Ara--MBZUAI--Bactrian-X/train-*
- config_name: Ara--OpenAssistant--oasst1
data_files:
- split: train
path: Ara--OpenAssistant--oasst1/train-*
- config_name: Ara--Wikipedia
data_files:
- split: train
path: Ara--Wikipedia/train-*
- config_name: Ara--bigscience--xP3
data_files:
- split: train
path: Ara--bigscience--xP3/train-*
- config_name: Ara--cardiffnlp--tweet_sentiment_multilingual
data_files:
- split: train
path: Ara--cardiffnlp--tweet_sentiment_multilingual/train-*
- split: validation
path: Ara--cardiffnlp--tweet_sentiment_multilingual/validation-*
- split: test
path: Ara--cardiffnlp--tweet_sentiment_multilingual/test-*
- config_name: Ara--miracl--miracl
data_files:
- split: train
path: Ara--miracl--miracl/train-*
- config_name: Ara--mustapha--QuranExe
data_files:
- split: train
path: Ara--mustapha--QuranExe/train-*
- config_name: Ara--pain--Arabic-Tweets
data_files:
- split: train
path: Ara--pain--Arabic-Tweets/train-*
- config_name: Ara--saudinewsnet
data_files:
- split: train
path: Ara--saudinewsnet/train-*
- config_name: Ary--AbderrahmanSkiredj1--Darija-Wikipedia
data_files:
- split: train
path: Ary--AbderrahmanSkiredj1--Darija-Wikipedia/train-*
- config_name: Ary--Ali-C137--Darija-Stories-Dataset
data_files:
- split: train
path: Ary--Ali-C137--Darija-Stories-Dataset/train-*
- config_name: Ary--Wikipedia
data_files:
- split: train
path: Ary--Wikipedia/train-*
- config_name: Arz--Wikipedia
data_files:
- split: train
path: Arz--Wikipedia/train-*
---
# Dataset Card for "Mixed Arabic Datasets (MAD) Corpus"
**The Mixed Arabic Datasets Corpus : A Community-Driven Collection of Diverse Arabic Texts**
## Dataset Description
The Mixed Arabic Datasets (MAD) corpus is a dynamic compilation of diverse Arabic texts sourced from various online platforms and datasets. It addresses a critical challenge faced by researchers, linguists, and language enthusiasts: the fragmentation of Arabic language datasets across the Internet. MAD centralizes these dispersed resources into a single, comprehensive repository.
Encompassing a wide spectrum of content, ranging from social media conversations to literary masterpieces, MAD captures the rich tapestry of Arabic communication, including both standard Arabic and regional dialects.
This corpus offers comprehensive insights into the linguistic diversity and cultural nuances of Arabic expression.
## Usage
To use this dataset, pick one of the available configs:
`Ara--MBZUAI--Bactrian-X` | `Ara--OpenAssistant--oasst1` | `Ary--AbderrahmanSkiredj1--Darija-Wikipedia`
`Ara--Wikipedia` | `Ary--Wikipedia` | `Arz--Wikipedia`
`Ary--Ali-C137--Darija-Stories-Dataset` | `Ara--Ali-C137--Hindawi-Books-dataset`
Example of usage:
```python
from datasets import load_dataset

dataset = load_dataset('M-A-D/Mixed-Arabic-Datasets-Repo', 'Ara--MBZUAI--Bactrian-X')
```
If you load multiple configurations and want to merge them, you can simply leverage `concatenate_datasets()` from `datasets`:
```python
from datasets import concatenate_datasets

dataset3 = concatenate_datasets([dataset1['train'], dataset2['train']])
```
Note: process the datasets before merging to make sure the merged dataset has a consistent schema.
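As a minimal illustration of that consistency step (a plain-Python sketch independent of the `datasets` library; the toy records below only mimic the config schemas listed in this card's metadata), rows from different configs can be projected onto a shared schema before merging:

```python
def normalize(record, text_field, source):
    """Project a config-specific record onto a common {text, source} schema."""
    return {"text": record[text_field], "source": source}

# Toy records mimicking two config schemas (Ara--Wikipedia and MAC corpus).
wiki_rows = [{"id": "1", "url": "http://example.com", "title": "t", "text": "نص"}]
tweet_rows = [{"tweets": "تغريدة", "type": "tw", "class": "pos"}]

merged = (
    [normalize(r, "text", "Ara--Wikipedia") for r in wiki_rows]
    + [normalize(r, "tweets", "Ara--LeMGarouani--MAC-corpus") for r in tweet_rows]
)
```

With real configs, the same mapping can be applied with `dataset.map(...)` on each config before calling `concatenate_datasets()`.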
## Dataset Size
The Mixed Arabic Datasets (MAD) is a dynamic and evolving collection, with its size fluctuating as new datasets are added or removed. As MAD continuously expands, it becomes a living resource that adapts to the ever-changing landscape of Arabic language datasets.
**Dataset List**
MAD draws from a diverse array of sources, each contributing to its richness and breadth. The collection is constantly evolving; datasets already integrated are marked with ✔, and the remaining ones are slated to join MAD in the near future:
- [✔] OpenAssistant/oasst1 (ar portion) : [Dataset Link](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [✔] MBZUAI/Bactrian-X (ar portion) : [Dataset Link](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ar/train)
- [✔] AbderrahmanSkiredj1/Darija-Wikipedia : [Dataset Link](https://huggingface.co/datasets/AbderrahmanSkiredj1/moroccan_darija_wikipedia_dataset)
- [✔] Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Moroccan Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Egyptian Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Darija Stories Dataset : [Dataset Link](https://huggingface.co/datasets/Ali-C137/Darija-Stories-Dataset)
- [✔] Hindawi Books Dataset : [Dataset Link](https://huggingface.co/datasets/Ali-C137/Hindawi-Books-dataset)
- [ ] uonlp/CulturaX - ar : [Dataset Link](https://huggingface.co/datasets/uonlp/CulturaX/viewer/ar/train)
- [✔] Pain/ArabicTweets : [Dataset Link](https://huggingface.co/datasets/pain/Arabic-Tweets)
- [ ] Abu-El-Khair Corpus : [Dataset Link](https://huggingface.co/datasets/arabic_billion_words)
- [✔] QuranExe : [Dataset Link](https://huggingface.co/datasets/mustapha/QuranExe)
- [✔] MNAD : [Dataset Link](https://huggingface.co/datasets/J-Mourad/MNAD.v1)
- [✔] IADD : [Dataset Link](https://raw.githubusercontent.com/JihadZa/IADD/main/IADD.json)
- [ ] OSIAN : [Dataset Link](https://wortschatz.uni-leipzig.de/en/download/Arabic#ara-tn_newscrawl-OSIAN_2018)
- [✔] MAC corpus : [Dataset Link](https://raw.githubusercontent.com/LeMGarouani/MAC/main/MAC%20corpus.csv)
- [✔] Goud.ma-Sum : [Dataset Link](https://huggingface.co/datasets/Goud/Goud-sum)
- [✔] SaudiNewsNet : [Dataset Link](https://huggingface.co/datasets/saudinewsnet)
- [✔] Miracl : [Dataset Link](https://huggingface.co/datasets/miracl/miracl)
- [✔] CardiffNLP/TweetSentimentMulti : [Dataset Link](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual)
- [ ] OSCAR-2301 : [Dataset Link](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301/viewer/ar/train)
- [ ] mc4 : [Dataset Link](https://huggingface.co/datasets/mc4/viewer/ar/train)
- [✔] bigscience/xP3 : [Dataset Link](https://huggingface.co/datasets/bigscience/xP3/viewer/ar/train)
- [ ] Muennighoff/xP3x : [Dataset Link](https://huggingface.co/datasets/Muennighoff/xP3x)
- [ ] Ai_Society : [Dataset Link](https://huggingface.co/datasets/camel-ai/ai_society_translated)
## Potential Use Cases
The Mixed Arabic Datasets (MAD) holds the potential to catalyze a multitude of groundbreaking applications:
- **Linguistic Analysis:** Employ MAD to conduct in-depth linguistic studies, exploring dialectal variances, language evolution, and grammatical structures.
- **Topic Modeling:** Dive into diverse themes and subjects through the extensive collection, revealing insights into emerging trends and prevalent topics.
- **Sentiment Understanding:** Decode sentiments spanning Arabic dialects, revealing cultural nuances and emotional dynamics.
- **Sociocultural Research:** Embark on a sociolinguistic journey, unraveling the intricate connection between language, culture, and societal shifts.
## Dataset Access
MAD's access mechanism is unique: while it doesn't carry a general license itself, each constituent dataset within the corpus retains its individual license. By accessing the dataset details through the provided links in the "Dataset List" section above, users can understand the specific licensing terms for each dataset.
### Join Us on Discord
For discussions, contributions, and community interactions, join us on Discord! [](https://discord.gg/2NpJ9JGm)
### How to Contribute
Want to contribute to the Mixed Arabic Datasets project? Follow our comprehensive guide on Google Colab for step-by-step instructions: [Contribution Guide](https://colab.research.google.com/drive/1kOIRoicgCOV8TPvASAI_2uMY7rpXnqzJ?usp=sharing).
**Note**: If you'd like to test a contribution before submitting it, feel free to do so on the [MAD Test Dataset](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Dataset-test).
## Citation
```
@dataset{mad_corpus_2023,
title = {Mixed Arabic Datasets (MAD)},
author = {MAD Community},
howpublished = {Dataset},
url = {https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo},
year = {2023},
}
``` |
bigscience/xP3all | bigscience | "2023-05-30T15:51:40Z" | 11,639 | 28 | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"language:gu",
"language:hi",
"language:id",
"language:ig",
"language:ki",
"language:kn",
"language:lg",
"language:ln",
"language:ml",
"language:mr",
"language:ne",
"language:nso",
"language:ny",
"language:or",
"language:pa",
"language:pt",
"language:rn",
"language:rw",
"language:sn",
"language:st",
"language:sw",
"language:ta",
"language:te",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:ur",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:apache-2.0",
"size_categories:10M<n<100M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2211.01786",
"region:us"
] | [
"other"
] | "2022-07-30T21:05:02Z" | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
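One common way to use such pairs with a decoder-only model is to concatenate `inputs` and `targets` into a single training sequence; a minimal sketch (the helper name is ours, not part of the dataset or the official training pipeline):

```python
def to_causal_lm_text(example, sep=" "):
    """Concatenate the prompt and its target into one training string."""
    return example["inputs"] + sep + example["targets"]

# The example instance shown above, reduced to its question part.
example = {
    "inputs": "Question: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
    "targets": "Yes",
}
text = to_causal_lm_text(example)
```

For encoder-decoder models such as mT0, the two fields are instead kept separate as source and target sequences.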
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.33|
|bm|107056|0.11|265180|0.33|
|ak|108096|0.11|265071|0.33|
|ca|110608|0.11|271191|0.33|
|eu|113008|0.11|281199|0.35|
|fon|113072|0.11|265063|0.33|
|st|114080|0.11|265063|0.33|
|ki|115040|0.12|265180|0.33|
|tum|116032|0.12|265063|0.33|
|wo|122560|0.12|365063|0.45|
|ln|126304|0.13|365060|0.45|
|as|156256|0.16|265063|0.33|
|or|161472|0.16|265063|0.33|
|kn|165456|0.17|265063|0.33|
|ml|175040|0.18|265864|0.33|
|rn|192992|0.19|318189|0.39|
|nso|229712|0.23|915051|1.13|
|tn|235536|0.24|915054|1.13|
|lg|235936|0.24|915021|1.13|
|rw|249360|0.25|915043|1.13|
|ts|250256|0.25|915044|1.13|
|sn|252496|0.25|865056|1.07|
|xh|254672|0.26|915058|1.13|
|zu|263712|0.26|915061|1.13|
|ny|272128|0.27|915063|1.13|
|ig|325232|0.33|950097|1.17|
|yo|352784|0.35|918416|1.13|
|ne|393680|0.39|315754|0.39|
|pa|523248|0.52|339210|0.42|
|gu|560688|0.56|347499|0.43|
|sw|566656|0.57|1130481|1.4|
|mr|666240|0.67|417269|0.52|
|bn|832720|0.83|428843|0.53|
|ta|926912|0.93|415433|0.51|
|te|1343232|1.35|584590|0.72|
|ur|1918272|1.92|855756|1.06|
|vi|3102512|3.11|1672106|2.07|
|code|4330752|4.34|2707724|3.34|
|hi|4403568|4.41|1554667|1.92|
|zh|4599440|4.61|3589234|4.43|
|id|4612256|4.62|2643418|3.27|
|ar|4683456|4.69|2160181|2.67|
|fr|6591120|6.6|5316403|6.57|
|pt|6886800|6.9|3752156|4.63|
|es|8587920|8.6|5413205|6.69|
|en|39252528|39.33|32740750|40.44|
|total|99807184|100.0|80956089|100.0|
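That remark can be checked directly against the table: average bytes per sample differ sharply between a Flores-only language and English (a small sketch using the kilobyte and sample counts copied from the rows above):

```python
# (kilobytes, samples) copied from the table rows above.
sizes = {
    "tw": (106288, 265071),
    "en": (39252528, 32740750),
}

def avg_bytes_per_sample(lang):
    """Average sample size in bytes for a language row of the table."""
    kilobytes, samples = sizes[lang]
    return kilobytes * 1024 / samples

# Single-sentence Flores samples make `tw` several times smaller per sample.
assert avg_bytes_per_sample("tw") < avg_bytes_per_sample("en") / 2
```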
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval)
- Natural Language Inference
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
#### Additional [xP3all](https://huggingface.co/datasets/bigscience/xP3all) datasets
- Coreference Resolution
- [WSC (Fixed)](https://huggingface.co/datasets/super_glue)
- Sentence Completion
- [HellaSwag](https://huggingface.co/datasets/hellaswag)
- Translation
- [MultiEurlex](https://huggingface.co/datasets/multi_eurlex)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
AISE-TUDelft/MSR_Intermediate | AISE-TUDelft | "2025-02-18T16:10:48Z" | 11,630 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-31T11:21:58Z" | ---
dataset_info:
- config_name: ANTLRExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 7557410
num_examples: 541
download_size: 2707259
dataset_size: 7557410
- config_name: AdaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 578367556
num_examples: 35425
download_size: 110673452
dataset_size: 578367556
- config_name: AdaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 578655182
num_examples: 35425
download_size: 111025773
dataset_size: 578655182
- config_name: AgdaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 38226393
num_examples: 5113
download_size: 14182143
dataset_size: 38226393
- config_name: AgdaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 38267937
num_examples: 5113
download_size: 14217347
dataset_size: 38267937
- config_name: AntlrNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 7561706
num_examples: 541
download_size: 2724032
dataset_size: 7561706
- config_name: ApexExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 24569165
num_examples: 7641
download_size: 6353866
dataset_size: 24569165
- config_name: ApexNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
splits:
- name: train
num_bytes: 24631233
num_examples: 7641
download_size: 6368630
dataset_size: 24631233
- config_name: AssemblyExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 2053432940
num_examples: 104901
download_size: 547495918
dataset_size: 2053432940
- config_name: AssemblyNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 2054324591
num_examples: 104901
download_size: 549503862
dataset_size: 2054324591
- config_name: C#Exact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 26661602730
num_examples: 3770829
download_size: 6588906272
dataset_size: 26661602730
- config_name: C#Near
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 26663488268
num_examples: 3770829
download_size: 6603075859
dataset_size: 26663488268
- config_name: CExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 89736233404
num_examples: 4960192
download_size: 28128090840
dataset_size: 89736233404
- config_name: CNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 89738714139
num_examples: 4960192
download_size: 28299862901
dataset_size: 89738714139
- config_name: COBOLExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 6629054
num_examples: 1208
download_size: 1750557
dataset_size: 6629054
- config_name: CPP2Near
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 58160863267
num_examples: 4811620
download_size: 17129813603
dataset_size: 58160863267
- config_name: CPPExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 58160261610
num_examples: 4811620
download_size: 17076690695
dataset_size: 58160261610
- config_name: CPPNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 58162667758
num_examples: 4811620
download_size: 17132623057
dataset_size: 58162667758
- config_name: ClojureExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 1421032074
num_examples: 273181
download_size: 459309399
dataset_size: 1421032074
- config_name: ClojureNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 1421066089
num_examples: 273181
download_size: 460645609
dataset_size: 1421066089
- config_name: CobolNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 6629000
num_examples: 1208
download_size: 1733668
dataset_size: 6629000
- config_name: CommonLispExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 296677534
num_examples: 16968
download_size: 109149148
dataset_size: 296677534
- config_name: CommonLispNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 296679380
num_examples: 16968
download_size: 110407258
dataset_size: 296679380
- config_name: CoqExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 632649581
num_examples: 26175
download_size: 189961246
dataset_size: 632649581
- config_name: CoqNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 632652578
num_examples: 26175
download_size: 190833648
dataset_size: 632652578
- config_name: CrystalExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 31335756
num_examples: 7300
download_size: 10366475
dataset_size: 31335756
- config_name: CrystalNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 31336434
num_examples: 7300
download_size: 10379390
dataset_size: 31336434
- config_name: CudaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 334592336
num_examples: 13359
download_size: 102491703
dataset_size: 334592336
- config_name: CudaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 334593801
num_examples: 13359
download_size: 102875919
dataset_size: 334593801
- config_name: DExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 3255384976
num_examples: 126111
download_size: 1129728566
dataset_size: 3255384976
- config_name: DNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 3255400520
num_examples: 126111
download_size: 1135463467
dataset_size: 3255400520
- config_name: DartExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 2329045207
num_examples: 413203
download_size: 669869628
dataset_size: 2329045207
- config_name: DartNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 2329096793
num_examples: 413203
download_size: 670901970
dataset_size: 2329096793
- config_name: EJSExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 71531393
num_examples: 12884
download_size: 21195866
dataset_size: 71531393
- config_name: EjsNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: near_dups_stackv2
dtype: bool
splits:
- name: train
num_bytes: 71635864
num_examples: 12884
download_size: 21210665
dataset_size: 71635864
- config_name: ElixirExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 841135120
num_examples: 127910
download_size: 298160239
dataset_size: 841135120
- config_name: ElixirNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 841151060
num_examples: 127910
download_size: 298816538
dataset_size: 841151060
- config_name: ElmExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 8383
num_examples: 7
download_size: 27695
dataset_size: 8383
- config_name: ElmNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 8354
num_examples: 7
download_size: 22185
dataset_size: 8354
- config_name: EmacsLispExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 250101478
num_examples: 7963
download_size: 86051810
dataset_size: 250101478
- config_name: EmacsLispNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 250102362
num_examples: 7963
download_size: 86437277
dataset_size: 250102362
- config_name: ErlangExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 372175368
num_examples: 32049
download_size: 110494347
dataset_size: 372175368
- config_name: ErlangNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 372179250
num_examples: 32049
download_size: 110899584
dataset_size: 372179250
- config_name: F#Exact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 192500693
num_examples: 16015
download_size: 47297899
dataset_size: 192500693
- config_name: F#Near
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 192502511
num_examples: 16015
download_size: 47470253
dataset_size: 192502511
- config_name: ForthExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 347106780
num_examples: 7932
download_size: 144504016
dataset_size: 347106780
- config_name: ForthNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 347107612
num_examples: 7932
download_size: 146797290
dataset_size: 347107612
- config_name: FortranExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 2847566
num_examples: 63
download_size: 1054373
dataset_size: 2847566
- config_name: FortranNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 2847315
num_examples: 63
download_size: 1062081
dataset_size: 2847315
- config_name: GoExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 26167202808
num_examples: 2355716
download_size: 8138108314
dataset_size: 26167202808
- config_name: GoNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 26168086245
num_examples: 2355716
download_size: 8174167267
dataset_size: 26168086245
- config_name: GraphQLExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 38263
num_examples: 3
download_size: 36182
dataset_size: 38263
- config_name: GraphQLNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 38254
num_examples: 3
download_size: 32912
dataset_size: 38254
- config_name: GroovyExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 248453777
num_examples: 48353
download_size: 78401132
dataset_size: 248453777
- config_name: GroovyNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 248459647
num_examples: 48353
download_size: 78630814
dataset_size: 248459647
- config_name: HackExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 380628718
num_examples: 37405
download_size: 128232166
dataset_size: 380628718
- config_name: HackNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 380633269
num_examples: 37405
download_size: 128649687
dataset_size: 380633269
- config_name: HaskellExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 689851801
num_examples: 111234
download_size: 236120258
dataset_size: 689851801
- config_name: HaskellNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 689893289
num_examples: 111234
download_size: 236739420
dataset_size: 689893289
- config_name: HaskellNearT
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 689893289
num_examples: 111234
download_size: 236739420
dataset_size: 689893289
- config_name: HaskellTest
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: near_duplicates_redpajama
dtype: bool
splits:
- name: train
num_bytes: 689865477
num_examples: 111234
download_size: 236693079
dataset_size: 689865477
- config_name: HaskellTest2
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: near_duplicates_ghcode
dtype: bool
splits:
- name: train
num_bytes: 689865477
num_examples: 111234
download_size: 236695867
dataset_size: 689865477
- config_name: JavaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 32486202146
num_examples: 5197338
download_size: 8535677041
dataset_size: 32486202146
- config_name: JavaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 32488151167
num_examples: 5197338
download_size: 8542985524
dataset_size: 32488151167
- config_name: JavaNearF
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_duplicates_githubcode
dtype: bool
splits:
- name: train
num_bytes: 32488800842
num_examples: 5197338
download_size: 8543979432
dataset_size: 32488800842
- config_name: JavaScriptExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 88087310969
num_examples: 3393747
download_size: 28914572193
dataset_size: 88087310969
- config_name: JavaScriptNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 88089008184
num_examples: 3393747
download_size: 29083319680
dataset_size: 88089008184
- config_name: JuliaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 967638517
num_examples: 38381
download_size: 246231934
dataset_size: 967638517
- config_name: JuliaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 967652903
num_examples: 38381
download_size: 247077270
dataset_size: 967652903
- config_name: JupyterNotebookExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 11722076020
num_examples: 35313
download_size: 9067703543
dataset_size: 11722076020
- config_name: KotlinExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 3812037093
num_examples: 1045396
download_size: 1110654794
dataset_size: 3812037093
- config_name: KotlinNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 3812167735
num_examples: 1045396
download_size: 1110429592
dataset_size: 3812167735
- config_name: LessExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 95845785
num_examples: 7389
download_size: 26480395
dataset_size: 95845785
- config_name: LessNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 95846529
num_examples: 7389
download_size: 26477572
dataset_size: 95846529
- config_name: LuaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 8353646445
num_examples: 913898
download_size: 2505145950
dataset_size: 8353646445
- config_name: LuaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 8353989182
num_examples: 913898
download_size: 2515603988
dataset_size: 8353989182
- config_name: MathematicaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 7321356594
num_examples: 89853
download_size: 3584669375
dataset_size: 7321356594
- config_name: MathematicaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_duplicates_redpajama
dtype: bool
splits:
- name: train
num_bytes: 7321378962
num_examples: 89853
download_size: 3602914923
dataset_size: 7321378962
- config_name: MatlabExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 6903374516
num_examples: 665659
download_size: 2399794447
dataset_size: 6903374516
- config_name: MatlabNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 6903540783
num_examples: 665659
download_size: 2414346658
dataset_size: 6903540783
- config_name: NetLogoExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 41827450
num_examples: 863
download_size: 11620917
dataset_size: 41827450
- config_name: NetLogoNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 41827458
num_examples: 863
download_size: 11679805
dataset_size: 41827458
- config_name: NewLispExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 52918884
num_examples: 5148
download_size: 14039770
dataset_size: 52918884
- config_name: NewLispNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
splits:
- name: train
num_bytes: 52918764
num_examples: 5148
download_size: 14074385
dataset_size: 52918764
- config_name: NixExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 203855711
num_examples: 71199
download_size: 78575477
dataset_size: 203855711
- config_name: NixNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 203864476
num_examples: 71199
download_size: 78726489
dataset_size: 203864476
- config_name: OCamlExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 648064551
num_examples: 69171
download_size: 222300297
dataset_size: 648064551
- config_name: OCamlNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 648072984
num_examples: 69171
download_size: 222952991
dataset_size: 648072984
- config_name: Objective-CExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 9602633568
num_examples: 698137
download_size: 3703274717
dataset_size: 9602633568
- config_name: Objective-CNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 9602720799
num_examples: 698137
download_size: 3719903322
dataset_size: 9602720799
- config_name: PHPExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 25438710903
num_examples: 3363040
download_size: 7613380934
dataset_size: 25438710903
- config_name: PHPNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 25440392419
num_examples: 3363040
download_size: 7635635671
dataset_size: 25440392419
- config_name: PascalExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 7655488388
num_examples: 225749
download_size: 2498908413
dataset_size: 7655488388
- config_name: PascalNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 7655516624
num_examples: 225749
download_size: 2517922393
dataset_size: 7655516624
- config_name: PerlExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 16870280664
num_examples: 629769
download_size: 5734951211
dataset_size: 16870280664
- config_name: PerlNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 16870516978
num_examples: 629769
download_size: 5771999455
dataset_size: 16870516978
- config_name: ProcessingExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 84096293
num_examples: 20343
download_size: 29270300
dataset_size: 84096293
- config_name: ProcessingNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 84098751
num_examples: 20343
download_size: 29246387
dataset_size: 84098751
- config_name: PrologExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 691824350
num_examples: 20279
download_size: 191072651
dataset_size: 691824350
- config_name: PrologNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 691829302
num_examples: 20279
download_size: 192117293
dataset_size: 691829302
- config_name: PythonExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_CodeParrot
dtype: bool
splits:
- name: train
num_bytes: 25545914243
num_examples: 1792451
download_size: 10130671538
dataset_size: 25545914243
- config_name: PythonNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_codeparrot
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 25546586522
num_examples: 1792451
download_size: 10170421542
dataset_size: 25546586522
- config_name: PythonParrot
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_CodeParrot
dtype: bool
- name: near_duplicates_codeparrot
dtype: bool
splits:
- name: train
num_bytes: 25546138378
num_examples: 1792451
download_size: 10169529284
dataset_size: 25546138378
- config_name: PythonTest
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_CodeParrot
dtype: bool
- name: near_duplicates_redpajama
dtype: bool
splits:
- name: train
num_bytes: 25546138386
num_examples: 1792451
download_size: 10169495473
dataset_size: 25546138386
- config_name: RExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 21442634265
num_examples: 374812
download_size: 8600403423
dataset_size: 21442634265
- config_name: RNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 21445679622
num_examples: 374812
download_size: 8727132044
dataset_size: 21445679622
- config_name: RakuExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 4553176
num_examples: 1299
download_size: 1377473
dataset_size: 4553176
- config_name: RakuNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
splits:
- name: train
num_bytes: 4553060
num_examples: 1299
download_size: 1372440
dataset_size: 4553060
- config_name: RubyExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 2981769330
num_examples: 794364
download_size: 1009215918
dataset_size: 2981769330
- config_name: RubyNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 2982067120
num_examples: 794364
download_size: 1010741791
dataset_size: 2982067120
- config_name: RustExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 8834624371
num_examples: 844258
download_size: 2619167582
dataset_size: 8834624371
- config_name: RustNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 8834835442
num_examples: 844258
download_size: 2628770077
dataset_size: 8834835442
- config_name: SQLExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 392804973
num_examples: 41178
download_size: 87660816
dataset_size: 392804973
- config_name: SQLNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 392820273
num_examples: 41178
download_size: 87888246
dataset_size: 392820273
- config_name: ScalaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 1121907877
num_examples: 224021
download_size: 357412683
dataset_size: 1121907877
- config_name: ScalaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 1121963752
num_examples: 224021
download_size: 358048356
dataset_size: 1121963752
- config_name: SchemeExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 1666453613
num_examples: 54226
download_size: 609833105
dataset_size: 1666453613
- config_name: SchemeNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 1666453595
num_examples: 54226
download_size: 615428052
dataset_size: 1666453595
- config_name: ScilabExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 40724888
num_examples: 4084
download_size: 19426798
dataset_size: 40724888
- config_name: ScilabNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 40724883
num_examples: 4084
download_size: 19424804
dataset_size: 40724883
- config_name: StarlarkExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 3457635
num_examples: 498
download_size: 1347364
dataset_size: 3457635
- config_name: StarlarkNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
splits:
- name: train
num_bytes: 3457631
num_examples: 498
download_size: 1352131
dataset_size: 3457631
- config_name: SwiftExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 2713542331
num_examples: 439565
download_size: 854140622
dataset_size: 2713542331
- config_name: SwiftNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 2713542195
num_examples: 439565
download_size: 855986444
dataset_size: 2713542195
- config_name: TurtleExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 3442516
num_examples: 17
download_size: 799325
dataset_size: 3442516
- config_name: TypeScriptExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 14176972339
num_examples: 2837126
download_size: 4433625232
dataset_size: 14176972339
- config_name: VueExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 2137815900
num_examples: 323672
download_size: 674476397
dataset_size: 2137815900
- config_name: VueNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 2137815643
num_examples: 323672
download_size: 676642096
dataset_size: 2137815643
- config_name: WebAssemblyExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 120184637
num_examples: 585
download_size: 39377515
dataset_size: 120184637
- config_name: WebAssemblyNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 120184495
num_examples: 585
download_size: 39587423
dataset_size: 120184495
configs:
- config_name: ANTLRExact
data_files:
- split: train
path: data/ANTLR_Exact/train-*
- config_name: AdaExact
data_files:
- split: train
path: data/Ada_Exact/train-*
- config_name: AdaNear
data_files:
- split: train
path: data/Ada_Near/train-*
- config_name: AgdaExact
data_files:
- split: train
path: data/Agda_Exact/train-*
- config_name: AgdaNear
data_files:
- split: train
path: data/Agda_Near/train-*
- config_name: AntlrNear
data_files:
- split: train
path: data/Antlr_Near/train-*
- config_name: ApexExact
data_files:
- split: train
path: data/Apex_Exact/train-*
- config_name: ApexNear
data_files:
- split: train
path: data/Apex_Near/train-*
- config_name: AssemblyExact
data_files:
- split: train
path: data/Assembly_Exact/train-*
- config_name: AssemblyNear
data_files:
- split: train
path: data/Assembly_Near/train-*
- config_name: C#Exact
data_files:
- split: train
path: data/C#_Exact/train-*
- config_name: C#Near
data_files:
- split: train
path: data/C#_Near/train-*
- config_name: CExact
data_files:
- split: train
path: data/C_Exact/train-*
- config_name: CNear
data_files:
- split: train
path: data/C_Near/train-*
- config_name: COBOLExact
data_files:
- split: train
path: data/COBOL_Exact/train-*
- config_name: CPP2Near
data_files:
- split: train
path: data/CPP2_Near/train-*
- config_name: CPPExact
data_files:
- split: train
path: data/CPP_Exact/train-*
- config_name: CPPNear
data_files:
- split: train
path: data/CPP_Near/train-*
- config_name: ClojureExact
data_files:
- split: train
path: data/Clojure_Exact/train-*
- config_name: ClojureNear
data_files:
- split: train
path: data/Clojure_Near/train-*
- config_name: CobolNear
data_files:
- split: train
path: data/Cobol_Near/train-*
- config_name: CommonLispExact
data_files:
- split: train
path: data/CommonLisp_Exact/train-*
- config_name: CommonLispNear
data_files:
- split: train
path: data/CommonLisp_Near/train-*
- config_name: CoqExact
data_files:
- split: train
path: data/Coq_Exact/train-*
- config_name: CoqNear
data_files:
- split: train
path: data/Coq_Near/train-*
- config_name: CrystalExact
data_files:
- split: train
path: data/Crystal_Exact/train-*
- config_name: CrystalNear
data_files:
- split: train
path: data/Crystal_Near/train-*
- config_name: CudaExact
data_files:
- split: train
path: data/Cuda_Exact/train-*
- config_name: CudaNear
data_files:
- split: train
path: data/Cuda_Near/train-*
- config_name: DExact
data_files:
- split: train
path: data/D_Exact/train-*
- config_name: DNear
data_files:
- split: train
path: data/D_Near/train-*
- config_name: DartExact
data_files:
- split: train
path: data/Dart_Exact/train-*
- config_name: DartNear
data_files:
- split: train
path: data/Dart_Near/train-*
- config_name: EJSExact
data_files:
- split: train
path: data/EJS_Exact/train-*
- config_name: EjsNear
data_files:
- split: train
path: data/Ejs_Near/train-*
- config_name: ElixirExact
data_files:
- split: train
path: data/Elixir_Exact/train-*
- config_name: ElixirNear
data_files:
- split: train
path: data/Elixir_Near/train-*
- config_name: ElmExact
data_files:
- split: train
path: data/Elm_Exact/train-*
- config_name: ElmNear
data_files:
- split: train
path: data/Elm_Near/train-*
- config_name: EmacsLispExact
data_files:
- split: train
path: data/EmacsLisp_Exact/train-*
- config_name: EmacsLispNear
data_files:
- split: train
path: data/EmacsLisp_Near/train-*
- config_name: ErlangExact
data_files:
- split: train
path: data/Erlang_Exact/train-*
- config_name: ErlangNear
data_files:
- split: train
path: data/Erlang_Near/train-*
- config_name: F#Exact
data_files:
- split: train
path: data/F#_Exact/train-*
- config_name: F#Near
data_files:
- split: train
path: data/F#_Near/train-*
- config_name: ForthExact
data_files:
- split: train
path: data/Forth_Exact/train-*
- config_name: ForthNear
data_files:
- split: train
path: data/Forth_Near/train-*
- config_name: FortranExact
data_files:
- split: train
path: data/Fortran_Exact/train-*
- config_name: FortranNear
data_files:
- split: train
path: data/Fortran_Near/train-*
- config_name: GoExact
data_files:
- split: train
path: data/Go_Exact/train-*
- config_name: GoNear
data_files:
- split: train
path: data/Go_Near/train-*
- config_name: GraphQLExact
data_files:
- split: train
path: data/GraphQL_Exact/train-*
- config_name: GraphQLNear
data_files:
- split: train
path: data/GraphQL_Near/train-*
- config_name: GroovyExact
data_files:
- split: train
path: data/Groovy_Exact/train-*
- config_name: GroovyNear
data_files:
- split: train
path: data/Groovy_Near/train-*
- config_name: HackExact
data_files:
- split: train
path: data/Hack_Exact/train-*
- config_name: HackNear
data_files:
- split: train
path: data/Hack_Near/train-*
- config_name: HaskellExact
data_files:
- split: train
path: data/Haskell_Exact/train-*
- config_name: HaskellNear
data_files:
- split: train
path: data/Haskell_Near/train-*
- config_name: HaskellNearT
data_files:
- split: train
path: data/Haskell_NearT/train-*
- config_name: HaskellTest
data_files:
- split: train
path: data/Haskell_Test/train-*
- config_name: HaskellTest2
data_files:
- split: train
path: data/Haskell_Test2/train-*
- config_name: JavaExact
data_files:
- split: train
path: data/Java_Exact/train-*
- config_name: JavaNear
data_files:
- split: train
path: data/Java_Near/train-*
- config_name: JavaNearF
data_files:
- split: train
path: data/Java_NearF/train-*
- config_name: JavaScriptExact
data_files:
- split: train
path: data/JavaScript_Exact/train-*
- config_name: JavaScriptNear
data_files:
- split: train
path: data/JavaScript_Near/train-*
- config_name: JuliaExact
data_files:
- split: train
path: data/Julia_Exact/train-*
- config_name: JuliaNear
data_files:
- split: train
path: data/Julia_Near/train-*
- config_name: JupyterNotebookExact
data_files:
- split: train
path: data/JupyterNotebook_Exact/train-*
- config_name: KotlinExact
data_files:
- split: train
path: data/Kotlin_Exact/train-*
- config_name: KotlinNear
data_files:
- split: train
path: data/Kotlin_Near/train-*
- config_name: LessExact
data_files:
- split: train
path: data/Less_Exact/train-*
- config_name: LessNear
data_files:
- split: train
path: data/Less_Near/train-*
- config_name: LuaExact
data_files:
- split: train
path: data/Lua_Exact/train-*
- config_name: LuaNear
data_files:
- split: train
path: data/Lua_Near/train-*
- config_name: MathematicaExact
data_files:
- split: train
path: data/Mathematica_Exact/train-*
- config_name: MathematicaNear
data_files:
- split: train
path: data/Mathematica_Near/train-*
- config_name: MatlabExact
data_files:
- split: train
path: data/Matlab_Exact/train-*
- config_name: MatlabNear
data_files:
- split: train
path: data/Matlab_Near/train-*
- config_name: NetLogoExact
data_files:
- split: train
path: data/NetLogo_Exact/train-*
- config_name: NetLogoNear
data_files:
- split: train
path: data/NetLogo_Near/train-*
- config_name: NewLispExact
data_files:
- split: train
path: data/NewLisp_Exact/train-*
- config_name: NewLispNear
data_files:
- split: train
path: data/NewLisp_Near/train-*
- config_name: NixExact
data_files:
- split: train
path: data/Nix_Exact/train-*
- config_name: NixNear
data_files:
- split: train
path: data/Nix_Near/train-*
- config_name: OCamlExact
data_files:
- split: train
path: data/OCaml_Exact/train-*
- config_name: OCamlNear
data_files:
- split: train
path: data/OCaml_Near/train-*
- config_name: Objective-CExact
data_files:
- split: train
path: data/Objective-C_Exact/train-*
- config_name: Objective-CNear
data_files:
- split: train
path: data/Objective-C_Near/train-*
- config_name: PHPExact
data_files:
- split: train
path: data/PHP_Exact/train-*
- config_name: PHPNear
data_files:
- split: train
path: data/PHP_Near/train-*
- config_name: PascalExact
data_files:
- split: train
path: data/Pascal_Exact/train-*
- config_name: PascalNear
data_files:
- split: train
path: data/Pascal_Near/train-*
- config_name: PerlExact
data_files:
- split: train
path: data/Perl_Exact/train-*
- config_name: PerlNear
data_files:
- split: train
path: data/Perl_Near/train-*
- config_name: ProcessingExact
data_files:
- split: train
path: data/Processing_Exact/train-*
- config_name: ProcessingNear
data_files:
- split: train
path: data/Processing_Near/train-*
- config_name: PrologExact
data_files:
- split: train
path: data/Prolog_Exact/train-*
- config_name: PrologNear
data_files:
- split: train
path: data/Prolog_Near/train-*
- config_name: PythonExact
data_files:
- split: train
path: data/Python_Exact/train-*
- config_name: PythonNear
data_files:
- split: train
path: data/Python_Near/train-*
- config_name: PythonParrot
data_files:
- split: train
path: data/Python_Parrot/train-*
- config_name: PythonTest
data_files:
- split: train
path: data/Python_Test/train-*
- config_name: RExact
data_files:
- split: train
path: data/R_Exact/train-*
- config_name: RNear
data_files:
- split: train
path: data/R_Near/train-*
- config_name: RakuExact
data_files:
- split: train
path: data/Raku_Exact/train-*
- config_name: RakuNear
data_files:
- split: train
path: data/Raku_Near/train-*
- config_name: RubyExact
data_files:
- split: train
path: data/Ruby_Exact/train-*
- config_name: RubyNear
data_files:
- split: train
path: data/Ruby_Near/train-*
- config_name: RustExact
data_files:
- split: train
path: data/Rust_Exact/train-*
- config_name: RustNear
data_files:
- split: train
path: data/Rust_Near/train-*
- config_name: SQLExact
data_files:
- split: train
path: data/SQL_Exact/train-*
- config_name: SQLNear
data_files:
- split: train
path: data/SQL_Near/train-*
- config_name: ScalaExact
data_files:
- split: train
path: data/Scala_Exact/train-*
- config_name: ScalaNear
data_files:
- split: train
path: data/Scala_Near/train-*
- config_name: SchemeExact
data_files:
- split: train
path: data/Scheme_Exact/train-*
- config_name: SchemeNear
data_files:
- split: train
path: data/Scheme_Near/train-*
- config_name: ScilabExact
data_files:
- split: train
path: data/Scilab_Exact/train-*
- config_name: ScilabNear
data_files:
- split: train
path: data/Scilab_Near/train-*
- config_name: StarlarkExact
data_files:
- split: train
path: data/Starlark_Exact/train-*
- config_name: StarlarkNear
data_files:
- split: train
path: data/Starlark_Near/train-*
- config_name: SwiftExact
data_files:
- split: train
path: data/Swift_Exact/train-*
- config_name: SwiftNear
data_files:
- split: train
path: data/Swift_Near/train-*
- config_name: TurtleExact
data_files:
- split: train
path: data/Turtle_Exact/train-*
- config_name: TypeScriptExact
data_files:
- split: train
path: data/TypeScript_Exact/train-*
- config_name: VueExact
data_files:
- split: train
path: data/Vue_Exact/train-*
- config_name: VueNear
data_files:
- split: train
path: data/Vue_Near/train-*
- config_name: WebAssemblyExact
data_files:
- split: train
path: data/WebAssembly_Exact/train-*
- config_name: WebAssemblyNear
data_files:
- split: train
path: data/WebAssembly_Near/train-*
---
|
regent-project/regent-subset-of-jat-dataset-tokenized | regent-project | "2024-10-02T05:12:09Z" | 11,609 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-01T22:46:53Z" | ---
dataset_info:
- config_name: atari-alien_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 1905456
num_examples: 22684
download_size: 2088245
dataset_size: 1905456
- config_name: atari-amidar_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32810168
num_examples: 100031
download_size: 11019541
dataset_size: 32810168
- config_name: atari-amidar_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23046343776
num_examples: 3142
download_size: 256637379
dataset_size: 23046343776
- config_name: atari-assault_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806232
num_examples: 100019
download_size: 14121737
dataset_size: 32806232
- config_name: atari-assault_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22972994496
num_examples: 3132
download_size: 186535975
dataset_size: 22972994496
- config_name: atari-asterix_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806560
num_examples: 100020
download_size: 11902934
dataset_size: 32806560
- config_name: atari-asterix_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23332405968
num_examples: 3181
download_size: 188517858
dataset_size: 23332405968
- config_name: atari-asteroids_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 202442660
dataset_size: 22936319856
- config_name: atari-atlantis_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32801640
num_examples: 100005
download_size: 13128838
dataset_size: 32801640
- config_name: atari-atlantis_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22943654784
num_examples: 3128
download_size: 206794180
dataset_size: 22943654784
- config_name: atari-bankheist_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806888
num_examples: 100021
download_size: 13754178
dataset_size: 32806888
- config_name: atari-bankheist_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23149032768
num_examples: 3156
download_size: 307236770
dataset_size: 23149032768
- config_name: atari-battlezone_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800984
num_examples: 100003
download_size: 15918969
dataset_size: 32800984
- config_name: atari-battlezone_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23002334208
num_examples: 3136
download_size: 247618279
dataset_size: 23002334208
- config_name: atari-beamrider_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806232
num_examples: 100019
download_size: 16063964
dataset_size: 32806232
- config_name: atari-beamrider_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22965659568
num_examples: 3131
download_size: 224067669
dataset_size: 22965659568
- config_name: atari-berzerk_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32803936
num_examples: 100012
download_size: 11678744
dataset_size: 32803936
- config_name: atari-berzerk_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 204431627
dataset_size: 22936319856
- config_name: atari-bowling_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32801968
num_examples: 100006
download_size: 7354865
dataset_size: 32801968
- config_name: atari-bowling_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23090353344
num_examples: 3148
download_size: 165124017
dataset_size: 23090353344
- config_name: atari-boxing_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32802296
num_examples: 100007
download_size: 11950572
dataset_size: 32802296
- config_name: atari-boxing_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23669812656
num_examples: 3227
download_size: 296234619
dataset_size: 23669812656
- config_name: atari-breakout_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32804592
num_examples: 100014
download_size: 4911820
dataset_size: 32804592
- config_name: atari-breakout_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22943654784
num_examples: 3128
download_size: 150562919
dataset_size: 22943654784
- config_name: atari-centipede_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805904
num_examples: 100018
download_size: 11285739
dataset_size: 32805904
- config_name: atari-centipede_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23295731328
num_examples: 3176
download_size: 185406529
dataset_size: 23295731328
- config_name: atari-choppercommand_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809840
num_examples: 100030
download_size: 14259234
dataset_size: 32809840
- config_name: atari-choppercommand_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23061013632
num_examples: 3144
download_size: 225019380
dataset_size: 23061013632
- config_name: atari-crazyclimber_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32804592
num_examples: 100014
download_size: 12305828
dataset_size: 32804592
- config_name: atari-crazyclimber_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22987664352
num_examples: 3134
download_size: 227557018
dataset_size: 22987664352
- config_name: atari-defender_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32807872
num_examples: 100024
download_size: 10537157
dataset_size: 32807872
- config_name: atari-defender_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 172063588
dataset_size: 22936319856
- config_name: atari-demonattack_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32807872
num_examples: 100024
download_size: 15551680
dataset_size: 32807872
- config_name: atari-demonattack_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 181049894
dataset_size: 22936319856
- config_name: atari-doubledunk_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32801968
num_examples: 100006
download_size: 11428550
dataset_size: 32801968
- config_name: atari-doubledunk_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23288396400
num_examples: 3175
download_size: 251707705
dataset_size: 23288396400
- config_name: atari-enduro_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32802296
num_examples: 100007
download_size: 12848229
dataset_size: 32802296
- config_name: atari-fishingderby_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13500648
dataset_size: 32800000
- config_name: atari-fishingderby_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23141697840
num_examples: 3155
download_size: 321501382
dataset_size: 23141697840
- config_name: atari-freeway_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32810168
num_examples: 100031
download_size: 13676872
dataset_size: 32810168
- config_name: atari-freeway_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22965659568
num_examples: 3131
download_size: 280231420
dataset_size: 22965659568
- config_name: atari-frostbite_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806560
num_examples: 100020
download_size: 11934917
dataset_size: 32806560
- config_name: atari-frostbite_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23075683488
num_examples: 3146
download_size: 278638735
dataset_size: 23075683488
- config_name: atari-gopher_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809512
num_examples: 100029
download_size: 14334636
dataset_size: 32809512
- config_name: atari-gopher_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22943654784
num_examples: 3128
download_size: 196526681
dataset_size: 22943654784
- config_name: atari-gravitar_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805248
num_examples: 100016
download_size: 11576279
dataset_size: 32805248
- config_name: atari-gravitar_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23486439456
num_examples: 3202
download_size: 199543758
dataset_size: 23486439456
- config_name: atari-hero_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800984
num_examples: 100003
download_size: 12568260
dataset_size: 32800984
- config_name: atari-hero_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23061013632
num_examples: 3144
download_size: 231552624
dataset_size: 23061013632
- config_name: atari-icehockey_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800984
num_examples: 100003
download_size: 12259737
dataset_size: 32800984
- config_name: atari-icehockey_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23017004064
num_examples: 3138
download_size: 195362912
dataset_size: 23017004064
- config_name: atari-jamesbond_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32810168
num_examples: 100031
download_size: 15590631
dataset_size: 32810168
- config_name: atari-jamesbond_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22965659568
num_examples: 3131
download_size: 239495464
dataset_size: 22965659568
- config_name: atari-kangaroo_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32807872
num_examples: 100024
download_size: 12657496
dataset_size: 32807872
- config_name: atari-kangaroo_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23178372480
num_examples: 3160
download_size: 242035098
dataset_size: 23178372480
- config_name: atari-krull_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808528
num_examples: 100026
download_size: 13793008
dataset_size: 32808528
- config_name: atari-krull_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23193042336
num_examples: 3162
download_size: 429983939
dataset_size: 23193042336
- config_name: atari-kungfumaster_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806232
num_examples: 100019
download_size: 14058554
dataset_size: 32806232
- config_name: atari-kungfumaster_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23053678704
num_examples: 3143
download_size: 298664084
dataset_size: 23053678704
- config_name: atari-montezumarevenge_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805904
num_examples: 100018
download_size: 12767695
dataset_size: 32805904
- config_name: atari-montezumarevenge_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23237051904
num_examples: 3168
download_size: 304131065
dataset_size: 23237051904
- config_name: atari-mspacman_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 1219680
num_examples: 14520
download_size: 1069909
dataset_size: 1219680
- config_name: atari-namethisgame_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800984
num_examples: 100003
download_size: 15146115
dataset_size: 32800984
- config_name: atari-namethisgame_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22965659568
num_examples: 3131
download_size: 257925381
dataset_size: 22965659568
- config_name: atari-phoenix_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808856
num_examples: 100027
download_size: 14775061
dataset_size: 32808856
- config_name: atari-phoenix_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 189670978
dataset_size: 22936319856
- config_name: atari-pitfall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32807872
num_examples: 100024
download_size: 2022905
dataset_size: 32807872
- config_name: atari-pitfall_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22965659568
num_examples: 3131
download_size: 123924337
dataset_size: 22965659568
- config_name: atari-pong_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 697452
num_examples: 8303
download_size: 486008
dataset_size: 697452
- config_name: atari-privateeye_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806232
num_examples: 100019
download_size: 15683786
dataset_size: 32806232
- config_name: atari-privateeye_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23163702624
num_examples: 3158
download_size: 307264839
dataset_size: 23163702624
- config_name: atari-qbert_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805576
num_examples: 100017
download_size: 11451463
dataset_size: 32805576
- config_name: atari-qbert_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23002334208
num_examples: 3136
download_size: 285593415
dataset_size: 23002334208
- config_name: atari-riverraid_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806888
num_examples: 100021
download_size: 14223896
dataset_size: 32806888
- config_name: atari-riverraid_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23156367696
num_examples: 3157
download_size: 288584693
dataset_size: 23156367696
- config_name: atari-roadrunner_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809512
num_examples: 100029
download_size: 13280570
dataset_size: 32809512
- config_name: atari-roadrunner_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23105023200
num_examples: 3150
download_size: 224904364
dataset_size: 23105023200
- config_name: atari-robotank_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809512
num_examples: 100029
download_size: 13460396
dataset_size: 32809512
- config_name: atari-robotank_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22980329424
num_examples: 3133
download_size: 229314767
dataset_size: 22980329424
- config_name: atari-seaquest_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808528
num_examples: 100026
download_size: 14198049
dataset_size: 32808528
- config_name: atari-seaquest_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23017004064
num_examples: 3138
download_size: 213657303
dataset_size: 23017004064
- config_name: atari-skiing_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808856
num_examples: 100027
download_size: 12884548
dataset_size: 32808856
- config_name: atari-skiing_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23992549488
num_examples: 3271
download_size: 265395007
dataset_size: 23992549488
- config_name: atari-solaris_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32803936
num_examples: 100012
download_size: 10476310
dataset_size: 32803936
- config_name: atari-solaris_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22950989712
num_examples: 3129
download_size: 230256082
dataset_size: 22950989712
- config_name: atari-spaceinvaders_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 2686992
num_examples: 31988
download_size: 2636150
dataset_size: 2686992
- config_name: atari-stargunner_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 2684556
num_examples: 31959
download_size: 3498569
dataset_size: 2684556
- config_name: atari-surround_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809840
num_examples: 100030
download_size: 11413509
dataset_size: 32809840
- config_name: atari-surround_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23053678704
num_examples: 3143
download_size: 180554622
dataset_size: 23053678704
- config_name: atari-tennis_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32802952
num_examples: 100009
download_size: 5720988
dataset_size: 32802952
- config_name: atari-tennis_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22950989712
num_examples: 3129
download_size: 151319180
dataset_size: 22950989712
- config_name: atari-timepilot_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32809184
num_examples: 100028
download_size: 14178589
dataset_size: 32809184
- config_name: atari-timepilot_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22972994496
num_examples: 3132
download_size: 196752738
dataset_size: 22972994496
- config_name: atari-tutankham_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 1848643
dataset_size: 32800000
- config_name: atari-tutankham_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 109029316
dataset_size: 22936319856
- config_name: atari-upndown_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808528
num_examples: 100026
download_size: 15582164
dataset_size: 32808528
- config_name: atari-upndown_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22936319856
num_examples: 3127
download_size: 482802952
dataset_size: 22936319856
- config_name: atari-venture_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 11405983
dataset_size: 32800000
- config_name: atari-venture_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23090353344
num_examples: 3148
download_size: 217148669
dataset_size: 23090353344
- config_name: atari-videopinball_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32810168
num_examples: 100031
download_size: 9499589
dataset_size: 32810168
- config_name: atari-videopinball_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22958324640
num_examples: 3130
download_size: 272326339
dataset_size: 22958324640
- config_name: atari-wizardofwor_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806560
num_examples: 100020
download_size: 12104199
dataset_size: 32806560
- config_name: atari-wizardofwor_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 23017004064
num_examples: 3138
download_size: 253042146
dataset_size: 23017004064
- config_name: atari-yarsrevenge_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32804264
num_examples: 100013
download_size: 10677319
dataset_size: 32804264
- config_name: atari-yarsrevenge_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22950989712
num_examples: 3129
download_size: 429404778
dataset_size: 22950989712
- config_name: atari-zaxxon_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805576
num_examples: 100017
download_size: 15293047
dataset_size: 32805576
- config_name: atari-zaxxon_subset
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float64
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
- name: embeddings_resnet18_512
sequence:
sequence: float32
splits:
- name: train
num_bytes: 22980329424
num_examples: 3133
download_size: 237964832
dataset_size: 22980329424
- config_name: babyai-action-obj-door_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32828208
num_examples: 100086
download_size: 6351769
dataset_size: 32828208
- config_name: babyai-action-obj-door_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3610820800
num_examples: 16400
download_size: 20957976
dataset_size: 3610820800
- config_name: babyai-blocked-unlock-pickup_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32818696
num_examples: 100057
download_size: 6014080
dataset_size: 32818696
- config_name: babyai-blocked-unlock-pickup_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 642902240
num_examples: 2920
download_size: 3985069
dataset_size: 642902240
- config_name: babyai-boss-level-no-unlock_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33067976
num_examples: 100817
download_size: 7646179
dataset_size: 33067976
- config_name: babyai-boss-level-no-unlock_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 506395600
num_examples: 2300
download_size: 5341693
dataset_size: 506395600
- config_name: babyai-boss-level_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32803936
num_examples: 100012
download_size: 7644357
dataset_size: 32803936
- config_name: babyai-boss-level_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 467425156
num_examples: 2123
download_size: 5119669
dataset_size: 467425156
- config_name: babyai-find-obj-s5_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32830504
num_examples: 100093
download_size: 6001715
dataset_size: 32830504
- config_name: babyai-find-obj-s5_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 735374480
num_examples: 3340
download_size: 4382030
dataset_size: 735374480
- config_name: babyai-go-to-door_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805576
num_examples: 100017
download_size: 5127764
dataset_size: 32805576
- config_name: babyai-go-to-door_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4231705840
num_examples: 19220
download_size: 22688247
dataset_size: 4231705840
- config_name: babyai-go-to-imp-unlock_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33836152
num_examples: 103159
download_size: 7368269
dataset_size: 33836152
- config_name: babyai-go-to-imp-unlock_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 179220008
num_examples: 814
download_size: 3291631
dataset_size: 179220008
- config_name: babyai-go-to-local_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32815416
num_examples: 100047
download_size: 6587732
dataset_size: 32815416
- config_name: babyai-go-to-local_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4372615920
num_examples: 19860
download_size: 25582717
dataset_size: 4372615920
- config_name: babyai-go-to-obj-door_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32824600
num_examples: 100075
download_size: 6616557
dataset_size: 32824600
- config_name: babyai-go-to-obj-door_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3910254720
num_examples: 17760
download_size: 23384284
dataset_size: 3910254720
- config_name: babyai-go-to-obj_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32818040
num_examples: 100055
download_size: 4901201
dataset_size: 32818040
- config_name: babyai-go-to-obj_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4447474400
num_examples: 20200
download_size: 24576544
dataset_size: 4447474400
- config_name: babyai-go-to-red-ball-grey_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32812464
num_examples: 100038
download_size: 6490190
dataset_size: 32812464
- config_name: babyai-go-to-red-ball-grey_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3734117120
num_examples: 16960
download_size: 18354879
dataset_size: 3734117120
- config_name: babyai-go-to-red-ball-no-dists_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32825256
num_examples: 100077
download_size: 4153141
dataset_size: 32825256
- config_name: babyai-go-to-red-ball-no-dists_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4443070960
num_examples: 20180
download_size: 20210338
dataset_size: 4443070960
- config_name: babyai-go-to-red-ball_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32813120
num_examples: 100040
download_size: 6415108
dataset_size: 32813120
- config_name: babyai-go-to-red-ball_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4359405600
num_examples: 19800
download_size: 21065736
dataset_size: 4359405600
- config_name: babyai-go-to-red-blue-ball_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32820992
num_examples: 100064
download_size: 6442448
dataset_size: 32820992
- config_name: babyai-go-to-red-blue-ball_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3729713680
num_examples: 16940
download_size: 18512506
dataset_size: 3729713680
- config_name: babyai-go-to-seq_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33061088
num_examples: 100796
download_size: 7409942
dataset_size: 33061088
- config_name: babyai-go-to-seq_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 427133680
num_examples: 1940
download_size: 4522477
dataset_size: 427133680
- config_name: babyai-go-to_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33100120
num_examples: 100915
download_size: 6499380
dataset_size: 33100120
- config_name: babyai-go-to_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 405116480
num_examples: 1840
download_size: 4386063
dataset_size: 405116480
- config_name: babyai-key-corridor_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32812136
num_examples: 100037
download_size: 5495432
dataset_size: 32812136
- config_name: babyai-key-corridor_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 198154800
num_examples: 900
download_size: 2450613
dataset_size: 198154800
- config_name: babyai-mini-boss-level_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32861664
num_examples: 100188
download_size: 8146530
dataset_size: 32861664
- config_name: babyai-mini-boss-level_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1828968804
num_examples: 8307
download_size: 10435667
dataset_size: 1828968804
- config_name: babyai-move-two-across-s8n9_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32819680
num_examples: 100060
download_size: 6974780
dataset_size: 32819680
- config_name: babyai-move-two-across-s8n9_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 542944152
num_examples: 2466
download_size: 6570582
dataset_size: 542944152
- config_name: babyai-one-room-s8_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32810168
num_examples: 100031
download_size: 4984774
dataset_size: 32810168
- config_name: babyai-one-room-s8_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3742924000
num_examples: 17000
download_size: 17173321
dataset_size: 3742924000
- config_name: babyai-open-door_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32817056
num_examples: 100052
download_size: 5205819
dataset_size: 32817056
- config_name: babyai-open-door_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3038373600
num_examples: 13800
download_size: 17501487
dataset_size: 3038373600
- config_name: babyai-open-doors-order-n4_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32838376
num_examples: 100117
download_size: 6133031
dataset_size: 32838376
- config_name: babyai-open-doors-order-n4_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1836234480
num_examples: 8340
download_size: 11032382
dataset_size: 1836234480
- config_name: babyai-open-red-door_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32823616
num_examples: 100072
download_size: 1484381
dataset_size: 32823616
- config_name: babyai-open-red-door_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4667646400
num_examples: 21200
download_size: 16451040
dataset_size: 4667646400
- config_name: babyai-open-two-doors_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32854120
num_examples: 100165
download_size: 2596672
dataset_size: 32854120
- config_name: babyai-open-two-doors_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1620465920
num_examples: 7360
download_size: 9539342
dataset_size: 1620465920
- config_name: babyai-open_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33025664
num_examples: 100688
download_size: 5759900
dataset_size: 33025664
- config_name: babyai-open_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 581254080
num_examples: 2640
download_size: 5191396
dataset_size: 581254080
- config_name: babyai-pickup-above_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32801968
num_examples: 100006
download_size: 5403204
dataset_size: 32801968
- config_name: babyai-pickup-above_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 748584800
num_examples: 3400
download_size: 5541685
dataset_size: 748584800
- config_name: babyai-pickup-dist_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32802296
num_examples: 100007
download_size: 6291115
dataset_size: 32802296
- config_name: babyai-pickup-dist_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4108409520
num_examples: 18660
download_size: 22832605
dataset_size: 4108409520
- config_name: babyai-pickup-loc_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32828536
num_examples: 100087
download_size: 8150075
dataset_size: 32828536
- config_name: babyai-pickup-loc_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 3484221900
num_examples: 15825
download_size: 21470853
dataset_size: 3484221900
- config_name: babyai-pickup_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32968264
num_examples: 100513
download_size: 6487579
dataset_size: 32968264
- config_name: babyai-pickup_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 374292400
num_examples: 1700
download_size: 4188562
dataset_size: 374292400
- config_name: babyai-put-next-local_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32846904
num_examples: 100143
download_size: 8568082
dataset_size: 32846904
- config_name: babyai-put-next-local_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1831831040
num_examples: 8320
download_size: 13012534
dataset_size: 1831831040
- config_name: babyai-put-next_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32900040
num_examples: 100305
download_size: 8673285
dataset_size: 32900040
- config_name: babyai-put-next_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1259383840
num_examples: 5720
download_size: 9667394
dataset_size: 1259383840
- config_name: babyai-synth-loc_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32908240
num_examples: 100330
download_size: 7667920
dataset_size: 32908240
- config_name: babyai-synth-loc_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 537219680
num_examples: 2440
download_size: 5545442
dataset_size: 537219680
- config_name: babyai-synth-seq_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33054528
num_examples: 100776
download_size: 7755136
dataset_size: 33054528
- config_name: babyai-synth-seq_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 568043760
num_examples: 2580
download_size: 5763605
dataset_size: 568043760
- config_name: babyai-synth_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32867896
num_examples: 100207
download_size: 7353038
dataset_size: 32867896
- config_name: babyai-synth_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 409519920
num_examples: 1860
download_size: 4378472
dataset_size: 409519920
- config_name: babyai-unblock-pickup_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32953176
num_examples: 100467
download_size: 6630782
dataset_size: 32953176
- config_name: babyai-unblock-pickup_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 378916012
num_examples: 1721
download_size: 4242269
dataset_size: 378916012
- config_name: babyai-unlock-local_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32812464
num_examples: 100038
download_size: 5630652
dataset_size: 32812464
- config_name: babyai-unlock-local_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1567624640
num_examples: 7120
download_size: 8268704
dataset_size: 1567624640
- config_name: babyai-unlock-pickup_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32897088
num_examples: 100296
download_size: 4544845
dataset_size: 32897088
- config_name: babyai-unlock-pickup_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 1127280640
num_examples: 5120
download_size: 6990282
dataset_size: 1127280640
- config_name: babyai-unlock-to-unlock_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32960064
num_examples: 100488
download_size: 5942465
dataset_size: 32960064
- config_name: babyai-unlock-to-unlock_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 510799040
num_examples: 2320
download_size: 3665802
dataset_size: 510799040
- config_name: babyai-unlock_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 33094872
num_examples: 100899
download_size: 6456229
dataset_size: 33094872
- config_name: babyai-unlock_subset
features:
- name: discrete_observations
sequence:
sequence: int32
- name: discrete_actions
sequence: int32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 287764804
num_examples: 1307
download_size: 4020028
dataset_size: 287764804
- config_name: metaworld-assembly_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 1370386
dataset_size: 32800000
- config_name: metaworld-assembly_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 2494940
dataset_size: 47116000
- config_name: metaworld-basketball_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13190732
dataset_size: 32800000
- config_name: metaworld-basketball_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9208389
dataset_size: 47116000
- config_name: metaworld-bin-picking_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 840000
num_examples: 10000
download_size: 952363
dataset_size: 840000
- config_name: metaworld-box-close_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 840000
num_examples: 10000
download_size: 1058011
dataset_size: 840000
- config_name: metaworld-button-press-topdown-wall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12506477
dataset_size: 32800000
- config_name: metaworld-button-press-topdown-wall_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6795055
dataset_size: 47116000
- config_name: metaworld-button-press-topdown_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12383341
dataset_size: 32800000
- config_name: metaworld-button-press-topdown_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6647074
dataset_size: 47116000
- config_name: metaworld-button-press-wall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 11884670
dataset_size: 32800000
- config_name: metaworld-button-press-wall_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6388048
dataset_size: 47116000
- config_name: metaworld-button-press_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12504036
dataset_size: 32800000
- config_name: metaworld-button-press_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6079174
dataset_size: 47116000
- config_name: metaworld-coffee-button_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 11302073
dataset_size: 32800000
- config_name: metaworld-coffee-button_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6402919
dataset_size: 47116000
- config_name: metaworld-coffee-pull_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13291438
dataset_size: 32800000
- config_name: metaworld-coffee-pull_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9165455
dataset_size: 47116000
- config_name: metaworld-coffee-push_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13347747
dataset_size: 32800000
- config_name: metaworld-coffee-push_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9819758
dataset_size: 47116000
- config_name: metaworld-dial-turn_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 11453279
dataset_size: 32800000
- config_name: metaworld-dial-turn_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5840306
dataset_size: 47116000
- config_name: metaworld-disassemble_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 8574754
dataset_size: 32800000
- config_name: metaworld-disassemble_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 4082529
dataset_size: 47116000
- config_name: metaworld-door-close_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13743650
dataset_size: 32800000
- config_name: metaworld-door-close_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 8698806
dataset_size: 47116000
- config_name: metaworld-door-lock_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 840000
num_examples: 10000
download_size: 776743
dataset_size: 840000
- config_name: metaworld-door-open_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13781189
dataset_size: 32800000
- config_name: metaworld-door-open_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 7983276
dataset_size: 47116000
- config_name: metaworld-door-unlock_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 840000
num_examples: 10000
download_size: 829555
dataset_size: 840000
- config_name: metaworld-drawer-close_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13903693
dataset_size: 32800000
- config_name: metaworld-drawer-close_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5764071
dataset_size: 47116000
- config_name: metaworld-drawer-open_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12036502
dataset_size: 32800000
- config_name: metaworld-drawer-open_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5484434
dataset_size: 47116000
- config_name: metaworld-faucet-close_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 14148656
dataset_size: 32800000
- config_name: metaworld-faucet-close_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5086095
dataset_size: 47116000
- config_name: metaworld-faucet-open_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 14300852
dataset_size: 32800000
- config_name: metaworld-faucet-open_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5497182
dataset_size: 47116000
- config_name: metaworld-hammer_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13491757
dataset_size: 32800000
- config_name: metaworld-hammer_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 10062439
dataset_size: 47116000
- config_name: metaworld-handle-press-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12555014
dataset_size: 32800000
- config_name: metaworld-handle-press-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5880675
dataset_size: 47116000
- config_name: metaworld-handle-press_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13473313
dataset_size: 32800000
- config_name: metaworld-handle-press_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5879237
dataset_size: 47116000
- config_name: metaworld-handle-pull-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13576934
dataset_size: 32800000
- config_name: metaworld-handle-pull-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6737064
dataset_size: 47116000
- config_name: metaworld-handle-pull_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12046278
dataset_size: 32800000
- config_name: metaworld-handle-pull_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6896646
dataset_size: 47116000
- config_name: metaworld-lever-pull_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12827517
dataset_size: 32800000
- config_name: metaworld-lever-pull_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9568802
dataset_size: 47116000
- config_name: metaworld-peg-insert-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13057268
dataset_size: 32800000
- config_name: metaworld-peg-insert-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 8714100
dataset_size: 47116000
- config_name: metaworld-peg-unplug-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13163866
dataset_size: 32800000
- config_name: metaworld-peg-unplug-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9726674
dataset_size: 47116000
- config_name: metaworld-pick-out-of-hole_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 1376243
dataset_size: 32800000
- config_name: metaworld-pick-out-of-hole_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 1419339
dataset_size: 47116000
- config_name: metaworld-pick-place-wall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13636756
dataset_size: 32800000
- config_name: metaworld-pick-place-wall_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9760537
dataset_size: 47116000
- config_name: metaworld-pick-place_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13638935
dataset_size: 32800000
- config_name: metaworld-pick-place_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 10013159
dataset_size: 47116000
- config_name: metaworld-plate-slide-back-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 1365777
dataset_size: 32800000
- config_name: metaworld-plate-slide-back-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 1936719
dataset_size: 47116000
- config_name: metaworld-plate-slide-back_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 1372778
dataset_size: 32800000
- config_name: metaworld-plate-slide-back_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 2568887
dataset_size: 47116000
- config_name: metaworld-plate-slide-side_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 9706526
dataset_size: 32800000
- config_name: metaworld-plate-slide-side_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6041762
dataset_size: 47116000
- config_name: metaworld-plate-slide_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 9787720
dataset_size: 32800000
- config_name: metaworld-plate-slide_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 6512808
dataset_size: 47116000
- config_name: metaworld-push-back_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 14075602
dataset_size: 32800000
- config_name: metaworld-push-back_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 7550247
dataset_size: 47116000
- config_name: metaworld-push-wall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13592428
dataset_size: 32800000
- config_name: metaworld-push-wall_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 8970793
dataset_size: 47116000
- config_name: metaworld-push_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13341527
dataset_size: 32800000
- config_name: metaworld-push_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9427900
dataset_size: 47116000
- config_name: metaworld-reach-wall_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12733205
dataset_size: 32800000
- config_name: metaworld-reach-wall_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9731627
dataset_size: 47116000
- config_name: metaworld-reach_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12106144
dataset_size: 32800000
- config_name: metaworld-reach_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9563337
dataset_size: 47116000
- config_name: metaworld-shelf-place_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13046597
dataset_size: 32800000
- config_name: metaworld-shelf-place_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 8068065
dataset_size: 47116000
- config_name: metaworld-soccer_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 11954933
dataset_size: 32800000
- config_name: metaworld-soccer_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9009300
dataset_size: 47116000
- config_name: metaworld-stick-pull_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13346574
dataset_size: 32800000
- config_name: metaworld-stick-pull_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9654361
dataset_size: 47116000
- config_name: metaworld-stick-push_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13868467
dataset_size: 32800000
- config_name: metaworld-stick-push_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9420722
dataset_size: 47116000
- config_name: metaworld-sweep-into_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13471306
dataset_size: 32800000
- config_name: metaworld-sweep-into_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 7656262
dataset_size: 47116000
- config_name: metaworld-sweep_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13966344
dataset_size: 32800000
- config_name: metaworld-sweep_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 9333916
dataset_size: 47116000
- config_name: metaworld-window-close_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12562521
dataset_size: 32800000
- config_name: metaworld-window-close_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5405410
dataset_size: 47116000
- config_name: metaworld-window-open_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12270843
dataset_size: 32800000
- config_name: metaworld-window-open_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 47116000
num_examples: 1000
download_size: 5455606
dataset_size: 47116000
- config_name: mujoco-ant_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32847232
num_examples: 100144
download_size: 16107573
dataset_size: 32847232
- config_name: mujoco-ant_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 15608524
num_examples: 401
download_size: 16185601
dataset_size: 15608524
- config_name: mujoco-doublependulum_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32805248
num_examples: 100016
download_size: 16102270
dataset_size: 32805248
- config_name: mujoco-doublependulum_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 6164172
num_examples: 401
download_size: 4960978
dataset_size: 6164172
- config_name: mujoco-halfcheetah_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 8400000
num_examples: 100000
download_size: 11373374
dataset_size: 8400000
- config_name: mujoco-hopper_newdata
features:
- name: distances
sequence: float32
splits:
- name: train
num_bytes: 3834768
num_examples: 45652
download_size: 5110310
dataset_size: 3834768
- config_name: mujoco-humanoid_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32808200
num_examples: 100025
download_size: 16122991
dataset_size: 32808200
- config_name: mujoco-humanoid_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 168289140
num_examples: 415
download_size: 116298243
dataset_size: 168289140
- config_name: mujoco-pendulum_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32806888
num_examples: 100021
download_size: 15694433
dataset_size: 32806888
- config_name: mujoco-pendulum_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 4060980
num_examples: 495
download_size: 3083276
dataset_size: 4060980
- config_name: mujoco-pusher_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 13887459
dataset_size: 32800000
- config_name: mujoco-pusher_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 33804000
num_examples: 1000
download_size: 13463910
dataset_size: 33804000
- config_name: mujoco-reacher_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 12795397
dataset_size: 32800000
- config_name: mujoco-reacher_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 32792000
num_examples: 2000
download_size: 7687471
dataset_size: 32792000
- config_name: mujoco-standup_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 16032984
dataset_size: 32800000
- config_name: mujoco-standup_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 162206400
num_examples: 400
download_size: 117589700
dataset_size: 162206400
- config_name: mujoco-swimmer_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32800000
num_examples: 100000
download_size: 15858902
dataset_size: 32800000
- config_name: mujoco-swimmer_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 5329600
num_examples: 400
download_size: 5733100
dataset_size: 5329600
- config_name: mujoco-walker_newdata
features:
- name: distances
sequence: float32
- name: indices
sequence:
sequence: int32
splits:
- name: train
num_bytes: 32807872
num_examples: 100024
download_size: 15920611
dataset_size: 32807872
- config_name: mujoco-walker_subset
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
splits:
- name: train
num_bytes: 10840852
num_examples: 407
download_size: 11101553
dataset_size: 10840852
configs:
- config_name: atari-alien_newdata
data_files:
- split: train
path: atari-alien_newdata/train-*
- config_name: atari-amidar_newdata
data_files:
- split: train
path: atari-amidar_newdata/train-*
- config_name: atari-amidar_subset
data_files:
- split: train
path: atari-amidar_subset/train-*
- config_name: atari-assault_newdata
data_files:
- split: train
path: atari-assault_newdata/train-*
- config_name: atari-assault_subset
data_files:
- split: train
path: atari-assault_subset/train-*
- config_name: atari-asterix_newdata
data_files:
- split: train
path: atari-asterix_newdata/train-*
- config_name: atari-asterix_subset
data_files:
- split: train
path: atari-asterix_subset/train-*
- config_name: atari-asteroids_subset
data_files:
- split: train
path: atari-asteroids_subset/train-*
- config_name: atari-atlantis_newdata
data_files:
- split: train
path: atari-atlantis_newdata/train-*
- config_name: atari-atlantis_subset
data_files:
- split: train
path: atari-atlantis_subset/train-*
- config_name: atari-bankheist_newdata
data_files:
- split: train
path: atari-bankheist_newdata/train-*
- config_name: atari-bankheist_subset
data_files:
- split: train
path: atari-bankheist_subset/train-*
- config_name: atari-battlezone_newdata
data_files:
- split: train
path: atari-battlezone_newdata/train-*
- config_name: atari-battlezone_subset
data_files:
- split: train
path: atari-battlezone_subset/train-*
- config_name: atari-beamrider_newdata
data_files:
- split: train
path: atari-beamrider_newdata/train-*
- config_name: atari-beamrider_subset
data_files:
- split: train
path: atari-beamrider_subset/train-*
- config_name: atari-berzerk_newdata
data_files:
- split: train
path: atari-berzerk_newdata/train-*
- config_name: atari-berzerk_subset
data_files:
- split: train
path: atari-berzerk_subset/train-*
- config_name: atari-bowling_newdata
data_files:
- split: train
path: atari-bowling_newdata/train-*
- config_name: atari-bowling_subset
data_files:
- split: train
path: atari-bowling_subset/train-*
- config_name: atari-boxing_newdata
data_files:
- split: train
path: atari-boxing_newdata/train-*
- config_name: atari-boxing_subset
data_files:
- split: train
path: atari-boxing_subset/train-*
- config_name: atari-breakout_newdata
data_files:
- split: train
path: atari-breakout_newdata/train-*
- config_name: atari-breakout_subset
data_files:
- split: train
path: atari-breakout_subset/train-*
- config_name: atari-centipede_newdata
data_files:
- split: train
path: atari-centipede_newdata/train-*
- config_name: atari-centipede_subset
data_files:
- split: train
path: atari-centipede_subset/train-*
- config_name: atari-choppercommand_newdata
data_files:
- split: train
path: atari-choppercommand_newdata/train-*
- config_name: atari-choppercommand_subset
data_files:
- split: train
path: atari-choppercommand_subset/train-*
- config_name: atari-crazyclimber_newdata
data_files:
- split: train
path: atari-crazyclimber_newdata/train-*
- config_name: atari-crazyclimber_subset
data_files:
- split: train
path: atari-crazyclimber_subset/train-*
- config_name: atari-defender_newdata
data_files:
- split: train
path: atari-defender_newdata/train-*
- config_name: atari-defender_subset
data_files:
- split: train
path: atari-defender_subset/train-*
- config_name: atari-demonattack_newdata
data_files:
- split: train
path: atari-demonattack_newdata/train-*
- config_name: atari-demonattack_subset
data_files:
- split: train
path: atari-demonattack_subset/train-*
- config_name: atari-doubledunk_newdata
data_files:
- split: train
path: atari-doubledunk_newdata/train-*
- config_name: atari-doubledunk_subset
data_files:
- split: train
path: atari-doubledunk_subset/train-*
- config_name: atari-enduro_newdata
data_files:
- split: train
path: atari-enduro_newdata/train-*
- config_name: atari-fishingderby_newdata
data_files:
- split: train
path: atari-fishingderby_newdata/train-*
- config_name: atari-fishingderby_subset
data_files:
- split: train
path: atari-fishingderby_subset/train-*
- config_name: atari-freeway_newdata
data_files:
- split: train
path: atari-freeway_newdata/train-*
- config_name: atari-freeway_subset
data_files:
- split: train
path: atari-freeway_subset/train-*
- config_name: atari-frostbite_newdata
data_files:
- split: train
path: atari-frostbite_newdata/train-*
- config_name: atari-frostbite_subset
data_files:
- split: train
path: atari-frostbite_subset/train-*
- config_name: atari-gopher_newdata
data_files:
- split: train
path: atari-gopher_newdata/train-*
- config_name: atari-gopher_subset
data_files:
- split: train
path: atari-gopher_subset/train-*
- config_name: atari-gravitar_newdata
data_files:
- split: train
path: atari-gravitar_newdata/train-*
- config_name: atari-gravitar_subset
data_files:
- split: train
path: atari-gravitar_subset/train-*
- config_name: atari-hero_newdata
data_files:
- split: train
path: atari-hero_newdata/train-*
- config_name: atari-hero_subset
data_files:
- split: train
path: atari-hero_subset/train-*
- config_name: atari-icehockey_newdata
data_files:
- split: train
path: atari-icehockey_newdata/train-*
- config_name: atari-icehockey_subset
data_files:
- split: train
path: atari-icehockey_subset/train-*
- config_name: atari-jamesbond_newdata
data_files:
- split: train
path: atari-jamesbond_newdata/train-*
- config_name: atari-jamesbond_subset
data_files:
- split: train
path: atari-jamesbond_subset/train-*
- config_name: atari-kangaroo_newdata
data_files:
- split: train
path: atari-kangaroo_newdata/train-*
- config_name: atari-kangaroo_subset
data_files:
- split: train
path: atari-kangaroo_subset/train-*
- config_name: atari-krull_newdata
data_files:
- split: train
path: atari-krull_newdata/train-*
- config_name: atari-krull_subset
data_files:
- split: train
path: atari-krull_subset/train-*
- config_name: atari-kungfumaster_newdata
data_files:
- split: train
path: atari-kungfumaster_newdata/train-*
- config_name: atari-kungfumaster_subset
data_files:
- split: train
path: atari-kungfumaster_subset/train-*
- config_name: atari-montezumarevenge_newdata
data_files:
- split: train
path: atari-montezumarevenge_newdata/train-*
- config_name: atari-montezumarevenge_subset
data_files:
- split: train
path: atari-montezumarevenge_subset/train-*
- config_name: atari-mspacman_newdata
data_files:
- split: train
path: atari-mspacman_newdata/train-*
- config_name: atari-namethisgame_newdata
data_files:
- split: train
path: atari-namethisgame_newdata/train-*
- config_name: atari-namethisgame_subset
data_files:
- split: train
path: atari-namethisgame_subset/train-*
- config_name: atari-phoenix_newdata
data_files:
- split: train
path: atari-phoenix_newdata/train-*
- config_name: atari-phoenix_subset
data_files:
- split: train
path: atari-phoenix_subset/train-*
- config_name: atari-pitfall_newdata
data_files:
- split: train
path: atari-pitfall_newdata/train-*
- config_name: atari-pitfall_subset
data_files:
- split: train
path: atari-pitfall_subset/train-*
- config_name: atari-pong_newdata
data_files:
- split: train
path: atari-pong_newdata/train-*
- config_name: atari-privateeye_newdata
data_files:
- split: train
path: atari-privateeye_newdata/train-*
- config_name: atari-privateeye_subset
data_files:
- split: train
path: atari-privateeye_subset/train-*
- config_name: atari-qbert_newdata
data_files:
- split: train
path: atari-qbert_newdata/train-*
- config_name: atari-qbert_subset
data_files:
- split: train
path: atari-qbert_subset/train-*
- config_name: atari-riverraid_newdata
data_files:
- split: train
path: atari-riverraid_newdata/train-*
- config_name: atari-riverraid_subset
data_files:
- split: train
path: atari-riverraid_subset/train-*
- config_name: atari-roadrunner_newdata
data_files:
- split: train
path: atari-roadrunner_newdata/train-*
- config_name: atari-roadrunner_subset
data_files:
- split: train
path: atari-roadrunner_subset/train-*
- config_name: atari-robotank_newdata
data_files:
- split: train
path: atari-robotank_newdata/train-*
- config_name: atari-robotank_subset
data_files:
- split: train
path: atari-robotank_subset/train-*
- config_name: atari-seaquest_newdata
data_files:
- split: train
path: atari-seaquest_newdata/train-*
- config_name: atari-seaquest_subset
data_files:
- split: train
path: atari-seaquest_subset/train-*
- config_name: atari-skiing_newdata
data_files:
- split: train
path: atari-skiing_newdata/train-*
- config_name: atari-skiing_subset
data_files:
- split: train
path: atari-skiing_subset/train-*
- config_name: atari-solaris_newdata
data_files:
- split: train
path: atari-solaris_newdata/train-*
- config_name: atari-solaris_subset
data_files:
- split: train
path: atari-solaris_subset/train-*
- config_name: atari-spaceinvaders_newdata
data_files:
- split: train
path: atari-spaceinvaders_newdata/train-*
- config_name: atari-stargunner_newdata
data_files:
- split: train
path: atari-stargunner_newdata/train-*
- config_name: atari-surround_newdata
data_files:
- split: train
path: atari-surround_newdata/train-*
- config_name: atari-surround_subset
data_files:
- split: train
path: atari-surround_subset/train-*
- config_name: atari-tennis_newdata
data_files:
- split: train
path: atari-tennis_newdata/train-*
- config_name: atari-tennis_subset
data_files:
- split: train
path: atari-tennis_subset/train-*
- config_name: atari-timepilot_newdata
data_files:
- split: train
path: atari-timepilot_newdata/train-*
- config_name: atari-timepilot_subset
data_files:
- split: train
path: atari-timepilot_subset/train-*
- config_name: atari-tutankham_newdata
data_files:
- split: train
path: atari-tutankham_newdata/train-*
- config_name: atari-tutankham_subset
data_files:
- split: train
path: atari-tutankham_subset/train-*
- config_name: atari-upndown_newdata
data_files:
- split: train
path: atari-upndown_newdata/train-*
- config_name: atari-upndown_subset
data_files:
- split: train
path: atari-upndown_subset/train-*
- config_name: atari-venture_newdata
data_files:
- split: train
path: atari-venture_newdata/train-*
- config_name: atari-venture_subset
data_files:
- split: train
path: atari-venture_subset/train-*
- config_name: atari-videopinball_newdata
data_files:
- split: train
path: atari-videopinball_newdata/train-*
- config_name: atari-videopinball_subset
data_files:
- split: train
path: atari-videopinball_subset/train-*
- config_name: atari-wizardofwor_newdata
data_files:
- split: train
path: atari-wizardofwor_newdata/train-*
- config_name: atari-wizardofwor_subset
data_files:
- split: train
path: atari-wizardofwor_subset/train-*
- config_name: atari-yarsrevenge_newdata
data_files:
- split: train
path: atari-yarsrevenge_newdata/train-*
- config_name: atari-yarsrevenge_subset
data_files:
- split: train
path: atari-yarsrevenge_subset/train-*
- config_name: atari-zaxxon_newdata
data_files:
- split: train
path: atari-zaxxon_newdata/train-*
- config_name: atari-zaxxon_subset
data_files:
- split: train
path: atari-zaxxon_subset/train-*
- config_name: babyai-action-obj-door_newdata
data_files:
- split: train
path: babyai-action-obj-door_newdata/train-*
- config_name: babyai-action-obj-door_subset
data_files:
- split: train
path: babyai-action-obj-door_subset/train-*
- config_name: babyai-blocked-unlock-pickup_newdata
data_files:
- split: train
path: babyai-blocked-unlock-pickup_newdata/train-*
- config_name: babyai-blocked-unlock-pickup_subset
data_files:
- split: train
path: babyai-blocked-unlock-pickup_subset/train-*
- config_name: babyai-boss-level-no-unlock_newdata
data_files:
- split: train
path: babyai-boss-level-no-unlock_newdata/train-*
- config_name: babyai-boss-level-no-unlock_subset
data_files:
- split: train
path: babyai-boss-level-no-unlock_subset/train-*
- config_name: babyai-boss-level_newdata
data_files:
- split: train
path: babyai-boss-level_newdata/train-*
- config_name: babyai-boss-level_subset
data_files:
- split: train
path: babyai-boss-level_subset/train-*
- config_name: babyai-find-obj-s5_newdata
data_files:
- split: train
path: babyai-find-obj-s5_newdata/train-*
- config_name: babyai-find-obj-s5_subset
data_files:
- split: train
path: babyai-find-obj-s5_subset/train-*
- config_name: babyai-go-to-door_newdata
data_files:
- split: train
path: babyai-go-to-door_newdata/train-*
- config_name: babyai-go-to-door_subset
data_files:
- split: train
path: babyai-go-to-door_subset/train-*
- config_name: babyai-go-to-imp-unlock_newdata
data_files:
- split: train
path: babyai-go-to-imp-unlock_newdata/train-*
- config_name: babyai-go-to-imp-unlock_subset
data_files:
- split: train
path: babyai-go-to-imp-unlock_subset/train-*
- config_name: babyai-go-to-local_newdata
data_files:
- split: train
path: babyai-go-to-local_newdata/train-*
- config_name: babyai-go-to-local_subset
data_files:
- split: train
path: babyai-go-to-local_subset/train-*
- config_name: babyai-go-to-obj-door_newdata
data_files:
- split: train
path: babyai-go-to-obj-door_newdata/train-*
- config_name: babyai-go-to-obj-door_subset
data_files:
- split: train
path: babyai-go-to-obj-door_subset/train-*
- config_name: babyai-go-to-obj_newdata
data_files:
- split: train
path: babyai-go-to-obj_newdata/train-*
- config_name: babyai-go-to-obj_subset
data_files:
- split: train
path: babyai-go-to-obj_subset/train-*
- config_name: babyai-go-to-red-ball-grey_newdata
data_files:
- split: train
path: babyai-go-to-red-ball-grey_newdata/train-*
- config_name: babyai-go-to-red-ball-grey_subset
data_files:
- split: train
path: babyai-go-to-red-ball-grey_subset/train-*
- config_name: babyai-go-to-red-ball-no-dists_newdata
data_files:
- split: train
path: babyai-go-to-red-ball-no-dists_newdata/train-*
- config_name: babyai-go-to-red-ball-no-dists_subset
data_files:
- split: train
path: babyai-go-to-red-ball-no-dists_subset/train-*
- config_name: babyai-go-to-red-ball_newdata
data_files:
- split: train
path: babyai-go-to-red-ball_newdata/train-*
- config_name: babyai-go-to-red-ball_subset
data_files:
- split: train
path: babyai-go-to-red-ball_subset/train-*
- config_name: babyai-go-to-red-blue-ball_newdata
data_files:
- split: train
path: babyai-go-to-red-blue-ball_newdata/train-*
- config_name: babyai-go-to-red-blue-ball_subset
data_files:
- split: train
path: babyai-go-to-red-blue-ball_subset/train-*
- config_name: babyai-go-to-seq_newdata
data_files:
- split: train
path: babyai-go-to-seq_newdata/train-*
- config_name: babyai-go-to-seq_subset
data_files:
- split: train
path: babyai-go-to-seq_subset/train-*
- config_name: babyai-go-to_newdata
data_files:
- split: train
path: babyai-go-to_newdata/train-*
- config_name: babyai-go-to_subset
data_files:
- split: train
path: babyai-go-to_subset/train-*
- config_name: babyai-key-corridor_newdata
data_files:
- split: train
path: babyai-key-corridor_newdata/train-*
- config_name: babyai-key-corridor_subset
data_files:
- split: train
path: babyai-key-corridor_subset/train-*
- config_name: babyai-mini-boss-level_newdata
data_files:
- split: train
path: babyai-mini-boss-level_newdata/train-*
- config_name: babyai-mini-boss-level_subset
data_files:
- split: train
path: babyai-mini-boss-level_subset/train-*
- config_name: babyai-move-two-across-s8n9_newdata
data_files:
- split: train
path: babyai-move-two-across-s8n9_newdata/train-*
- config_name: babyai-move-two-across-s8n9_subset
data_files:
- split: train
path: babyai-move-two-across-s8n9_subset/train-*
- config_name: babyai-one-room-s8_newdata
data_files:
- split: train
path: babyai-one-room-s8_newdata/train-*
- config_name: babyai-one-room-s8_subset
data_files:
- split: train
path: babyai-one-room-s8_subset/train-*
- config_name: babyai-open-door_newdata
data_files:
- split: train
path: babyai-open-door_newdata/train-*
- config_name: babyai-open-door_subset
data_files:
- split: train
path: babyai-open-door_subset/train-*
- config_name: babyai-open-doors-order-n4_newdata
data_files:
- split: train
path: babyai-open-doors-order-n4_newdata/train-*
- config_name: babyai-open-doors-order-n4_subset
data_files:
- split: train
path: babyai-open-doors-order-n4_subset/train-*
- config_name: babyai-open-red-door_newdata
data_files:
- split: train
path: babyai-open-red-door_newdata/train-*
- config_name: babyai-open-red-door_subset
data_files:
- split: train
path: babyai-open-red-door_subset/train-*
- config_name: babyai-open-two-doors_newdata
data_files:
- split: train
path: babyai-open-two-doors_newdata/train-*
- config_name: babyai-open-two-doors_subset
data_files:
- split: train
path: babyai-open-two-doors_subset/train-*
- config_name: babyai-open_newdata
data_files:
- split: train
path: babyai-open_newdata/train-*
- config_name: babyai-open_subset
data_files:
- split: train
path: babyai-open_subset/train-*
- config_name: babyai-pickup-above_newdata
data_files:
- split: train
path: babyai-pickup-above_newdata/train-*
- config_name: babyai-pickup-above_subset
data_files:
- split: train
path: babyai-pickup-above_subset/train-*
- config_name: babyai-pickup-dist_newdata
data_files:
- split: train
path: babyai-pickup-dist_newdata/train-*
- config_name: babyai-pickup-dist_subset
data_files:
- split: train
path: babyai-pickup-dist_subset/train-*
- config_name: babyai-pickup-loc_newdata
data_files:
- split: train
path: babyai-pickup-loc_newdata/train-*
- config_name: babyai-pickup-loc_subset
data_files:
- split: train
path: babyai-pickup-loc_subset/train-*
- config_name: babyai-pickup_newdata
data_files:
- split: train
path: babyai-pickup_newdata/train-*
- config_name: babyai-pickup_subset
data_files:
- split: train
path: babyai-pickup_subset/train-*
- config_name: babyai-put-next-local_newdata
data_files:
- split: train
path: babyai-put-next-local_newdata/train-*
- config_name: babyai-put-next-local_subset
data_files:
- split: train
path: babyai-put-next-local_subset/train-*
- config_name: babyai-put-next_newdata
data_files:
- split: train
path: babyai-put-next_newdata/train-*
- config_name: babyai-put-next_subset
data_files:
- split: train
path: babyai-put-next_subset/train-*
- config_name: babyai-synth-loc_newdata
data_files:
- split: train
path: babyai-synth-loc_newdata/train-*
- config_name: babyai-synth-loc_subset
data_files:
- split: train
path: babyai-synth-loc_subset/train-*
- config_name: babyai-synth-seq_newdata
data_files:
- split: train
path: babyai-synth-seq_newdata/train-*
- config_name: babyai-synth-seq_subset
data_files:
- split: train
path: babyai-synth-seq_subset/train-*
- config_name: babyai-synth_newdata
data_files:
- split: train
path: babyai-synth_newdata/train-*
- config_name: babyai-synth_subset
data_files:
- split: train
path: babyai-synth_subset/train-*
- config_name: babyai-unblock-pickup_newdata
data_files:
- split: train
path: babyai-unblock-pickup_newdata/train-*
- config_name: babyai-unblock-pickup_subset
data_files:
- split: train
path: babyai-unblock-pickup_subset/train-*
- config_name: babyai-unlock-local_newdata
data_files:
- split: train
path: babyai-unlock-local_newdata/train-*
- config_name: babyai-unlock-local_subset
data_files:
- split: train
path: babyai-unlock-local_subset/train-*
- config_name: babyai-unlock-pickup_newdata
data_files:
- split: train
path: babyai-unlock-pickup_newdata/train-*
- config_name: babyai-unlock-pickup_subset
data_files:
- split: train
path: babyai-unlock-pickup_subset/train-*
- config_name: babyai-unlock-to-unlock_newdata
data_files:
- split: train
path: babyai-unlock-to-unlock_newdata/train-*
- config_name: babyai-unlock-to-unlock_subset
data_files:
- split: train
path: babyai-unlock-to-unlock_subset/train-*
- config_name: babyai-unlock_newdata
data_files:
- split: train
path: babyai-unlock_newdata/train-*
- config_name: babyai-unlock_subset
data_files:
- split: train
path: babyai-unlock_subset/train-*
- config_name: metaworld-assembly_newdata
data_files:
- split: train
path: metaworld-assembly_newdata/train-*
- config_name: metaworld-assembly_subset
data_files:
- split: train
path: metaworld-assembly_subset/train-*
- config_name: metaworld-basketball_newdata
data_files:
- split: train
path: metaworld-basketball_newdata/train-*
- config_name: metaworld-basketball_subset
data_files:
- split: train
path: metaworld-basketball_subset/train-*
- config_name: metaworld-bin-picking_newdata
data_files:
- split: train
path: metaworld-bin-picking_newdata/train-*
- config_name: metaworld-box-close_newdata
data_files:
- split: train
path: metaworld-box-close_newdata/train-*
- config_name: metaworld-button-press-topdown-wall_newdata
data_files:
- split: train
path: metaworld-button-press-topdown-wall_newdata/train-*
- config_name: metaworld-button-press-topdown-wall_subset
data_files:
- split: train
path: metaworld-button-press-topdown-wall_subset/train-*
- config_name: metaworld-button-press-topdown_newdata
data_files:
- split: train
path: metaworld-button-press-topdown_newdata/train-*
- config_name: metaworld-button-press-topdown_subset
data_files:
- split: train
path: metaworld-button-press-topdown_subset/train-*
- config_name: metaworld-button-press-wall_newdata
data_files:
- split: train
path: metaworld-button-press-wall_newdata/train-*
- config_name: metaworld-button-press-wall_subset
data_files:
- split: train
path: metaworld-button-press-wall_subset/train-*
- config_name: metaworld-button-press_newdata
data_files:
- split: train
path: metaworld-button-press_newdata/train-*
- config_name: metaworld-button-press_subset
data_files:
- split: train
path: metaworld-button-press_subset/train-*
- config_name: metaworld-coffee-button_newdata
data_files:
- split: train
path: metaworld-coffee-button_newdata/train-*
- config_name: metaworld-coffee-button_subset
data_files:
- split: train
path: metaworld-coffee-button_subset/train-*
- config_name: metaworld-coffee-pull_newdata
data_files:
- split: train
path: metaworld-coffee-pull_newdata/train-*
- config_name: metaworld-coffee-pull_subset
data_files:
- split: train
path: metaworld-coffee-pull_subset/train-*
- config_name: metaworld-coffee-push_newdata
data_files:
- split: train
path: metaworld-coffee-push_newdata/train-*
- config_name: metaworld-coffee-push_subset
data_files:
- split: train
path: metaworld-coffee-push_subset/train-*
- config_name: metaworld-dial-turn_newdata
data_files:
- split: train
path: metaworld-dial-turn_newdata/train-*
- config_name: metaworld-dial-turn_subset
data_files:
- split: train
path: metaworld-dial-turn_subset/train-*
- config_name: metaworld-disassemble_newdata
data_files:
- split: train
path: metaworld-disassemble_newdata/train-*
- config_name: metaworld-disassemble_subset
data_files:
- split: train
path: metaworld-disassemble_subset/train-*
- config_name: metaworld-door-close_newdata
data_files:
- split: train
path: metaworld-door-close_newdata/train-*
- config_name: metaworld-door-close_subset
data_files:
- split: train
path: metaworld-door-close_subset/train-*
- config_name: metaworld-door-lock_newdata
data_files:
- split: train
path: metaworld-door-lock_newdata/train-*
- config_name: metaworld-door-open_newdata
data_files:
- split: train
path: metaworld-door-open_newdata/train-*
- config_name: metaworld-door-open_subset
data_files:
- split: train
path: metaworld-door-open_subset/train-*
- config_name: metaworld-door-unlock_newdata
data_files:
- split: train
path: metaworld-door-unlock_newdata/train-*
- config_name: metaworld-drawer-close_newdata
data_files:
- split: train
path: metaworld-drawer-close_newdata/train-*
- config_name: metaworld-drawer-close_subset
data_files:
- split: train
path: metaworld-drawer-close_subset/train-*
- config_name: metaworld-drawer-open_newdata
data_files:
- split: train
path: metaworld-drawer-open_newdata/train-*
- config_name: metaworld-drawer-open_subset
data_files:
- split: train
path: metaworld-drawer-open_subset/train-*
- config_name: metaworld-faucet-close_newdata
data_files:
- split: train
path: metaworld-faucet-close_newdata/train-*
- config_name: metaworld-faucet-close_subset
data_files:
- split: train
path: metaworld-faucet-close_subset/train-*
- config_name: metaworld-faucet-open_newdata
data_files:
- split: train
path: metaworld-faucet-open_newdata/train-*
- config_name: metaworld-faucet-open_subset
data_files:
- split: train
path: metaworld-faucet-open_subset/train-*
- config_name: metaworld-hammer_newdata
data_files:
- split: train
path: metaworld-hammer_newdata/train-*
- config_name: metaworld-hammer_subset
data_files:
- split: train
path: metaworld-hammer_subset/train-*
- config_name: metaworld-handle-press-side_newdata
data_files:
- split: train
path: metaworld-handle-press-side_newdata/train-*
- config_name: metaworld-handle-press-side_subset
data_files:
- split: train
path: metaworld-handle-press-side_subset/train-*
- config_name: metaworld-handle-press_newdata
data_files:
- split: train
path: metaworld-handle-press_newdata/train-*
- config_name: metaworld-handle-press_subset
data_files:
- split: train
path: metaworld-handle-press_subset/train-*
- config_name: metaworld-handle-pull-side_newdata
data_files:
- split: train
path: metaworld-handle-pull-side_newdata/train-*
- config_name: metaworld-handle-pull-side_subset
data_files:
- split: train
path: metaworld-handle-pull-side_subset/train-*
- config_name: metaworld-handle-pull_newdata
data_files:
- split: train
path: metaworld-handle-pull_newdata/train-*
- config_name: metaworld-handle-pull_subset
data_files:
- split: train
path: metaworld-handle-pull_subset/train-*
- config_name: metaworld-lever-pull_newdata
data_files:
- split: train
path: metaworld-lever-pull_newdata/train-*
- config_name: metaworld-lever-pull_subset
data_files:
- split: train
path: metaworld-lever-pull_subset/train-*
- config_name: metaworld-peg-insert-side_newdata
data_files:
- split: train
path: metaworld-peg-insert-side_newdata/train-*
- config_name: metaworld-peg-insert-side_subset
data_files:
- split: train
path: metaworld-peg-insert-side_subset/train-*
- config_name: metaworld-peg-unplug-side_newdata
data_files:
- split: train
path: metaworld-peg-unplug-side_newdata/train-*
- config_name: metaworld-peg-unplug-side_subset
data_files:
- split: train
path: metaworld-peg-unplug-side_subset/train-*
- config_name: metaworld-pick-out-of-hole_newdata
data_files:
- split: train
path: metaworld-pick-out-of-hole_newdata/train-*
- config_name: metaworld-pick-out-of-hole_subset
data_files:
- split: train
path: metaworld-pick-out-of-hole_subset/train-*
- config_name: metaworld-pick-place-wall_newdata
data_files:
- split: train
path: metaworld-pick-place-wall_newdata/train-*
- config_name: metaworld-pick-place-wall_subset
data_files:
- split: train
path: metaworld-pick-place-wall_subset/train-*
- config_name: metaworld-pick-place_newdata
data_files:
- split: train
path: metaworld-pick-place_newdata/train-*
- config_name: metaworld-pick-place_subset
data_files:
- split: train
path: metaworld-pick-place_subset/train-*
- config_name: metaworld-plate-slide-back-side_newdata
data_files:
- split: train
path: metaworld-plate-slide-back-side_newdata/train-*
- config_name: metaworld-plate-slide-back-side_subset
data_files:
- split: train
path: metaworld-plate-slide-back-side_subset/train-*
- config_name: metaworld-plate-slide-back_newdata
data_files:
- split: train
path: metaworld-plate-slide-back_newdata/train-*
- config_name: metaworld-plate-slide-back_subset
data_files:
- split: train
path: metaworld-plate-slide-back_subset/train-*
- config_name: metaworld-plate-slide-side_newdata
data_files:
- split: train
path: metaworld-plate-slide-side_newdata/train-*
- config_name: metaworld-plate-slide-side_subset
data_files:
- split: train
path: metaworld-plate-slide-side_subset/train-*
- config_name: metaworld-plate-slide_newdata
data_files:
- split: train
path: metaworld-plate-slide_newdata/train-*
- config_name: metaworld-plate-slide_subset
data_files:
- split: train
path: metaworld-plate-slide_subset/train-*
- config_name: metaworld-push-back_newdata
data_files:
- split: train
path: metaworld-push-back_newdata/train-*
- config_name: metaworld-push-back_subset
data_files:
- split: train
path: metaworld-push-back_subset/train-*
- config_name: metaworld-push-wall_newdata
data_files:
- split: train
path: metaworld-push-wall_newdata/train-*
- config_name: metaworld-push-wall_subset
data_files:
- split: train
path: metaworld-push-wall_subset/train-*
- config_name: metaworld-push_newdata
data_files:
- split: train
path: metaworld-push_newdata/train-*
- config_name: metaworld-push_subset
data_files:
- split: train
path: metaworld-push_subset/train-*
- config_name: metaworld-reach-wall_newdata
data_files:
- split: train
path: metaworld-reach-wall_newdata/train-*
- config_name: metaworld-reach-wall_subset
data_files:
- split: train
path: metaworld-reach-wall_subset/train-*
- config_name: metaworld-reach_newdata
data_files:
- split: train
path: metaworld-reach_newdata/train-*
- config_name: metaworld-reach_subset
data_files:
- split: train
path: metaworld-reach_subset/train-*
- config_name: metaworld-shelf-place_newdata
data_files:
- split: train
path: metaworld-shelf-place_newdata/train-*
- config_name: metaworld-shelf-place_subset
data_files:
- split: train
path: metaworld-shelf-place_subset/train-*
- config_name: metaworld-soccer_newdata
data_files:
- split: train
path: metaworld-soccer_newdata/train-*
- config_name: metaworld-soccer_subset
data_files:
- split: train
path: metaworld-soccer_subset/train-*
- config_name: metaworld-stick-pull_newdata
data_files:
- split: train
path: metaworld-stick-pull_newdata/train-*
- config_name: metaworld-stick-pull_subset
data_files:
- split: train
path: metaworld-stick-pull_subset/train-*
- config_name: metaworld-stick-push_newdata
data_files:
- split: train
path: metaworld-stick-push_newdata/train-*
- config_name: metaworld-stick-push_subset
data_files:
- split: train
path: metaworld-stick-push_subset/train-*
- config_name: metaworld-sweep-into_newdata
data_files:
- split: train
path: metaworld-sweep-into_newdata/train-*
- config_name: metaworld-sweep-into_subset
data_files:
- split: train
path: metaworld-sweep-into_subset/train-*
- config_name: metaworld-sweep_newdata
data_files:
- split: train
path: metaworld-sweep_newdata/train-*
- config_name: metaworld-sweep_subset
data_files:
- split: train
path: metaworld-sweep_subset/train-*
- config_name: metaworld-window-close_newdata
data_files:
- split: train
path: metaworld-window-close_newdata/train-*
- config_name: metaworld-window-close_subset
data_files:
- split: train
path: metaworld-window-close_subset/train-*
- config_name: metaworld-window-open_newdata
data_files:
- split: train
path: metaworld-window-open_newdata/train-*
- config_name: metaworld-window-open_subset
data_files:
- split: train
path: metaworld-window-open_subset/train-*
- config_name: mujoco-ant_newdata
data_files:
- split: train
path: mujoco-ant_newdata/train-*
- config_name: mujoco-ant_subset
data_files:
- split: train
path: mujoco-ant_subset/train-*
- config_name: mujoco-doublependulum_newdata
data_files:
- split: train
path: mujoco-doublependulum_newdata/train-*
- config_name: mujoco-doublependulum_subset
data_files:
- split: train
path: mujoco-doublependulum_subset/train-*
- config_name: mujoco-halfcheetah_newdata
data_files:
- split: train
path: mujoco-halfcheetah_newdata/train-*
- config_name: mujoco-hopper_newdata
data_files:
- split: train
path: mujoco-hopper_newdata/train-*
- config_name: mujoco-humanoid_newdata
data_files:
- split: train
path: mujoco-humanoid_newdata/train-*
- config_name: mujoco-humanoid_subset
data_files:
- split: train
path: mujoco-humanoid_subset/train-*
- config_name: mujoco-pendulum_newdata
data_files:
- split: train
path: mujoco-pendulum_newdata/train-*
- config_name: mujoco-pendulum_subset
data_files:
- split: train
path: mujoco-pendulum_subset/train-*
- config_name: mujoco-pusher_newdata
data_files:
- split: train
path: mujoco-pusher_newdata/train-*
- config_name: mujoco-pusher_subset
data_files:
- split: train
path: mujoco-pusher_subset/train-*
- config_name: mujoco-reacher_newdata
data_files:
- split: train
path: mujoco-reacher_newdata/train-*
- config_name: mujoco-reacher_subset
data_files:
- split: train
path: mujoco-reacher_subset/train-*
- config_name: mujoco-standup_newdata
data_files:
- split: train
path: mujoco-standup_newdata/train-*
- config_name: mujoco-standup_subset
data_files:
- split: train
path: mujoco-standup_subset/train-*
- config_name: mujoco-swimmer_newdata
data_files:
- split: train
path: mujoco-swimmer_newdata/train-*
- config_name: mujoco-swimmer_subset
data_files:
- split: train
path: mujoco-swimmer_subset/train-*
- config_name: mujoco-walker_newdata
data_files:
- split: train
path: mujoco-walker_newdata/train-*
- config_name: mujoco-walker_subset
data_files:
- split: train
path: mujoco-walker_subset/train-*
---
|
mshah1/speech_robust_bench | mshah1 | "2025-02-23T18:32:01Z" | 11,592 | 3 | [
"size_categories:1M<n<10M",
"modality:audio",
"modality:text",
"region:us"
] | null | "2024-01-21T01:39:08Z" | ---
dataset_info:
- config_name: accented_cv
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: age
dtype: string
- name: gender
dtype: string
- name: accents
dtype: string
- name: locale
dtype: string
- name: id
dtype: int64
splits:
- name: test
num_bytes: 55407854.085
num_examples: 1355
- name: test.clean
num_bytes: 25593824.0
num_examples: 640
download_size: 78598662
dataset_size: 81001678.08500001
- config_name: accented_cv_es
features:
- name: audio
dtype: audio
- name: accent
dtype: string
- name: text
dtype: string
- name: gender
dtype: string
- name: age
dtype: string
- name: locale
dtype: string
- name: id
dtype: int64
splits:
- name: test
num_bytes: 65868440.963
num_examples: 1483
download_size: 60557913
dataset_size: 65868440.963
- config_name: accented_cv_fr
features:
- name: file_name
dtype: string
- name: accent
dtype: string
- name: text
dtype: string
- name: gender
dtype: string
- name: age
dtype: string
- name: locale
dtype: string
- name: id
dtype: int64
splits:
- name: test
num_bytes: 337528
num_examples: 2171
download_size: 148493
dataset_size: 337528
- config_name: chime
features:
- name: audio
dtype: audio
- name: end_time
dtype: string
- name: start_time
dtype: string
- name: speaker
dtype: string
- name: ref
dtype: string
- name: location
dtype: string
- name: session_id
dtype: string
- name: text
dtype: string
splits:
- name: farfield
num_bytes: 521160936.31
num_examples: 6535
- name: nearfield
num_bytes: 1072274621.0799999
num_examples: 6535
download_size: 1532887016
dataset_size: 1593435557.3899999
- config_name: in-the-wild
features:
- name: audio
dtype: audio
- name: end_time
dtype: string
- name: start_time
dtype: string
- name: speaker
dtype: string
- name: ref
dtype: string
- name: location
dtype: string
- name: session_id
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: farfield
num_bytes: 521363521.31
num_examples: 6535
- name: nearfield
num_bytes: 1072477206.0799999
num_examples: 6535
download_size: 1533124839
dataset_size: 1593840727.3899999
- config_name: in-the-wild-AMI
features:
- name: meeting_id
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: begin_time
dtype: float32
- name: end_time
dtype: float32
- name: microphone_id
dtype: string
- name: speaker_id
dtype: string
splits:
- name: nearfield
num_bytes: 1382749390.9785259
num_examples: 6584
- name: farfield
num_bytes: 1040706691.1008185
num_examples: 6584
download_size: 2164898498
dataset_size: 2423456082.0793443
- config_name: in-the-wild-ami
features:
- name: meeting_id
dtype: string
- name: audio_id
dtype: string
- name: text
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: begin_time
dtype: float32
- name: end_time
dtype: float32
- name: microphone_id
dtype: string
- name: speaker_id
dtype: string
splits:
- name: nearfield
num_bytes: 1382749390.9785259
num_examples: 6584
- name: farfield
num_bytes: 1040706691.1008185
num_examples: 6584
download_size: 2164900274
dataset_size: 2423456082.0793443
- config_name: librispeech_asr-test.clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: speedup.1
num_bytes: 498896619.34
num_examples: 2620
- name: speedup.2
num_bytes: 415901075.34
num_examples: 2620
- name: speedup.3
num_bytes: 356617835.34
num_examples: 2620
- name: speedup.4
num_bytes: 312152811.34
num_examples: 2620
- name: slowdown.1
num_bytes: 712320343.34
num_examples: 2620
- name: slowdown.2
num_bytes: 830887339.34
num_examples: 2620
- name: slowdown.3
num_bytes: 996880127.34
num_examples: 2620
- name: slowdown.4
num_bytes: 1245871847.34
num_examples: 2620
- name: pitch_up.3
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_up.4
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_down.1
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_down.2
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_down.3
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_down.4
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_up.1
num_bytes: 623392458.5
num_examples: 2620
- name: pitch_up.2
num_bytes: 623392458.5
num_examples: 2620
- name: resample.1
num_bytes: 623392535.34
num_examples: 2620
- name: resample.2
num_bytes: 623392535.34
num_examples: 2620
- name: resample.3
num_bytes: 623392579.34
num_examples: 2620
- name: resample.4
num_bytes: 623392623.34
num_examples: 2620
- name: voice_conversion.4
num_bytes: 799852214.5
num_examples: 2620
- name: voice_conversion.3
num_bytes: 580185782.5
num_examples: 2620
- name: voice_conversion.1
num_bytes: 589259446.5
num_examples: 2620
- name: voice_conversion.2
num_bytes: 571175606.5
num_examples: 2620
- name: gain.1
num_bytes: 623392467.34
num_examples: 2620
- name: gain.2
num_bytes: 623392467.34
num_examples: 2620
- name: gain.3
num_bytes: 623392467.34
num_examples: 2620
- name: echo.1
num_bytes: 633872467.34
num_examples: 2620
- name: echo.2
num_bytes: 644352467.34
num_examples: 2620
- name: echo.3
num_bytes: 665312467.34
num_examples: 2620
- name: echo.4
num_bytes: 707232467.34
num_examples: 2620
- name: phaser.1
num_bytes: 623392467.34
num_examples: 2620
- name: phaser.2
num_bytes: 623392467.34
num_examples: 2620
- name: phaser.3
num_bytes: 623392467.34
num_examples: 2620
- name: tempo_up.1
num_bytes: 498896595.34
num_examples: 2620
- name: tempo_up.2
num_bytes: 415899351.34
num_examples: 2620
- name: tempo_up.3
num_bytes: 356615595.34
num_examples: 2620
- name: tempo_up.4
num_bytes: 312152811.34
num_examples: 2620
- name: tempo_down.1
num_bytes: 712318083.34
num_examples: 2620
- name: tempo_down.2
num_bytes: 830885583.34
num_examples: 2620
- name: tempo_down.3
num_bytes: 996880103.34
num_examples: 2620
- name: tempo_down.4
num_bytes: 1245871847.34
num_examples: 2620
- name: gain.4
num_bytes: 623392467.34
num_examples: 2620
- name: phaser.4
num_bytes: 623392467.34
num_examples: 2620
- name: lowpass.1
num_bytes: 623392467.34
num_examples: 2620
- name: lowpass.2
num_bytes: 623392467.34
num_examples: 2620
- name: lowpass.3
num_bytes: 623392467.34
num_examples: 2620
- name: lowpass.4
num_bytes: 623392467.34
num_examples: 2620
- name: highpass.1
num_bytes: 623392467.34
num_examples: 2620
- name: highpass.2
num_bytes: 623392467.34
num_examples: 2620
- name: highpass.3
num_bytes: 623392467.34
num_examples: 2620
- name: highpass.4
num_bytes: 623392467.34
num_examples: 2620
- name: voice_conversion_vctk.1
num_bytes: 495165825.88
num_examples: 2620
- name: universal_adv.1
num_bytes: 623392467.34
num_examples: 2620
- name: rir.1
num_bytes: 705636818.5
num_examples: 2620
- name: rir.2
num_bytes: 744484818.5
num_examples: 2620
- name: rir.3
num_bytes: 758740818.5
num_examples: 2620
- name: rir.4
num_bytes: 776116818.5
num_examples: 2620
- name: gnoise.1
num_bytes: 623392455.88
num_examples: 2620
- name: gnoise.2
num_bytes: 623392455.88
num_examples: 2620
- name: gnoise.3
num_bytes: 623392455.88
num_examples: 2620
- name: gnoise.4
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_esc50.1
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_esc50.2
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_esc50.3
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_esc50.4
num_bytes: 623392455.88
num_examples: 2620
- name: music.1
num_bytes: 623392455.88
num_examples: 2620
- name: music.2
num_bytes: 623392455.88
num_examples: 2620
- name: music.3
num_bytes: 623392455.88
num_examples: 2620
- name: music.4
num_bytes: 623392455.88
num_examples: 2620
- name: crosstalk.1
num_bytes: 623392455.88
num_examples: 2620
- name: crosstalk.2
num_bytes: 623392455.88
num_examples: 2620
- name: crosstalk.3
num_bytes: 623392455.88
num_examples: 2620
- name: crosstalk.4
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_musan.1
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_musan.2
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_musan.3
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_musan.4
num_bytes: 623392455.88
num_examples: 2620
- name: real_rir.1
num_bytes: 638169615.88
num_examples: 2620
- name: real_rir.2
num_bytes: 694281819.88
num_examples: 2620
- name: real_rir.3
num_bytes: 713200537.88
num_examples: 2620
- name: real_rir.4
num_bytes: 1515177725.88
num_examples: 2620
- name: env_noise.1
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise.2
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise.3
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise.4
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_wham.1
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_wham.2
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_wham.3
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_wham.4
num_bytes: 623392455.88
num_examples: 2620
- name: tremolo.1
num_bytes: 623392455.88
num_examples: 2620
- name: tremolo.2
num_bytes: 623392455.88
num_examples: 2620
- name: tremolo.3
num_bytes: 623392455.88
num_examples: 2620
- name: tremolo.4
num_bytes: 623392455.88
num_examples: 2620
- name: treble.1
num_bytes: 623392455.88
num_examples: 2620
- name: treble.2
num_bytes: 623392455.88
num_examples: 2620
- name: treble.3
num_bytes: 623392455.88
num_examples: 2620
- name: treble.4
num_bytes: 623392455.88
num_examples: 2620
- name: bass.1
num_bytes: 623392455.88
num_examples: 2620
- name: bass.2
num_bytes: 623392455.88
num_examples: 2620
- name: bass.3
num_bytes: 623392455.88
num_examples: 2620
- name: bass.4
num_bytes: 623392455.88
num_examples: 2620
- name: chorus.1
num_bytes: 626913735.88
num_examples: 2620
- name: chorus.2
num_bytes: 628590535.88
num_examples: 2620
- name: chorus.3
num_bytes: 630267335.88
num_examples: 2620
- name: chorus.4
num_bytes: 631944135.88
num_examples: 2620
- name: None.0
num_bytes: 367982506.42
num_examples: 2620
download_size: 67547733720
dataset_size: 68871044112.51988
- config_name: librispeech_asr-test.clean_pertEval_500_30
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: pert_idx
dtype: int64
splits:
- name: gnoise.1
num_bytes: 3592401090.0
num_examples: 15000
- name: env_noise_esc50.1
num_bytes: 3592401090.0
num_examples: 15000
download_size: 7170899040
dataset_size: 7184802180.0
- config_name: multilingual_librispeech-french_test
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: gnoise.1
num_bytes: 1160858614.324
num_examples: 2426
- name: gnoise.2
num_bytes: 1160858614.324
num_examples: 2426
- name: gnoise.3
num_bytes: 1160858614.324
num_examples: 2426
- name: speedup.1
num_bytes: 928910526.324
num_examples: 2426
- name: speedup.3
num_bytes: 663829084.324
num_examples: 2426
- name: pitch_up.1
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_up.2
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_up.3
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_down.1
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_down.2
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise.1
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise.3
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_wham.1
num_bytes: 1160858614.324
num_examples: 2426
- name: slowdown.2
num_bytes: 1547440398.324
num_examples: 2426
- name: real_rir.3
num_bytes: 1241772582.324
num_examples: 2426
- name: env_noise.2
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_wham.2
num_bytes: 1160858614.324
num_examples: 2426
- name: speedup.2
num_bytes: 774280064.324
num_examples: 2426
- name: slowdown.1
num_bytes: 1326537936.324
num_examples: 2426
- name: slowdown.3
num_bytes: 1856702974.324
num_examples: 2426
- name: env_noise_esc50.1
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_esc50.2
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_esc50.3
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_musan.1
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_musan.2
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_musan.3
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_wham.3
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_down.3
num_bytes: 1160858614.324
num_examples: 2426
- name: rir.1
num_bytes: 1235965442.324
num_examples: 2426
- name: rir.2
num_bytes: 1273085442.324
num_examples: 2426
- name: rir.3
num_bytes: 1284653442.324
num_examples: 2426
- name: real_rir.1
num_bytes: 1174422106.324
num_examples: 2426
- name: real_rir.2
num_bytes: 1226129514.324
num_examples: 2426
- name: resample.1
num_bytes: 1160858656.324
num_examples: 2426
- name: resample.2
num_bytes: 1160858642.324
num_examples: 2426
- name: resample.3
num_bytes: 1160858694.324
num_examples: 2426
- name: gain.1
num_bytes: 1160858614.324
num_examples: 2426
- name: gain.2
num_bytes: 1160858614.324
num_examples: 2426
- name: gain.3
num_bytes: 1160858614.324
num_examples: 2426
- name: echo.1
num_bytes: 1170562614.324
num_examples: 2426
- name: echo.2
num_bytes: 1180266614.324
num_examples: 2426
- name: echo.3
num_bytes: 1199674614.324
num_examples: 2426
- name: phaser.1
num_bytes: 1160858614.324
num_examples: 2426
- name: phaser.2
num_bytes: 1160858614.324
num_examples: 2426
- name: phaser.3
num_bytes: 1160858614.324
num_examples: 2426
- name: tempo_up.1
num_bytes: 928910510.324
num_examples: 2426
- name: tempo_up.2
num_bytes: 774278396.324
num_examples: 2426
- name: tempo_up.3
num_bytes: 663826914.324
num_examples: 2426
- name: tempo_down.1
num_bytes: 1326535834.324
num_examples: 2426
- name: tempo_down.2
num_bytes: 1547438832.324
num_examples: 2426
- name: tempo_down.3
num_bytes: 1856702944.324
num_examples: 2426
- name: lowpass.1
num_bytes: 1160858614.324
num_examples: 2426
- name: lowpass.2
num_bytes: 1160858614.324
num_examples: 2426
- name: lowpass.3
num_bytes: 1160858614.324
num_examples: 2426
- name: highpass.1
num_bytes: 1160858614.324
num_examples: 2426
- name: highpass.2
num_bytes: 1160858614.324
num_examples: 2426
- name: highpass.3
num_bytes: 1160858614.324
num_examples: 2426
- name: music.1
num_bytes: 1160858614.324
num_examples: 2426
- name: music.2
num_bytes: 1160858614.324
num_examples: 2426
- name: music.3
num_bytes: 1160858614.324
num_examples: 2426
- name: crosstalk.1
num_bytes: 1160858614.324
num_examples: 2426
- name: crosstalk.2
num_bytes: 1160858614.324
num_examples: 2426
- name: crosstalk.3
num_bytes: 1160858614.324
num_examples: 2426
- name: tremolo.1
num_bytes: 1160858614.324
num_examples: 2426
- name: tremolo.2
num_bytes: 1160858614.324
num_examples: 2426
- name: tremolo.3
num_bytes: 1160858614.324
num_examples: 2426
- name: treble.1
num_bytes: 1160858614.324
num_examples: 2426
- name: treble.2
num_bytes: 1160858614.324
num_examples: 2426
- name: treble.3
num_bytes: 1160858614.324
num_examples: 2426
- name: bass.1
num_bytes: 1160858614.324
num_examples: 2426
- name: bass.2
num_bytes: 1160858614.324
num_examples: 2426
- name: bass.3
num_bytes: 1160858614.324
num_examples: 2426
- name: chorus.1
num_bytes: 1164119158.324
num_examples: 2426
- name: chorus.2
num_bytes: 1165671798.324
num_examples: 2426
- name: chorus.3
num_bytes: 1167224438.324
num_examples: 2426
- name: gnoise.4
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise.4
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_esc50.4
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_musan.4
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_wham.4
num_bytes: 1160858614.324
num_examples: 2426
- name: speedup.4
num_bytes: 580988352.324
num_examples: 2426
- name: slowdown.4
num_bytes: 2320599166.324
num_examples: 2426
- name: pitch_up.4
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_down.4
num_bytes: 1160858614.324
num_examples: 2426
- name: rir.4
num_bytes: 1302669442.324
num_examples: 2426
- name: real_rir.4
num_bytes: 2020765820.324
num_examples: 2426
- name: resample.4
num_bytes: 1160858814.324
num_examples: 2426
- name: gain.4
num_bytes: 1160858614.324
num_examples: 2426
- name: echo.4
num_bytes: 1238490614.324
num_examples: 2426
- name: phaser.4
num_bytes: 1160858614.324
num_examples: 2426
- name: tempo_up.4
num_bytes: 580988352.324
num_examples: 2426
- name: tempo_down.4
num_bytes: 2320599166.324
num_examples: 2426
- name: lowpass.4
num_bytes: 1160858614.324
num_examples: 2426
- name: highpass.4
num_bytes: 1160858614.324
num_examples: 2426
- name: music.4
num_bytes: 1160858614.324
num_examples: 2426
- name: crosstalk.4
num_bytes: 1160858614.324
num_examples: 2426
- name: tremolo.4
num_bytes: 1160858614.324
num_examples: 2426
- name: treble.4
num_bytes: 1160858614.324
num_examples: 2426
- name: bass.4
num_bytes: 1160858614.324
num_examples: 2426
- name: chorus.4
num_bytes: 1168777078.324
num_examples: 2426
download_size: 121459263523
dataset_size: 119151206300.40016
- config_name: multilingual_librispeech-german_test
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: gnoise.1
num_bytes: 1648113341.356
num_examples: 3394
- name: gnoise.2
num_bytes: 1648113341.356
num_examples: 3394
- name: gnoise.3
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise.1
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise.2
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise.3
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_esc50.1
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_esc50.2
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_esc50.3
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_musan.1
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_musan.2
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_musan.3
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_wham.1
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_wham.2
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_wham.3
num_bytes: 1648113341.356
num_examples: 3394
- name: speedup.1
num_bytes: 1318802109.356
num_examples: 3394
- name: speedup.2
num_bytes: 1099263673.356
num_examples: 3394
- name: speedup.3
num_bytes: 942449495.356
num_examples: 3394
- name: slowdown.1
num_bytes: 1883338719.356
num_examples: 3394
- name: slowdown.2
num_bytes: 2196967643.356
num_examples: 3394
- name: slowdown.3
num_bytes: 2636047081.356
num_examples: 3394
- name: pitch_up.1
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_up.2
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_up.3
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_down.1
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_down.2
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_down.3
num_bytes: 1648113341.356
num_examples: 3394
- name: rir.1
num_bytes: 1755612473.356
num_examples: 3394
- name: rir.2
num_bytes: 1806508473.356
num_examples: 3394
- name: rir.3
num_bytes: 1821740473.356
num_examples: 3394
- name: real_rir.1
num_bytes: 1666887689.356
num_examples: 3394
- name: real_rir.2
num_bytes: 1738836201.356
num_examples: 3394
- name: real_rir.3
num_bytes: 1764380853.356
num_examples: 3394
- name: resample.1
num_bytes: 1648113369.356
num_examples: 3394
- name: resample.2
num_bytes: 1648113363.356
num_examples: 3394
- name: resample.3
num_bytes: 1648113411.356
num_examples: 3394
- name: gain.1
num_bytes: 1648113341.356
num_examples: 3394
- name: gain.2
num_bytes: 1648113341.356
num_examples: 3394
- name: gain.3
num_bytes: 1648113341.356
num_examples: 3394
- name: echo.1
num_bytes: 1661689341.356
num_examples: 3394
- name: echo.2
num_bytes: 1675265341.356
num_examples: 3394
- name: echo.3
num_bytes: 1702417341.356
num_examples: 3394
- name: phaser.1
num_bytes: 1648113341.356
num_examples: 3394
- name: phaser.2
num_bytes: 1648113341.356
num_examples: 3394
- name: phaser.3
num_bytes: 1648113341.356
num_examples: 3394
- name: tempo_up.1
num_bytes: 1318802103.356
num_examples: 3394
- name: tempo_up.2
num_bytes: 1099261101.356
num_examples: 3394
- name: tempo_up.3
num_bytes: 942446355.356
num_examples: 3394
- name: tempo_down.1
num_bytes: 1883335523.356
num_examples: 3394
- name: tempo_down.2
num_bytes: 2196965581.356
num_examples: 3394
- name: tempo_down.3
num_bytes: 2636047065.356
num_examples: 3394
- name: lowpass.1
num_bytes: 1648113341.356
num_examples: 3394
- name: lowpass.2
num_bytes: 1648113341.356
num_examples: 3394
- name: lowpass.3
num_bytes: 1648113341.356
num_examples: 3394
- name: highpass.1
num_bytes: 1648113341.356
num_examples: 3394
- name: highpass.2
num_bytes: 1648113341.356
num_examples: 3394
- name: highpass.3
num_bytes: 1648113341.356
num_examples: 3394
- name: music.1
num_bytes: 1648113341.356
num_examples: 3394
- name: music.2
num_bytes: 1648113341.356
num_examples: 3394
- name: music.3
num_bytes: 1648113341.356
num_examples: 3394
- name: crosstalk.1
num_bytes: 1648113341.356
num_examples: 3394
- name: crosstalk.2
num_bytes: 1648113341.356
num_examples: 3394
- name: crosstalk.3
num_bytes: 1648113341.356
num_examples: 3394
- name: tremolo.1
num_bytes: 1648113341.356
num_examples: 3394
- name: tremolo.2
num_bytes: 1648113341.356
num_examples: 3394
- name: tremolo.3
num_bytes: 1648113341.356
num_examples: 3394
- name: treble.1
num_bytes: 1648113341.356
num_examples: 3394
- name: treble.2
num_bytes: 1648113341.356
num_examples: 3394
- name: treble.3
num_bytes: 1648113341.356
num_examples: 3394
- name: bass.1
num_bytes: 1648113341.356
num_examples: 3394
- name: bass.2
num_bytes: 1648113341.356
num_examples: 3394
- name: bass.3
num_bytes: 1648113341.356
num_examples: 3394
- name: chorus.1
num_bytes: 1652674877.356
num_examples: 3394
- name: chorus.2
num_bytes: 1654847037.356
num_examples: 3394
- name: chorus.3
num_bytes: 1657019197.356
num_examples: 3394
- name: gnoise.4
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise.4
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_esc50.4
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_musan.4
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_wham.4
num_bytes: 1648113341.356
num_examples: 3394
- name: speedup.4
num_bytes: 824835247.356
num_examples: 3394
- name: slowdown.4
num_bytes: 3294669551.356
num_examples: 3394
- name: pitch_up.4
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_down.4
num_bytes: 1648113341.356
num_examples: 3394
- name: rir.4
num_bytes: 1846956473.356
num_examples: 3394
- name: real_rir.4
num_bytes: 2846504095.356
num_examples: 3394
- name: resample.4
num_bytes: 1648113451.356
num_examples: 3394
- name: gain.4
num_bytes: 1648113341.356
num_examples: 3394
- name: echo.4
num_bytes: 1756721341.356
num_examples: 3394
- name: phaser.4
num_bytes: 1648113341.356
num_examples: 3394
- name: tempo_up.4
num_bytes: 824835247.356
num_examples: 3394
- name: tempo_down.4
num_bytes: 3294669551.356
num_examples: 3394
- name: lowpass.4
num_bytes: 1648113341.356
num_examples: 3394
- name: highpass.4
num_bytes: 1648113341.356
num_examples: 3394
- name: music.4
num_bytes: 1648113341.356
num_examples: 3394
- name: crosstalk.4
num_bytes: 1648113341.356
num_examples: 3394
- name: tremolo.4
num_bytes: 1648113341.356
num_examples: 3394
- name: treble.4
num_bytes: 1648113341.356
num_examples: 3394
- name: bass.4
num_bytes: 1648113341.356
num_examples: 3394
- name: chorus.4
num_bytes: 1659191357.356
num_examples: 3394
download_size: 163104340817
dataset_size: 169131696059.59995
- config_name: multilingual_librispeech-spanish_test
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: None.0
num_bytes: 596762288.01
num_examples: 2385
- name: env_noise.1
num_bytes: 1153485830.17
num_examples: 2385
- name: env_noise.2
num_bytes: 1153485830.17
num_examples: 2385
- name: env_noise.3
num_bytes: 1153485830.17
num_examples: 2385
- name: env_noise.4
num_bytes: 1153485830.17
num_examples: 2385
- name: rir.1
num_bytes: 1268493860.17
num_examples: 2385
- name: rir.2
num_bytes: 1252109860.17
num_examples: 2385
- name: rir.3
num_bytes: 1249517860.17
num_examples: 2385
- name: rir.4
num_bytes: 1222893860.17
num_examples: 2385
- name: speedup.1
num_bytes: 923001764.17
num_examples: 2385
- name: speedup.2
num_bytes: 769347364.17
num_examples: 2385
- name: speedup.3
num_bytes: 659593516.17
num_examples: 2385
- name: speedup.4
num_bytes: 577275652.17
num_examples: 2385
- name: slowdown.1
num_bytes: 1318119422.17
num_examples: 2385
- name: slowdown.2
num_bytes: 1537627530.17
num_examples: 2385
- name: slowdown.3
num_bytes: 1844938056.17
num_examples: 2385
- name: slowdown.4
num_bytes: 2305906194.17
num_examples: 2385
- name: pitch_up.3
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_up.4
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_down.1
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_down.2
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_down.3
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_down.4
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_up.1
num_bytes: 1153485821.72
num_examples: 2385
- name: pitch_up.2
num_bytes: 1153485821.72
num_examples: 2385
- name: resample.2
num_bytes: 1153485842.17
num_examples: 2385
- name: gain.1
num_bytes: 1153485830.17
num_examples: 2385
- name: gain.2
num_bytes: 1153485830.17
num_examples: 2385
- name: gain.3
num_bytes: 1153485830.17
num_examples: 2385
- name: gain.4
num_bytes: 1153485830.17
num_examples: 2385
- name: echo.1
num_bytes: 1163025830.17
num_examples: 2385
- name: echo.2
num_bytes: 1172565830.17
num_examples: 2385
- name: echo.3
num_bytes: 1191645830.17
num_examples: 2385
- name: echo.4
num_bytes: 1229805830.17
num_examples: 2385
- name: tempo_up.1
num_bytes: 923001758.17
num_examples: 2385
- name: tempo_up.2
num_bytes: 769345632.17
num_examples: 2385
- name: tempo_up.3
num_bytes: 659591372.17
num_examples: 2385
- name: tempo_up.4
num_bytes: 577275652.17
num_examples: 2385
- name: tempo_down.1
num_bytes: 1318117252.17
num_examples: 2385
- name: tempo_down.2
num_bytes: 1537626028.17
num_examples: 2385
- name: tempo_down.3
num_bytes: 1844938048.17
num_examples: 2385
- name: tempo_down.4
num_bytes: 2305906194.17
num_examples: 2385
- name: phaser.1
num_bytes: 1153485830.17
num_examples: 2385
- name: phaser.2
num_bytes: 1153485830.17
num_examples: 2385
- name: phaser.3
num_bytes: 1153485830.17
num_examples: 2385
- name: phaser.4
num_bytes: 1153485830.17
num_examples: 2385
- name: resample.1
num_bytes: 1153485840.17
num_examples: 2385
- name: resample.3
num_bytes: 1153485850.17
num_examples: 2385
- name: resample.4
num_bytes: 1153485882.17
num_examples: 2385
- name: lowpass.1
num_bytes: 1153485830.17
num_examples: 2385
- name: lowpass.2
num_bytes: 1153485830.17
num_examples: 2385
- name: lowpass.3
num_bytes: 1153485830.17
num_examples: 2385
- name: lowpass.4
num_bytes: 1153485830.17
num_examples: 2385
- name: highpass.1
num_bytes: 1153485830.17
num_examples: 2385
- name: highpass.2
num_bytes: 1153485830.17
num_examples: 2385
- name: highpass.3
num_bytes: 1153485830.17
num_examples: 2385
- name: highpass.4
num_bytes: 1153485830.17
num_examples: 2385
- name: gnoise.1
num_bytes: 1153485822.49
num_examples: 2385
- name: gnoise.2
num_bytes: 1153485822.49
num_examples: 2385
- name: gnoise.3
num_bytes: 1153485822.49
num_examples: 2385
- name: gnoise.4
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_esc50.1
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_esc50.2
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_esc50.3
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_esc50.4
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_musan.1
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_musan.2
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_musan.3
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_musan.4
num_bytes: 1153485822.49
num_examples: 2385
- name: music.1
num_bytes: 1153485822.49
num_examples: 2385
- name: music.2
num_bytes: 1153485822.49
num_examples: 2385
- name: music.3
num_bytes: 1153485822.49
num_examples: 2385
- name: music.4
num_bytes: 1153485822.49
num_examples: 2385
- name: crosstalk.1
num_bytes: 1153485822.49
num_examples: 2385
- name: crosstalk.2
num_bytes: 1153485822.49
num_examples: 2385
- name: crosstalk.3
num_bytes: 1153485822.49
num_examples: 2385
- name: crosstalk.4
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_wham.1
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_wham.2
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_wham.3
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_wham.4
num_bytes: 1153485822.49
num_examples: 2385
- name: tremolo.1
num_bytes: 1153485822.49
num_examples: 2385
- name: tremolo.2
num_bytes: 1153485822.49
num_examples: 2385
- name: tremolo.4
num_bytes: 1153485822.49
num_examples: 2385
- name: treble.1
num_bytes: 1153485822.49
num_examples: 2385
- name: treble.2
num_bytes: 1153485822.49
num_examples: 2385
- name: treble.3
num_bytes: 1153485822.49
num_examples: 2385
- name: treble.4
num_bytes: 1153485822.49
num_examples: 2385
- name: bass.1
num_bytes: 1153485822.49
num_examples: 2385
- name: bass.2
num_bytes: 1153485822.49
num_examples: 2385
- name: bass.3
num_bytes: 1153485822.49
num_examples: 2385
- name: bass.4
num_bytes: 1153485822.49
num_examples: 2385
- name: chorus.1
num_bytes: 1156691262.49
num_examples: 2385
- name: chorus.2
num_bytes: 1158217662.49
num_examples: 2385
- name: chorus.3
num_bytes: 1159744062.49
num_examples: 2385
- name: chorus.4
num_bytes: 1161270462.49
num_examples: 2385
- name: tremolo.3
num_bytes: 1153485822.49
num_examples: 2385
- name: voice_conversion_bark.1
num_bytes: 1457427139.875
num_examples: 2385
download_size: 119056891470
dataset_size: 114748819328.10516
- config_name: multilingual_librispeech-spanish_test_pertEval_500_30
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: pert_idx
dtype: int64
splits:
- name: gnoise.1
num_bytes: 7341021960.0
num_examples: 15000
- name: env_noise_esc50.1
num_bytes: 7341021960.0
num_examples: 15000
download_size: 14645523867
dataset_size: 14682043920.0
- config_name: tedlium-release3_test
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: string
- name: gender
dtype:
class_label:
names:
'0': unknown
'1': female
'2': male
- name: file
dtype: string
- name: id
dtype: string
splits:
- name: None.0
num_bytes: 277376247.9680054
num_examples: 1155
- name: speedup.1
num_bytes: 221990159.49965963
num_examples: 1155
- name: speedup.2
num_bytes: 185066240.47311097
num_examples: 1155
- name: speedup.3
num_bytes: 158691929.4792376
num_examples: 1155
- name: slowdown.1
num_bytes: 316938966.95371
num_examples: 1155
- name: slowdown.2
num_bytes: 369687787.0762423
num_examples: 1155
- name: slowdown.3
num_bytes: 443535996.23893803
num_examples: 1155
- name: pitch_up.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_up.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_up.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_down.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_down.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_down.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: rir.1
num_bytes: 313788218.1586113
num_examples: 1155
- name: rir.2
num_bytes: 330268000.32334924
num_examples: 1155
- name: rir.3
num_bytes: 336608313.46153843
num_examples: 1155
- name: voice_conversion_vctk.1
num_bytes: 216990920.87134105
num_examples: 1155
- name: resample.1
num_bytes: 277376301.4329476
num_examples: 1155
- name: resample.2
num_bytes: 277376301.4329476
num_examples: 1155
- name: resample.3
num_bytes: 277376354.89788973
num_examples: 1155
- name: gain.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: gain.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: gain.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: echo.1
num_bytes: 281996247.9680054
num_examples: 1155
- name: echo.2
num_bytes: 286616247.9680054
num_examples: 1155
- name: echo.3
num_bytes: 295856247.9680054
num_examples: 1155
- name: phaser.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: phaser.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: phaser.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: tempo_up.1
num_bytes: 221989786.81756297
num_examples: 1155
- name: tempo_up.2
num_bytes: 185065496.68141592
num_examples: 1155
- name: tempo_up.3
num_bytes: 158690987.55275697
num_examples: 1155
- name: tempo_down.1
num_bytes: 316938020.3097345
num_examples: 1155
- name: tempo_down.2
num_bytes: 369686999.254595
num_examples: 1155
- name: tempo_down.3
num_bytes: 443535631.41933286
num_examples: 1155
- name: lowpass.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: lowpass.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: lowpass.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: highpass.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: highpass.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: highpass.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: speedup.4
num_bytes: 138910125.75561607
num_examples: 1155
- name: slowdown.4
num_bytes: 554308545.8577263
num_examples: 1155
- name: pitch_up.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_down.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: rir.4
num_bytes: 345514943.8223281
num_examples: 1155
- name: resample.4
num_bytes: 277376474.4077604
num_examples: 1155
- name: gain.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: echo.4
num_bytes: 314336247.9680054
num_examples: 1155
- name: phaser.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: tempo_up.4
num_bytes: 138910125.75561607
num_examples: 1155
- name: tempo_down.4
num_bytes: 554308545.8577263
num_examples: 1155
- name: lowpass.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: highpass.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: gnoise.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: gnoise.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: gnoise.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: music.1
num_bytes: 301958728.16
num_examples: 1155
- name: music.2
num_bytes: 301958728.16
num_examples: 1155
- name: music.3
num_bytes: 301958728.16
num_examples: 1155
- name: music.4
num_bytes: 301958728.16
num_examples: 1155
- name: crosstalk.1
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_esc50.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: env_noise_esc50.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: env_noise_esc50.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: gnoise.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: crosstalk.2
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_esc50.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: crosstalk.3
num_bytes: 301958728.16
num_examples: 1155
- name: crosstalk.4
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_musan.1
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_musan.2
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_musan.3
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_musan.4
num_bytes: 301958728.16
num_examples: 1155
- name: real_rir.1
num_bytes: 308750878.16
num_examples: 1155
- name: real_rir.2
num_bytes: 333286988.16
num_examples: 1155
- name: real_rir.3
num_bytes: 341205738.16
num_examples: 1155
- name: real_rir.4
num_bytes: 715155314.16
num_examples: 1155
- name: env_noise.1
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise.2
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise.3
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise.4
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_wham.1
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_wham.2
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_wham.3
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_wham.4
num_bytes: 301958728.16
num_examples: 1155
- name: tremolo.1
num_bytes: 301958728.16
num_examples: 1155
- name: tremolo.2
num_bytes: 301958728.16
num_examples: 1155
- name: tremolo.3
num_bytes: 301958728.16
num_examples: 1155
- name: tremolo.4
num_bytes: 301958728.16
num_examples: 1155
- name: treble.1
num_bytes: 301958728.16
num_examples: 1155
- name: treble.2
num_bytes: 301958728.16
num_examples: 1155
- name: treble.3
num_bytes: 301958728.16
num_examples: 1155
- name: treble.4
num_bytes: 301958728.16
num_examples: 1155
- name: bass.1
num_bytes: 301958728.16
num_examples: 1155
- name: bass.2
num_bytes: 301958728.16
num_examples: 1155
- name: bass.3
num_bytes: 301958728.16
num_examples: 1155
- name: bass.4
num_bytes: 301958728.16
num_examples: 1155
- name: chorus.1
num_bytes: 303511048.16
num_examples: 1155
- name: chorus.2
num_bytes: 304250248.16
num_examples: 1155
- name: chorus.4
num_bytes: 305728648.16
num_examples: 1155
- name: chorus.3
num_bytes: 304989448.16
num_examples: 1155
download_size: 58723208514
dataset_size: 30342709961.007984
configs:
- config_name: accented_cv
data_files:
- split: test
path: accented_cv/test-*
- split: test.clean
path: accented_cv/test.clean-*
- config_name: accented_cv_es
data_files:
- split: test
path: accented_cv_es/test-*
- config_name: accented_cv_fr
data_files:
- split: test
path: accented_cv_fr/test-*
- config_name: chime
data_files:
- split: farfield
path: chime/farfield-*
- split: nearfield
path: chime/nearfield-*
- config_name: in-the-wild
data_files:
- split: farfield
path: in-the-wild/farfield-*
- split: nearfield
path: in-the-wild/nearfield-*
- config_name: in-the-wild-AMI
data_files:
- split: nearfield
path: in-the-wild-AMI/nearfield-*
- split: farfield
path: in-the-wild-AMI/farfield-*
- config_name: in-the-wild-ami
data_files:
- split: nearfield
path: in-the-wild-ami/nearfield-*
- split: farfield
path: in-the-wild-ami/farfield-*
- config_name: librispeech_asr-test.clean
data_files:
- split: None.0
path: librispeech_asr-test.clean/None.0-*
- split: gnoise.1
path: librispeech_asr-test.clean/gnoise.1-*
- split: gnoise.2
path: librispeech_asr-test.clean/gnoise.2-*
- split: gnoise.3
path: librispeech_asr-test.clean/gnoise.3-*
- split: gnoise.4
path: librispeech_asr-test.clean/gnoise.4-*
- split: env_noise.1
path: librispeech_asr-test.clean/env_noise.1-*
- split: env_noise.2
path: librispeech_asr-test.clean/env_noise.2-*
- split: env_noise.3
path: librispeech_asr-test.clean/env_noise.3-*
- split: env_noise.4
path: librispeech_asr-test.clean/env_noise.4-*
- split: rir.1
path: librispeech_asr-test.clean/rir.1-*
- split: rir.2
path: librispeech_asr-test.clean/rir.2-*
- split: rir.3
path: librispeech_asr-test.clean/rir.3-*
- split: rir.4
path: librispeech_asr-test.clean/rir.4-*
- split: speedup.1
path: librispeech_asr-test.clean/speedup.1-*
- split: speedup.2
path: librispeech_asr-test.clean/speedup.2-*
- split: speedup.3
path: librispeech_asr-test.clean/speedup.3-*
- split: speedup.4
path: librispeech_asr-test.clean/speedup.4-*
- split: slowdown.1
path: librispeech_asr-test.clean/slowdown.1-*
- split: slowdown.2
path: librispeech_asr-test.clean/slowdown.2-*
- split: slowdown.3
path: librispeech_asr-test.clean/slowdown.3-*
- split: slowdown.4
path: librispeech_asr-test.clean/slowdown.4-*
- split: pitch_up.3
path: librispeech_asr-test.clean/pitch_up.3-*
- split: pitch_up.4
path: librispeech_asr-test.clean/pitch_up.4-*
- split: pitch_down.1
path: librispeech_asr-test.clean/pitch_down.1-*
- split: pitch_down.2
path: librispeech_asr-test.clean/pitch_down.2-*
- split: pitch_down.3
path: librispeech_asr-test.clean/pitch_down.3-*
- split: pitch_down.4
path: librispeech_asr-test.clean/pitch_down.4-*
- split: pitch_up.1
path: librispeech_asr-test.clean/pitch_up.1-*
- split: pitch_up.2
path: librispeech_asr-test.clean/pitch_up.2-*
- split: resample.1
path: librispeech_asr-test.clean/resample.1-*
- split: resample.2
path: librispeech_asr-test.clean/resample.2-*
- split: resample.3
path: librispeech_asr-test.clean/resample.3-*
- split: resample.4
path: librispeech_asr-test.clean/resample.4-*
- split: env_noise_esc50.1
path: librispeech_asr-test.clean/env_noise_esc50.1-*
- split: env_noise_esc50.2
path: librispeech_asr-test.clean/env_noise_esc50.2-*
- split: env_noise_esc50.3
path: librispeech_asr-test.clean/env_noise_esc50.3-*
- split: env_noise_esc50.4
path: librispeech_asr-test.clean/env_noise_esc50.4-*
- split: voice_conversion.4
path: librispeech_asr-test.clean/voice_conversion.4-*
- split: voice_conversion.3
path: librispeech_asr-test.clean/voice_conversion.3-*
- split: voice_conversion.1
path: librispeech_asr-test.clean/voice_conversion.1-*
- split: voice_conversion.2
path: librispeech_asr-test.clean/voice_conversion.2-*
- split: gain.1
path: librispeech_asr-test.clean/gain.1-*
- split: gain.2
path: librispeech_asr-test.clean/gain.2-*
- split: gain.3
path: librispeech_asr-test.clean/gain.3-*
- split: echo.1
path: librispeech_asr-test.clean/echo.1-*
- split: echo.2
path: librispeech_asr-test.clean/echo.2-*
- split: echo.3
path: librispeech_asr-test.clean/echo.3-*
- split: echo.4
path: librispeech_asr-test.clean/echo.4-*
- split: phaser.1
path: librispeech_asr-test.clean/phaser.1-*
- split: phaser.2
path: librispeech_asr-test.clean/phaser.2-*
- split: phaser.3
path: librispeech_asr-test.clean/phaser.3-*
- split: tempo_up.1
path: librispeech_asr-test.clean/tempo_up.1-*
- split: tempo_up.2
path: librispeech_asr-test.clean/tempo_up.2-*
- split: tempo_up.3
path: librispeech_asr-test.clean/tempo_up.3-*
- split: tempo_up.4
path: librispeech_asr-test.clean/tempo_up.4-*
- split: tempo_down.1
path: librispeech_asr-test.clean/tempo_down.1-*
- split: tempo_down.2
path: librispeech_asr-test.clean/tempo_down.2-*
- split: tempo_down.3
path: librispeech_asr-test.clean/tempo_down.3-*
- split: tempo_down.4
path: librispeech_asr-test.clean/tempo_down.4-*
- split: gain.4
path: librispeech_asr-test.clean/gain.4-*
- split: lowpass.1
path: librispeech_asr-test.clean/lowpass.1-*
- split: lowpass.2
path: librispeech_asr-test.clean/lowpass.2-*
- split: lowpass.3
path: librispeech_asr-test.clean/lowpass.3-*
- split: lowpass.4
path: librispeech_asr-test.clean/lowpass.4-*
- split: highpass.1
path: librispeech_asr-test.clean/highpass.1-*
- split: highpass.2
path: librispeech_asr-test.clean/highpass.2-*
- split: highpass.3
path: librispeech_asr-test.clean/highpass.3-*
- split: highpass.4
path: librispeech_asr-test.clean/highpass.4-*
- split: phaser.4
path: librispeech_asr-test.clean/phaser.4-*
- split: voice_conversion_vctk.1
path: librispeech_asr-test.clean/voice_conversion_vctk.1-*
- split: universal_adv.1
path: librispeech_asr-test.clean/universal_adv.1-*
- split: music.1
path: librispeech_asr-test.clean/music.1-*
- split: music.2
path: librispeech_asr-test.clean/music.2-*
- split: music.3
path: librispeech_asr-test.clean/music.3-*
- split: music.4
path: librispeech_asr-test.clean/music.4-*
- split: crosstalk.1
path: librispeech_asr-test.clean/crosstalk.1-*
- split: crosstalk.2
path: librispeech_asr-test.clean/crosstalk.2-*
- split: crosstalk.3
path: librispeech_asr-test.clean/crosstalk.3-*
- split: crosstalk.4
path: librispeech_asr-test.clean/crosstalk.4-*
- split: env_noise_musan.1
path: librispeech_asr-test.clean/env_noise_musan.1-*
- split: env_noise_musan.2
path: librispeech_asr-test.clean/env_noise_musan.2-*
- split: env_noise_musan.3
path: librispeech_asr-test.clean/env_noise_musan.3-*
- split: env_noise_musan.4
path: librispeech_asr-test.clean/env_noise_musan.4-*
- split: real_rir.1
path: librispeech_asr-test.clean/real_rir.1-*
- split: real_rir.2
path: librispeech_asr-test.clean/real_rir.2-*
- split: real_rir.3
path: librispeech_asr-test.clean/real_rir.3-*
- split: real_rir.4
path: librispeech_asr-test.clean/real_rir.4-*
- split: env_noise_wham.1
path: librispeech_asr-test.clean/env_noise_wham.1-*
- split: env_noise_wham.2
path: librispeech_asr-test.clean/env_noise_wham.2-*
- split: env_noise_wham.3
path: librispeech_asr-test.clean/env_noise_wham.3-*
- split: env_noise_wham.4
path: librispeech_asr-test.clean/env_noise_wham.4-*
- split: tremolo.1
path: librispeech_asr-test.clean/tremolo.1-*
- split: tremolo.2
path: librispeech_asr-test.clean/tremolo.2-*
- split: tremolo.3
path: librispeech_asr-test.clean/tremolo.3-*
- split: tremolo.4
path: librispeech_asr-test.clean/tremolo.4-*
- split: treble.1
path: librispeech_asr-test.clean/treble.1-*
- split: treble.2
path: librispeech_asr-test.clean/treble.2-*
- split: treble.3
path: librispeech_asr-test.clean/treble.3-*
- split: treble.4
path: librispeech_asr-test.clean/treble.4-*
- split: bass.1
path: librispeech_asr-test.clean/bass.1-*
- split: bass.2
path: librispeech_asr-test.clean/bass.2-*
- split: bass.3
path: librispeech_asr-test.clean/bass.3-*
- split: bass.4
path: librispeech_asr-test.clean/bass.4-*
- split: chorus.1
path: librispeech_asr-test.clean/chorus.1-*
- split: chorus.2
path: librispeech_asr-test.clean/chorus.2-*
- split: chorus.3
path: librispeech_asr-test.clean/chorus.3-*
- split: chorus.4
path: librispeech_asr-test.clean/chorus.4-*
- config_name: librispeech_asr-test.clean_pertEval_500_30
data_files:
- split: gnoise.1
path: librispeech_asr-test.clean_pertEval_500_30/gnoise.1-*
- split: env_noise_esc50.1
path: librispeech_asr-test.clean_pertEval_500_30/env_noise_esc50.1-*
- config_name: multilingual_librispeech-french_test
data_files:
- split: gnoise.1
path: multilingual_librispeech-french_test/gnoise.1-*
- split: gnoise.2
path: multilingual_librispeech-french_test/gnoise.2-*
- split: gnoise.3
path: multilingual_librispeech-french_test/gnoise.3-*
- split: speedup.1
path: multilingual_librispeech-french_test/speedup.1-*
- split: speedup.2
path: multilingual_librispeech-french_test/speedup.2-*
- split: speedup.3
path: multilingual_librispeech-french_test/speedup.3-*
- split: slowdown.1
path: multilingual_librispeech-french_test/slowdown.1-*
- split: slowdown.2
path: multilingual_librispeech-french_test/slowdown.2-*
- split: slowdown.3
path: multilingual_librispeech-french_test/slowdown.3-*
- split: pitch_up.1
path: multilingual_librispeech-french_test/pitch_up.1-*
- split: pitch_up.2
path: multilingual_librispeech-french_test/pitch_up.2-*
- split: pitch_up.3
path: multilingual_librispeech-french_test/pitch_up.3-*
- split: pitch_down.1
path: multilingual_librispeech-french_test/pitch_down.1-*
- split: pitch_down.2
path: multilingual_librispeech-french_test/pitch_down.2-*
- split: env_noise.1
path: multilingual_librispeech-french_test/env_noise.1-*
- split: env_noise.3
path: multilingual_librispeech-french_test/env_noise.3-*
- split: env_noise_wham.1
path: multilingual_librispeech-french_test/env_noise_wham.1-*
- split: env_noise_wham.2
path: multilingual_librispeech-french_test/env_noise_wham.2-*
- split: real_rir.3
path: multilingual_librispeech-french_test/real_rir.3-*
- split: env_noise.2
path: multilingual_librispeech-french_test/env_noise.2-*
- split: env_noise_esc50.1
path: multilingual_librispeech-french_test/env_noise_esc50.1-*
- split: env_noise_esc50.2
path: multilingual_librispeech-french_test/env_noise_esc50.2-*
- split: env_noise_esc50.3
path: multilingual_librispeech-french_test/env_noise_esc50.3-*
- split: env_noise_musan.1
path: multilingual_librispeech-french_test/env_noise_musan.1-*
- split: env_noise_musan.2
path: multilingual_librispeech-french_test/env_noise_musan.2-*
- split: env_noise_musan.3
path: multilingual_librispeech-french_test/env_noise_musan.3-*
- split: env_noise_wham.3
path: multilingual_librispeech-french_test/env_noise_wham.3-*
- split: pitch_down.3
path: multilingual_librispeech-french_test/pitch_down.3-*
- split: rir.1
path: multilingual_librispeech-french_test/rir.1-*
- split: rir.2
path: multilingual_librispeech-french_test/rir.2-*
- split: rir.3
path: multilingual_librispeech-french_test/rir.3-*
- split: real_rir.1
path: multilingual_librispeech-french_test/real_rir.1-*
- split: real_rir.2
path: multilingual_librispeech-french_test/real_rir.2-*
- split: resample.1
path: multilingual_librispeech-french_test/resample.1-*
- split: resample.2
path: multilingual_librispeech-french_test/resample.2-*
- split: resample.3
path: multilingual_librispeech-french_test/resample.3-*
- split: gain.1
path: multilingual_librispeech-french_test/gain.1-*
- split: gain.2
path: multilingual_librispeech-french_test/gain.2-*
- split: gain.3
path: multilingual_librispeech-french_test/gain.3-*
- split: echo.1
path: multilingual_librispeech-french_test/echo.1-*
- split: echo.2
path: multilingual_librispeech-french_test/echo.2-*
- split: echo.3
path: multilingual_librispeech-french_test/echo.3-*
- split: phaser.1
path: multilingual_librispeech-french_test/phaser.1-*
- split: phaser.2
path: multilingual_librispeech-french_test/phaser.2-*
- split: phaser.3
path: multilingual_librispeech-french_test/phaser.3-*
- split: tempo_up.1
path: multilingual_librispeech-french_test/tempo_up.1-*
- split: tempo_up.2
path: multilingual_librispeech-french_test/tempo_up.2-*
- split: tempo_up.3
path: multilingual_librispeech-french_test/tempo_up.3-*
- split: tempo_down.1
path: multilingual_librispeech-french_test/tempo_down.1-*
- split: tempo_down.2
path: multilingual_librispeech-french_test/tempo_down.2-*
- split: tempo_down.3
path: multilingual_librispeech-french_test/tempo_down.3-*
- split: lowpass.1
path: multilingual_librispeech-french_test/lowpass.1-*
- split: lowpass.2
path: multilingual_librispeech-french_test/lowpass.2-*
- split: lowpass.3
path: multilingual_librispeech-french_test/lowpass.3-*
- split: highpass.1
path: multilingual_librispeech-french_test/highpass.1-*
- split: highpass.2
path: multilingual_librispeech-french_test/highpass.2-*
- split: highpass.3
path: multilingual_librispeech-french_test/highpass.3-*
- split: music.1
path: multilingual_librispeech-french_test/music.1-*
- split: music.2
path: multilingual_librispeech-french_test/music.2-*
- split: music.3
path: multilingual_librispeech-french_test/music.3-*
- split: crosstalk.1
path: multilingual_librispeech-french_test/crosstalk.1-*
- split: crosstalk.2
path: multilingual_librispeech-french_test/crosstalk.2-*
- split: crosstalk.3
path: multilingual_librispeech-french_test/crosstalk.3-*
- split: tremolo.1
path: multilingual_librispeech-french_test/tremolo.1-*
- split: tremolo.2
path: multilingual_librispeech-french_test/tremolo.2-*
- split: tremolo.3
path: multilingual_librispeech-french_test/tremolo.3-*
- split: treble.1
path: multilingual_librispeech-french_test/treble.1-*
- split: treble.2
path: multilingual_librispeech-french_test/treble.2-*
- split: treble.3
path: multilingual_librispeech-french_test/treble.3-*
- split: bass.1
path: multilingual_librispeech-french_test/bass.1-*
- split: bass.2
path: multilingual_librispeech-french_test/bass.2-*
- split: bass.3
path: multilingual_librispeech-french_test/bass.3-*
- split: chorus.1
path: multilingual_librispeech-french_test/chorus.1-*
- split: chorus.2
path: multilingual_librispeech-french_test/chorus.2-*
- split: chorus.3
path: multilingual_librispeech-french_test/chorus.3-*
- split: gnoise.4
path: multilingual_librispeech-french_test/gnoise.4-*
- split: env_noise.4
path: multilingual_librispeech-french_test/env_noise.4-*
- split: env_noise_esc50.4
path: multilingual_librispeech-french_test/env_noise_esc50.4-*
- split: env_noise_musan.4
path: multilingual_librispeech-french_test/env_noise_musan.4-*
- split: env_noise_wham.4
path: multilingual_librispeech-french_test/env_noise_wham.4-*
- split: speedup.4
path: multilingual_librispeech-french_test/speedup.4-*
- split: slowdown.4
path: multilingual_librispeech-french_test/slowdown.4-*
- split: pitch_up.4
path: multilingual_librispeech-french_test/pitch_up.4-*
- split: pitch_down.4
path: multilingual_librispeech-french_test/pitch_down.4-*
- split: rir.4
path: multilingual_librispeech-french_test/rir.4-*
- split: real_rir.4
path: multilingual_librispeech-french_test/real_rir.4-*
- split: resample.4
path: multilingual_librispeech-french_test/resample.4-*
- split: gain.4
path: multilingual_librispeech-french_test/gain.4-*
- split: echo.4
path: multilingual_librispeech-french_test/echo.4-*
- split: phaser.4
path: multilingual_librispeech-french_test/phaser.4-*
- split: tempo_up.4
path: multilingual_librispeech-french_test/tempo_up.4-*
- split: tempo_down.4
path: multilingual_librispeech-french_test/tempo_down.4-*
- split: lowpass.4
path: multilingual_librispeech-french_test/lowpass.4-*
- split: highpass.4
path: multilingual_librispeech-french_test/highpass.4-*
- split: music.4
path: multilingual_librispeech-french_test/music.4-*
- split: crosstalk.4
path: multilingual_librispeech-french_test/crosstalk.4-*
- split: tremolo.4
path: multilingual_librispeech-french_test/tremolo.4-*
- split: treble.4
path: multilingual_librispeech-french_test/treble.4-*
- split: bass.4
path: multilingual_librispeech-french_test/bass.4-*
- split: chorus.4
path: multilingual_librispeech-french_test/chorus.4-*
- config_name: multilingual_librispeech-german_test
data_files:
- split: gnoise.1
path: multilingual_librispeech-german_test/gnoise.1-*
- split: gnoise.2
path: multilingual_librispeech-german_test/gnoise.2-*
- split: gnoise.3
path: multilingual_librispeech-german_test/gnoise.3-*
- split: env_noise.1
path: multilingual_librispeech-german_test/env_noise.1-*
- split: env_noise.2
path: multilingual_librispeech-german_test/env_noise.2-*
- split: env_noise.3
path: multilingual_librispeech-german_test/env_noise.3-*
- split: env_noise_esc50.1
path: multilingual_librispeech-german_test/env_noise_esc50.1-*
- split: env_noise_esc50.2
path: multilingual_librispeech-german_test/env_noise_esc50.2-*
- split: env_noise_esc50.3
path: multilingual_librispeech-german_test/env_noise_esc50.3-*
- split: env_noise_musan.1
path: multilingual_librispeech-german_test/env_noise_musan.1-*
- split: env_noise_musan.2
path: multilingual_librispeech-german_test/env_noise_musan.2-*
- split: env_noise_musan.3
path: multilingual_librispeech-german_test/env_noise_musan.3-*
- split: env_noise_wham.1
path: multilingual_librispeech-german_test/env_noise_wham.1-*
- split: env_noise_wham.2
path: multilingual_librispeech-german_test/env_noise_wham.2-*
- split: env_noise_wham.3
path: multilingual_librispeech-german_test/env_noise_wham.3-*
- split: speedup.1
path: multilingual_librispeech-german_test/speedup.1-*
- split: speedup.2
path: multilingual_librispeech-german_test/speedup.2-*
- split: speedup.3
path: multilingual_librispeech-german_test/speedup.3-*
- split: slowdown.1
path: multilingual_librispeech-german_test/slowdown.1-*
- split: slowdown.2
path: multilingual_librispeech-german_test/slowdown.2-*
- split: slowdown.3
path: multilingual_librispeech-german_test/slowdown.3-*
- split: pitch_up.1
path: multilingual_librispeech-german_test/pitch_up.1-*
- split: pitch_up.2
path: multilingual_librispeech-german_test/pitch_up.2-*
- split: pitch_up.3
path: multilingual_librispeech-german_test/pitch_up.3-*
- split: pitch_down.1
path: multilingual_librispeech-german_test/pitch_down.1-*
- split: pitch_down.2
path: multilingual_librispeech-german_test/pitch_down.2-*
- split: pitch_down.3
path: multilingual_librispeech-german_test/pitch_down.3-*
- split: rir.1
path: multilingual_librispeech-german_test/rir.1-*
- split: rir.2
path: multilingual_librispeech-german_test/rir.2-*
- split: rir.3
path: multilingual_librispeech-german_test/rir.3-*
- split: real_rir.1
path: multilingual_librispeech-german_test/real_rir.1-*
- split: real_rir.2
path: multilingual_librispeech-german_test/real_rir.2-*
- split: real_rir.3
path: multilingual_librispeech-german_test/real_rir.3-*
- split: resample.1
path: multilingual_librispeech-german_test/resample.1-*
- split: resample.2
path: multilingual_librispeech-german_test/resample.2-*
- split: resample.3
path: multilingual_librispeech-german_test/resample.3-*
- split: gain.1
path: multilingual_librispeech-german_test/gain.1-*
- split: gain.2
path: multilingual_librispeech-german_test/gain.2-*
- split: gain.3
path: multilingual_librispeech-german_test/gain.3-*
- split: echo.1
path: multilingual_librispeech-german_test/echo.1-*
- split: echo.2
path: multilingual_librispeech-german_test/echo.2-*
- split: echo.3
path: multilingual_librispeech-german_test/echo.3-*
- split: phaser.1
path: multilingual_librispeech-german_test/phaser.1-*
- split: phaser.2
path: multilingual_librispeech-german_test/phaser.2-*
- split: phaser.3
path: multilingual_librispeech-german_test/phaser.3-*
- split: tempo_up.1
path: multilingual_librispeech-german_test/tempo_up.1-*
- split: tempo_up.2
path: multilingual_librispeech-german_test/tempo_up.2-*
- split: tempo_up.3
path: multilingual_librispeech-german_test/tempo_up.3-*
- split: tempo_down.1
path: multilingual_librispeech-german_test/tempo_down.1-*
- split: tempo_down.2
path: multilingual_librispeech-german_test/tempo_down.2-*
- split: tempo_down.3
path: multilingual_librispeech-german_test/tempo_down.3-*
- split: lowpass.1
path: multilingual_librispeech-german_test/lowpass.1-*
- split: lowpass.2
path: multilingual_librispeech-german_test/lowpass.2-*
- split: lowpass.3
path: multilingual_librispeech-german_test/lowpass.3-*
- split: highpass.1
path: multilingual_librispeech-german_test/highpass.1-*
- split: highpass.2
path: multilingual_librispeech-german_test/highpass.2-*
- split: highpass.3
path: multilingual_librispeech-german_test/highpass.3-*
- split: music.1
path: multilingual_librispeech-german_test/music.1-*
- split: music.2
path: multilingual_librispeech-german_test/music.2-*
- split: music.3
path: multilingual_librispeech-german_test/music.3-*
- split: crosstalk.1
path: multilingual_librispeech-german_test/crosstalk.1-*
- split: crosstalk.2
path: multilingual_librispeech-german_test/crosstalk.2-*
- split: crosstalk.3
path: multilingual_librispeech-german_test/crosstalk.3-*
- split: tremolo.1
path: multilingual_librispeech-german_test/tremolo.1-*
- split: tremolo.2
path: multilingual_librispeech-german_test/tremolo.2-*
- split: tremolo.3
path: multilingual_librispeech-german_test/tremolo.3-*
- split: treble.1
path: multilingual_librispeech-german_test/treble.1-*
- split: treble.2
path: multilingual_librispeech-german_test/treble.2-*
- split: treble.3
path: multilingual_librispeech-german_test/treble.3-*
- split: bass.1
path: multilingual_librispeech-german_test/bass.1-*
- split: bass.2
path: multilingual_librispeech-german_test/bass.2-*
- split: bass.3
path: multilingual_librispeech-german_test/bass.3-*
- split: chorus.1
path: multilingual_librispeech-german_test/chorus.1-*
- split: chorus.2
path: multilingual_librispeech-german_test/chorus.2-*
- split: chorus.3
path: multilingual_librispeech-german_test/chorus.3-*
- split: gnoise.4
path: multilingual_librispeech-german_test/gnoise.4-*
- split: env_noise.4
path: multilingual_librispeech-german_test/env_noise.4-*
- split: env_noise_esc50.4
path: multilingual_librispeech-german_test/env_noise_esc50.4-*
- split: env_noise_musan.4
path: multilingual_librispeech-german_test/env_noise_musan.4-*
- split: env_noise_wham.4
path: multilingual_librispeech-german_test/env_noise_wham.4-*
- split: speedup.4
path: multilingual_librispeech-german_test/speedup.4-*
- split: slowdown.4
path: multilingual_librispeech-german_test/slowdown.4-*
- split: pitch_up.4
path: multilingual_librispeech-german_test/pitch_up.4-*
- split: pitch_down.4
path: multilingual_librispeech-german_test/pitch_down.4-*
- split: rir.4
path: multilingual_librispeech-german_test/rir.4-*
- split: real_rir.4
path: multilingual_librispeech-german_test/real_rir.4-*
- split: resample.4
path: multilingual_librispeech-german_test/resample.4-*
- split: gain.4
path: multilingual_librispeech-german_test/gain.4-*
- split: echo.4
path: multilingual_librispeech-german_test/echo.4-*
- split: phaser.4
path: multilingual_librispeech-german_test/phaser.4-*
- split: tempo_up.4
path: multilingual_librispeech-german_test/tempo_up.4-*
- split: tempo_down.4
path: multilingual_librispeech-german_test/tempo_down.4-*
- split: lowpass.4
path: multilingual_librispeech-german_test/lowpass.4-*
- split: highpass.4
path: multilingual_librispeech-german_test/highpass.4-*
- split: music.4
path: multilingual_librispeech-german_test/music.4-*
- split: crosstalk.4
path: multilingual_librispeech-german_test/crosstalk.4-*
- split: tremolo.4
path: multilingual_librispeech-german_test/tremolo.4-*
- split: treble.4
path: multilingual_librispeech-german_test/treble.4-*
- split: bass.4
path: multilingual_librispeech-german_test/bass.4-*
- split: chorus.4
path: multilingual_librispeech-german_test/chorus.4-*
- config_name: multilingual_librispeech-spanish_test
data_files:
- split: None.0
path: multilingual_librispeech-spanish_test/None.0-*
- split: gnoise.1
path: multilingual_librispeech-spanish_test/gnoise.1-*
- split: gnoise.2
path: multilingual_librispeech-spanish_test/gnoise.2-*
- split: gnoise.3
path: multilingual_librispeech-spanish_test/gnoise.3-*
- split: gnoise.4
path: multilingual_librispeech-spanish_test/gnoise.4-*
- split: env_noise.1
path: multilingual_librispeech-spanish_test/env_noise.1-*
- split: env_noise.2
path: multilingual_librispeech-spanish_test/env_noise.2-*
- split: env_noise.3
path: multilingual_librispeech-spanish_test/env_noise.3-*
- split: env_noise.4
path: multilingual_librispeech-spanish_test/env_noise.4-*
- split: rir.1
path: multilingual_librispeech-spanish_test/rir.1-*
- split: rir.2
path: multilingual_librispeech-spanish_test/rir.2-*
- split: rir.3
path: multilingual_librispeech-spanish_test/rir.3-*
- split: rir.4
path: multilingual_librispeech-spanish_test/rir.4-*
- split: speedup.1
path: multilingual_librispeech-spanish_test/speedup.1-*
- split: speedup.2
path: multilingual_librispeech-spanish_test/speedup.2-*
- split: speedup.3
path: multilingual_librispeech-spanish_test/speedup.3-*
- split: speedup.4
path: multilingual_librispeech-spanish_test/speedup.4-*
- split: slowdown.1
path: multilingual_librispeech-spanish_test/slowdown.1-*
- split: slowdown.2
path: multilingual_librispeech-spanish_test/slowdown.2-*
- split: slowdown.3
path: multilingual_librispeech-spanish_test/slowdown.3-*
- split: slowdown.4
path: multilingual_librispeech-spanish_test/slowdown.4-*
- split: pitch_up.3
path: multilingual_librispeech-spanish_test/pitch_up.3-*
- split: pitch_up.4
path: multilingual_librispeech-spanish_test/pitch_up.4-*
- split: pitch_down.1
path: multilingual_librispeech-spanish_test/pitch_down.1-*
- split: pitch_down.2
path: multilingual_librispeech-spanish_test/pitch_down.2-*
- split: pitch_down.3
path: multilingual_librispeech-spanish_test/pitch_down.3-*
- split: pitch_down.4
path: multilingual_librispeech-spanish_test/pitch_down.4-*
- split: pitch_up.1
path: multilingual_librispeech-spanish_test/pitch_up.1-*
- split: pitch_up.2
path: multilingual_librispeech-spanish_test/pitch_up.2-*
- split: resample.2
path: multilingual_librispeech-spanish_test/resample.2-*
- split: resample.3
path: multilingual_librispeech-spanish_test/resample.3-*
- split: resample.4
path: multilingual_librispeech-spanish_test/resample.4-*
- split: env_noise_esc50.1
path: multilingual_librispeech-spanish_test/env_noise_esc50.1-*
- split: env_noise_esc50.2
path: multilingual_librispeech-spanish_test/env_noise_esc50.2-*
- split: env_noise_esc50.3
path: multilingual_librispeech-spanish_test/env_noise_esc50.3-*
- split: env_noise_esc50.4
path: multilingual_librispeech-spanish_test/env_noise_esc50.4-*
- split: resample.1
path: multilingual_librispeech-spanish_test/resample.1-*
- split: gain.1
path: multilingual_librispeech-spanish_test/gain.1-*
- split: gain.2
path: multilingual_librispeech-spanish_test/gain.2-*
- split: gain.3
path: multilingual_librispeech-spanish_test/gain.3-*
- split: gain.4
path: multilingual_librispeech-spanish_test/gain.4-*
- split: echo.4
path: multilingual_librispeech-spanish_test/echo.4-*
- split: echo.1
path: multilingual_librispeech-spanish_test/echo.1-*
- split: echo.2
path: multilingual_librispeech-spanish_test/echo.2-*
- split: echo.3
path: multilingual_librispeech-spanish_test/echo.3-*
- split: tempo_up.1
path: multilingual_librispeech-spanish_test/tempo_up.1-*
- split: tempo_up.2
path: multilingual_librispeech-spanish_test/tempo_up.2-*
- split: tempo_up.3
path: multilingual_librispeech-spanish_test/tempo_up.3-*
- split: tempo_up.4
path: multilingual_librispeech-spanish_test/tempo_up.4-*
- split: tempo_down.1
path: multilingual_librispeech-spanish_test/tempo_down.1-*
- split: tempo_down.2
path: multilingual_librispeech-spanish_test/tempo_down.2-*
- split: tempo_down.3
path: multilingual_librispeech-spanish_test/tempo_down.3-*
- split: tempo_down.4
path: multilingual_librispeech-spanish_test/tempo_down.4-*
- split: lowpass.1
path: multilingual_librispeech-spanish_test/lowpass.1-*
- split: lowpass.2
path: multilingual_librispeech-spanish_test/lowpass.2-*
- split: lowpass.3
path: multilingual_librispeech-spanish_test/lowpass.3-*
- split: lowpass.4
path: multilingual_librispeech-spanish_test/lowpass.4-*
- split: highpass.1
path: multilingual_librispeech-spanish_test/highpass.1-*
- split: highpass.2
path: multilingual_librispeech-spanish_test/highpass.2-*
- split: highpass.3
path: multilingual_librispeech-spanish_test/highpass.3-*
- split: highpass.4
path: multilingual_librispeech-spanish_test/highpass.4-*
- split: phaser.1
path: multilingual_librispeech-spanish_test/phaser.1-*
- split: phaser.2
path: multilingual_librispeech-spanish_test/phaser.2-*
- split: phaser.3
path: multilingual_librispeech-spanish_test/phaser.3-*
- split: phaser.4
path: multilingual_librispeech-spanish_test/phaser.4-*
- split: env_noise_musan.1
path: multilingual_librispeech-spanish_test/env_noise_musan.1-*
- split: env_noise_musan.2
path: multilingual_librispeech-spanish_test/env_noise_musan.2-*
- split: env_noise_musan.3
path: multilingual_librispeech-spanish_test/env_noise_musan.3-*
- split: env_noise_musan.4
path: multilingual_librispeech-spanish_test/env_noise_musan.4-*
- split: music.1
path: multilingual_librispeech-spanish_test/music.1-*
- split: music.2
path: multilingual_librispeech-spanish_test/music.2-*
- split: music.3
path: multilingual_librispeech-spanish_test/music.3-*
- split: music.4
path: multilingual_librispeech-spanish_test/music.4-*
- split: crosstalk.1
path: multilingual_librispeech-spanish_test/crosstalk.1-*
- split: crosstalk.2
path: multilingual_librispeech-spanish_test/crosstalk.2-*
- split: crosstalk.3
path: multilingual_librispeech-spanish_test/crosstalk.3-*
- split: crosstalk.4
path: multilingual_librispeech-spanish_test/crosstalk.4-*
- split: env_noise_wham.1
path: multilingual_librispeech-spanish_test/env_noise_wham.1-*
- split: env_noise_wham.2
path: multilingual_librispeech-spanish_test/env_noise_wham.2-*
- split: env_noise_wham.3
path: multilingual_librispeech-spanish_test/env_noise_wham.3-*
- split: env_noise_wham.4
path: multilingual_librispeech-spanish_test/env_noise_wham.4-*
- split: tremolo.1
path: multilingual_librispeech-spanish_test/tremolo.1-*
- split: tremolo.2
path: multilingual_librispeech-spanish_test/tremolo.2-*
- split: tremolo.4
path: multilingual_librispeech-spanish_test/tremolo.4-*
- split: treble.1
path: multilingual_librispeech-spanish_test/treble.1-*
- split: treble.2
path: multilingual_librispeech-spanish_test/treble.2-*
- split: treble.3
path: multilingual_librispeech-spanish_test/treble.3-*
- split: treble.4
path: multilingual_librispeech-spanish_test/treble.4-*
- split: bass.1
path: multilingual_librispeech-spanish_test/bass.1-*
- split: bass.2
path: multilingual_librispeech-spanish_test/bass.2-*
- split: bass.3
path: multilingual_librispeech-spanish_test/bass.3-*
- split: bass.4
path: multilingual_librispeech-spanish_test/bass.4-*
- split: chorus.1
path: multilingual_librispeech-spanish_test/chorus.1-*
- split: chorus.2
path: multilingual_librispeech-spanish_test/chorus.2-*
- split: chorus.3
path: multilingual_librispeech-spanish_test/chorus.3-*
- split: chorus.4
path: multilingual_librispeech-spanish_test/chorus.4-*
- split: tremolo.3
path: multilingual_librispeech-spanish_test/tremolo.3-*
- split: voice_conversion_bark.1
path: multilingual_librispeech-spanish_test/voice_conversion_bark.1-*
- config_name: multilingual_librispeech-spanish_test_pertEval_500_30
data_files:
- split: gnoise.1
path: multilingual_librispeech-spanish_test_pertEval_500_30/gnoise.1-*
- split: env_noise_esc50.1
path: multilingual_librispeech-spanish_test_pertEval_500_30/env_noise_esc50.1-*
- config_name: tedlium-release3_test
data_files:
- split: gnoise.1
path: tedlium-release3_test/gnoise.1-*
- split: gnoise.2
path: tedlium-release3_test/gnoise.2-*
- split: gnoise.3
path: tedlium-release3_test/gnoise.3-*
- split: env_noise_esc50.1
path: tedlium-release3_test/env_noise_esc50.1-*
- split: env_noise_esc50.2
path: tedlium-release3_test/env_noise_esc50.2-*
- split: env_noise_esc50.3
path: tedlium-release3_test/env_noise_esc50.3-*
- split: speedup.1
path: tedlium-release3_test/speedup.1-*
- split: speedup.2
path: tedlium-release3_test/speedup.2-*
- split: speedup.3
path: tedlium-release3_test/speedup.3-*
- split: slowdown.1
path: tedlium-release3_test/slowdown.1-*
- split: slowdown.2
path: tedlium-release3_test/slowdown.2-*
- split: slowdown.3
path: tedlium-release3_test/slowdown.3-*
- split: pitch_up.1
path: tedlium-release3_test/pitch_up.1-*
- split: pitch_up.2
path: tedlium-release3_test/pitch_up.2-*
- split: pitch_up.3
path: tedlium-release3_test/pitch_up.3-*
- split: pitch_down.1
path: tedlium-release3_test/pitch_down.1-*
- split: pitch_down.2
path: tedlium-release3_test/pitch_down.2-*
- split: pitch_down.3
path: tedlium-release3_test/pitch_down.3-*
- split: rir.1
path: tedlium-release3_test/rir.1-*
- split: rir.2
path: tedlium-release3_test/rir.2-*
- split: rir.3
path: tedlium-release3_test/rir.3-*
- split: voice_conversion_vctk.1
path: tedlium-release3_test/voice_conversion_vctk.1-*
- split: resample.1
path: tedlium-release3_test/resample.1-*
- split: resample.2
path: tedlium-release3_test/resample.2-*
- split: resample.3
path: tedlium-release3_test/resample.3-*
- split: gain.1
path: tedlium-release3_test/gain.1-*
- split: gain.2
path: tedlium-release3_test/gain.2-*
- split: gain.3
path: tedlium-release3_test/gain.3-*
- split: echo.1
path: tedlium-release3_test/echo.1-*
- split: echo.2
path: tedlium-release3_test/echo.2-*
- split: echo.3
path: tedlium-release3_test/echo.3-*
- split: phaser.1
path: tedlium-release3_test/phaser.1-*
- split: phaser.2
path: tedlium-release3_test/phaser.2-*
- split: phaser.3
path: tedlium-release3_test/phaser.3-*
- split: tempo_up.1
path: tedlium-release3_test/tempo_up.1-*
- split: tempo_up.2
path: tedlium-release3_test/tempo_up.2-*
- split: tempo_up.3
path: tedlium-release3_test/tempo_up.3-*
- split: tempo_down.1
path: tedlium-release3_test/tempo_down.1-*
- split: tempo_down.2
path: tedlium-release3_test/tempo_down.2-*
- split: tempo_down.3
path: tedlium-release3_test/tempo_down.3-*
- split: lowpass.1
path: tedlium-release3_test/lowpass.1-*
- split: lowpass.2
path: tedlium-release3_test/lowpass.2-*
- split: lowpass.3
path: tedlium-release3_test/lowpass.3-*
- split: highpass.1
path: tedlium-release3_test/highpass.1-*
- split: highpass.2
path: tedlium-release3_test/highpass.2-*
- split: highpass.3
path: tedlium-release3_test/highpass.3-*
- split: gnoise.4
path: tedlium-release3_test/gnoise.4-*
- split: env_noise_esc50.4
path: tedlium-release3_test/env_noise_esc50.4-*
- split: speedup.4
path: tedlium-release3_test/speedup.4-*
- split: slowdown.4
path: tedlium-release3_test/slowdown.4-*
- split: pitch_up.4
path: tedlium-release3_test/pitch_up.4-*
- split: pitch_down.4
path: tedlium-release3_test/pitch_down.4-*
- split: rir.4
path: tedlium-release3_test/rir.4-*
- split: resample.4
path: tedlium-release3_test/resample.4-*
- split: gain.4
path: tedlium-release3_test/gain.4-*
- split: echo.4
path: tedlium-release3_test/echo.4-*
- split: phaser.4
path: tedlium-release3_test/phaser.4-*
- split: tempo_up.4
path: tedlium-release3_test/tempo_up.4-*
- split: tempo_down.4
path: tedlium-release3_test/tempo_down.4-*
- split: lowpass.4
path: tedlium-release3_test/lowpass.4-*
- split: highpass.4
path: tedlium-release3_test/highpass.4-*
- split: None.0
path: tedlium-release3_test/None.0-*
- split: music.1
path: tedlium-release3_test/music.1-*
- split: music.2
path: tedlium-release3_test/music.2-*
- split: music.3
path: tedlium-release3_test/music.3-*
- split: music.4
path: tedlium-release3_test/music.4-*
- split: crosstalk.1
path: tedlium-release3_test/crosstalk.1-*
- split: crosstalk.2
path: tedlium-release3_test/crosstalk.2-*
- split: crosstalk.3
path: tedlium-release3_test/crosstalk.3-*
- split: crosstalk.4
path: tedlium-release3_test/crosstalk.4-*
- split: env_noise_musan.1
path: tedlium-release3_test/env_noise_musan.1-*
- split: env_noise_musan.2
path: tedlium-release3_test/env_noise_musan.2-*
- split: env_noise_musan.3
path: tedlium-release3_test/env_noise_musan.3-*
- split: env_noise_musan.4
path: tedlium-release3_test/env_noise_musan.4-*
- split: real_rir.1
path: tedlium-release3_test/real_rir.1-*
- split: real_rir.2
path: tedlium-release3_test/real_rir.2-*
- split: real_rir.3
path: tedlium-release3_test/real_rir.3-*
- split: real_rir.4
path: tedlium-release3_test/real_rir.4-*
- split: env_noise.1
path: tedlium-release3_test/env_noise.1-*
- split: env_noise.2
path: tedlium-release3_test/env_noise.2-*
- split: env_noise.3
path: tedlium-release3_test/env_noise.3-*
- split: env_noise.4
path: tedlium-release3_test/env_noise.4-*
- split: env_noise_wham.1
path: tedlium-release3_test/env_noise_wham.1-*
- split: env_noise_wham.2
path: tedlium-release3_test/env_noise_wham.2-*
- split: env_noise_wham.3
path: tedlium-release3_test/env_noise_wham.3-*
- split: env_noise_wham.4
path: tedlium-release3_test/env_noise_wham.4-*
- split: tremolo.1
path: tedlium-release3_test/tremolo.1-*
- split: tremolo.2
path: tedlium-release3_test/tremolo.2-*
- split: tremolo.3
path: tedlium-release3_test/tremolo.3-*
- split: tremolo.4
path: tedlium-release3_test/tremolo.4-*
- split: treble.1
path: tedlium-release3_test/treble.1-*
- split: treble.2
path: tedlium-release3_test/treble.2-*
- split: treble.3
path: tedlium-release3_test/treble.3-*
- split: treble.4
path: tedlium-release3_test/treble.4-*
- split: bass.1
path: tedlium-release3_test/bass.1-*
- split: bass.2
path: tedlium-release3_test/bass.2-*
- split: bass.3
path: tedlium-release3_test/bass.3-*
- split: bass.4
path: tedlium-release3_test/bass.4-*
- split: chorus.1
path: tedlium-release3_test/chorus.1-*
- split: chorus.2
path: tedlium-release3_test/chorus.2-*
- split: chorus.4
path: tedlium-release3_test/chorus.4-*
- split: chorus.3
path: tedlium-release3_test/chorus.3-*
---
# Dataset Card for "speech_robust_bench"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Forceless/PPTAgent | Forceless | "2024-10-20T05:51:45Z" | 11,581 | 3 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-18T04:49:53Z" | ---
dataset_info:
features:
- name: filename
dtype: string
- name: size
dtype: int64
- name: url
dtype: string
- name: license
dtype: string
- name: title
dtype: string
- name: created
dtype: string
- name: updated
dtype: string
- name: doi
dtype: string
- name: checksum
dtype: string
- name: page
dtype: int64
- name: topic
dtype: string
- name: filetype
dtype: string
splits:
- name: pptx
num_bytes: 317828
num_examples: 761
- name: pdf
num_bytes: 253893
num_examples: 603
download_size: 249178
dataset_size: 571721
configs:
- config_name: default
data_files:
- split: pptx
path: data/pptx-*
- split: pdf
path: data/pdf-*
---
|
andstor/the_pile_github | andstor | "2023-03-20T23:39:53Z" | 11,525 | 8 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:10M<n<100M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2101.00027",
"arxiv:2201.07311",
"region:us"
] | [
"text-generation",
"fill-mask",
"text-classification"
] | "2023-03-07T15:53:05Z" | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: The Pile GitHub
size_categories: []
source_datasets:
- original
tags: []
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids: []
---
# Dataset Card for The Pile GitHub
## Table of Contents
- [Dataset Card for Smart Contracts](#dataset-card-for-the-pile-github)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ElutherAI](https://pile.eleuther.ai)
- **Repository:** [GitHub](https://github.com/andstor/the-pile-github)
- **Paper:** [arXiv](https://arxiv.org/abs/2101.00027)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is the GitHub subset of the EleutherAI/The Pile dataset and contains source code from GitHub repositories. The programming languages are identified using the [guesslang library](https://github.com/yoeo/guesslang). A total of 54 programming languages are included in the dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The following languages are covered by the dataset:
```
'Assembly', 'Batchfile', 'C', 'C#', 'C++', 'CMake', 'COBOL', 'CSS', 'CSV', 'Clojure', 'CoffeeScript', 'DM', 'Dart', 'Dockerfile', 'Elixir', 'Erlang', 'Fortran', 'Go', 'Groovy', 'HTML', 'Haskell', 'INI', 'JSON', 'Java', 'JavaScript', 'Julia', 'Kotlin', 'Lisp', 'Lua', 'Makefile', 'Markdown', 'Matlab', 'None', 'OCaml', 'Objective-C', 'PHP', 'Pascal', 'Perl', 'PowerShell', 'Prolog', 'Python', 'R', 'Ruby', 'Rust', 'SQL', 'Scala', 'Shell', 'Swift', 'TOML', 'TeX', 'TypeScript', 'Verilog', 'Visual Basic', 'XML', 'YAML'
```
The [guesslang library](https://github.com/yoeo/guesslang) is used to identify the programming languages. It has a reported guessing accuracy above 90%, so some misclassifications in the language labels are to be expected.
## Dataset Structure
### Data Instances
```
{
'text': ...,
'meta': {'language': ...}
}
```
### Data Fields
- `text` (`string`): the source code.
- `meta` (`dict`): the metadata of the source code.
- `language` (`string`): the programming language of the source code.
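As a quick sketch of consuming these fields, the snippet below tallies records by language; the inline `sample` list is an illustrative stand-in for rows returned by `datasets.load_dataset`, not real dataset content:

```python
from collections import Counter

def count_languages(records):
    """Tally source files per programming language for records that follow
    the documented schema: {"text": ..., "meta": {"language": ...}}."""
    return Counter(record["meta"]["language"] for record in records)

# Inline stand-in for streamed dataset rows.
sample = [
    {"text": "print('hi')", "meta": {"language": "Python"}},
    {"text": "int main() { return 0; }", "meta": {"language": "C"}},
    {"text": "x = [1, 2, 3]", "meta": {"language": "Python"}},
]

print(count_languages(sample))  # Counter({'Python': 2, 'C': 1})
```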
### Data Splits
[More Information Needed]
| | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The data is purely a subset of the [EleutherAI/The Pile dataset](https://huggingface.co/datasets/the_pile). See the original [dataset](https://arxiv.org/abs/2201.07311) for more details.
## Additional Information
### Licensing Information
The Pile dataset was released on January 1st, 2021. It is licensed under the MIT License. See the [dataset](https://arxiv.org/abs/2201.07311) for more details.
### Citation Information
The dataset can be cited with the following [BibTeX](http://www.bibtex.org/) reference:
```
@article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
### Contributions
Thanks to [@andstor](https://github.com/andstor) for adding this dataset. |
FrancophonIA/UFAL_Parallel_Corpus_of_North_Levantine_1.0 | FrancophonIA | "2024-10-31T19:11:18Z" | 11,513 | 0 | [
"multilinguality:multilingual",
"language:en",
"language:fr",
"language:arb",
"language:de",
"language:el",
"language:es",
"language:apc",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2024-10-31T18:59:32Z" | ---
language:
- en
- fr
- arb
- de
- el
- es
- apc
multilinguality:
- multilingual
license: cc-by-nc-sa-4.0
configs:
- config_name: apc
data_files:
- split: train
path: "apc.txt"
- config_name: arb
data_files:
- split: train
path: "arb.txt"
- config_name: arb-eng
data_files:
- split: train
path: "arb-eng.txt"
- config_name: deu
data_files:
- split: train
path: "deu.txt"
- config_name: deu-eng
data_files:
- split: train
path: "deu-eng.txt"
- config_name: ell
data_files:
- split: train
path: "ell.txt"
- config_name: ell-eng
data_files:
- split: train
path: "ell-eng.txt"
- config_name: eng
data_files:
- split: train
path: "eng.txt"
- config_name: eng-fra
data_files:
- split: train
path: "eng-fra.txt"
- config_name: eng-spa
data_files:
- split: train
path: "eng-spa.txt"
- config_name: fra
data_files:
- split: train
path: "fra.txt"
- config_name: spa
data_files:
- split: train
path: "spa.txt"
---
> [!NOTE]
> Dataset origin: https://zenodo.org/records/4012218
# UFAL Parallel Corpus of North Levantine 1.0
March 10, 2023
## Authors
Shadi Saleh <[[email protected]](mailto:[email protected])>
Hashem Sellat <[[email protected]](mailto:[email protected])>
Mateusz Krubiński <[[email protected]](mailto:[email protected])>
Adam Pospíšil <[[email protected]](mailto:[email protected])>
Petr Zemánek <[[email protected]](mailto:[email protected])>
Pavel Pecina <[[email protected]](mailto:[email protected])>
## Overview
This is the first release of the UFAL Parallel Corpus of North Levantine, compiled by the Institute of Formal and Applied Linguistics (ÚFAL) at Charles University within the Welcome project (https://welcome-h2020.eu/). The corpus consists of 120,600 multiparallel sentences in English, French, German, Greek, Spanish, and Standard Arabic selected from the OpenSubtitles2018 corpus [1] and manually translated into the North Levantine Arabic language. The corpus was created for the purpose of training machine translation for North Levantine and the other languages.
## Data processing
In OpenSubtitles2018, we identified 3,661,627 sentences in English that were aligned with their translations in all of the following languages: arb, fra, deu, ell, spa, and filtered out those that matched any of the following conditions:
- presence of non-standard characters in the English side (only English alphabet, numbers and the following characters allowed: .!?,:; '$%£€) to reduce noise
- non-capital first letter in the English side (to avoid incomplete sentences)
- presence of less than two infrequent words (to increase lexical richness)
- presence of vulgar words in the English side
Then, we removed exact and near duplicates (detected in the English side) and sampled a subset of approximately 1 million words in the English side. This resulted in 120,771 multiparallel sentences with an average length of 8.28 words per sentence in the English side.
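A minimal sketch of the English-side filters described above follows; the frequency table, the rarity threshold of 100, and the vulgar-word list are illustrative assumptions, since the card does not publish the exact values used:

```python
import re
from collections import Counter

# Only the characters listed in the card are allowed on the English side.
ALLOWED = re.compile(r"^[A-Za-z0-9 .!?,:;'$%£€]+$")

def passes_filters(sentence, word_counts, min_rare_words=2,
                   rare_threshold=100, vulgar_words=frozenset()):
    """Apply the four English-side filter conditions to one sentence."""
    if not ALLOWED.match(sentence):      # non-standard characters
        return False
    if not sentence[0].isupper():        # non-capital first letter
        return False
    words = sentence.lower().split()
    rare = sum(1 for w in words if word_counts.get(w, 0) < rare_threshold)
    if rare < min_rare_words:            # fewer than two infrequent words
        return False
    if any(w in vulgar_words for w in words):
        return False
    return True

counts = Counter({"the": 10_000, "a": 9_000, "zephyr": 3, "quagga": 1})
print(passes_filters("The zephyr startled a quagga.", counts))  # True
print(passes_filters("lowercase start", counts))                # False
```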
The sentences in Standard Arabic were then manually translated to North Levantine Arabic by native speakers. A few erroneous translations were automatically detected (e.g. empty or unfinished translations) and discarded. The remaining translations were aligned with the other languages through Standard Arabic and English. The final corpus comprises 120,600 sentences in English, Spanish, Greek, German, French, Standard Arabic, and the newly added North Levantine Arabic. The table below shows some overall statistics. The languages of the data files are denoted by their ISO 639-3 codes.
| language | ISO 639-3 code | #words |
|:----------------------:|:---------------:|:-------:|
| North Levantine Arabic | apc | 738,812 |
| Standard Arabic | arb | 802,313 |
| German | deu | 940,234 |
| Greek | ell | 869,543 |
| English | eng | 999,193 |
| French | fra | 956,208 |
| Spanish | spa | 920,922 |
The translations are provided in seven files, each containing data in one language. The files are aligned through the line numbers; the order of lines is random. We provide linking of the English-centred sentence pairs to the original data in OpenSubtitles2018. This information is stored in the *.ids files, which are aligned through the line numbers with the corresponding translations. Each line contains tab-separated items: the source filename, the target filename, space-separated positions of the source sentence in the source file, and space-separated positions of the target sentence in the target file.
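A minimal parser for this *.ids format might look like the following (the filenames in the example line are hypothetical):

```python
def parse_ids_line(line):
    """Parse one line of a *.ids file: four tab-separated items, namely the
    source filename, the target filename, space-separated positions of the
    source sentence, and space-separated positions of the target sentence."""
    src_file, tgt_file, src_pos, tgt_pos = line.rstrip("\n").split("\t")
    return {
        "source_file": src_file,
        "target_file": tgt_file,
        "source_positions": [int(p) for p in src_pos.split()],
        "target_positions": [int(p) for p in tgt_pos.split()],
    }

# Hypothetical line; real filenames follow OpenSubtitles2018 conventions.
record = parse_ids_line("en/1234.xml\tfr/1234.xml\t17 18\t19\n")
print(record["source_positions"])  # [17, 18]
```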
## References
[1] Pierre Lison, Jörg Tiedemann, and Milen Kouylekov. 2018. OpenSubtitles2018: Statistical Rescoring of Sentence Alignments in Large, Noisy Parallel Corpora. Proceedings of the Eleventh International Conference on Language Resources and Evaluation, pages 1742–1748. Miyazaki, Japan.
## Acknowledgement
The work was supported by the European Commission via the H2020 Program, project WELCOME, grant agreement: 870930.
## Citation
```
@misc{11234/1-5033,
title = {{UFAL} Parallel Corpus of North Levantine 1.0},
author = {Sellat, Hashem and Saleh, Shadi and Krubi{\'n}ski, Mateusz and Posp{\'{\i}}{\v s}il, Adam and Zem{\'a}nek, Petr and Pecina, Pavel},
url = {http://hdl.handle.net/11234/1-5033},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {Creative Commons - Attribution-{NonCommercial}-{ShareAlike} 4.0 International ({CC} {BY}-{NC}-{SA} 4.0)},
year = {2023}
}
``` |
Jiayi-Pan/Countdown-Tasks-3to4 | Jiayi-Pan | "2025-01-23T00:56:52Z" | 11,507 | 49 | [
"size_categories:100K<n<1M",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-01-23T00:56:50Z" | ---
dataset_info:
features:
- name: target
dtype: int64
- name: nums
sequence: int64
splits:
- name: train
num_bytes: 19650960
num_examples: 490364
download_size: 2845904
dataset_size: 19650960
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
proj-persona/PersonaHub | proj-persona | "2025-03-04T22:01:42Z" | 11,473 | 549 | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:fill-mask",
"task_categories:table-question-answering",
"task_categories:text2text-generation",
"language:en",
"language:zh",
"license:cc-by-nc-sa-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.20094",
"region:us",
"synthetic",
"text",
"math",
"reasoning",
"instruction",
"tool"
] | [
"text-generation",
"text-classification",
"token-classification",
"fill-mask",
"table-question-answering",
"text2text-generation"
] | "2024-06-28T16:35:21Z" | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
- text-classification
- token-classification
- fill-mask
- table-question-answering
- text2text-generation
language:
- en
- zh
tags:
- synthetic
- text
- math
- reasoning
- instruction
- tool
size_categories:
- 100M<n<1B
configs:
- config_name: math
data_files: math.jsonl
- config_name: instruction
data_files: instruction.jsonl
- config_name: reasoning
data_files: reasoning.jsonl
- config_name: knowledge
data_files: knowledge.jsonl
- config_name: npc
data_files: npc.jsonl
- config_name: tool
data_files: tool.jsonl
- config_name: persona
data_files: persona.jsonl
- config_name: elite_persona
data_files:
- split: train
path: ElitePersonas/*
---
# Scaling Synthetic Data Creation with 1,000,000,000 Personas
This repo releases data introduced in our paper [Scaling Synthetic Data Creation with 1,000,000,000 Personas](https://arxiv.org/pdf/2406.20094):
We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce **PERSONA HUB** – a collection of **1 billion diverse personas** automatically curated from web data. These 1 billion personas (~13% of the world's total population), acting as distributed carriers of world knowledge, can tap into almost every perspective encapsulated within the LLM, thereby facilitating the creation of diverse synthetic data at scale for various scenarios. By showcasing PERSONA HUB’s use cases in synthesizing high-quality **mathematical and logical reasoning** problems, **instructions** (i.e., user prompts), **knowledge-rich texts**, **game NPCs** and **tools** (functions) at scale, we demonstrate persona-driven data synthesis is versatile, scalable, flexible, and easy to use, potentially driving a paradigm shift in synthetic data creation and applications in practice, which may have a profound impact on LLM research and development.
<div align="center">
<img src="./assets/persona_overview.png" width="90%">
</div>
## Data Release
### Synthetic Data Samples
To facilitate research in persona-driven data synthesis, we are initially releasing the following synthetic data samples we created with various personas, including:
* **50,000 math problems**
* **50,000 logical reasoning problems**
* **50,000 instructions**
* **10,000 knowledge-rich texts**
* **10,000 game NPCs**
* **5,000 tools (functions)**
### Persona Hub
We also release a subset of our PERSONA HUB, including:
* **200,000 personas (early preview)**
* **370,000,000 elite personas (added in Feb 2025)**
## Run Demo
One can try the demo to synthesize data with PERSONA HUB simply by running code in https://github.com/tencent-ailab/persona-hub:
```bash
# ensure that you have installed datasets and openai (pip install datasets openai) and configured the openai_api_key before running
bash demo_openai_synthesize.sh # using gpt4o to synthesize data with PERSONA HUB
```
or
```bash
# ensure that you have installed datasets, transformers and vllm (pip install datasets transformers vllm) before running
bash demo_vllm_synthesize.sh # using open-sourced models to synthesize data with PERSONA HUB
```
Note that the data synthesis prompt templates we provide are for reference only. You can customize your desired prompts in `code/prompt_templates.py`.
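The core persona-driven idea (prefix a persona description to a task instruction before querying an LLM) can be sketched as below; the template strings are illustrative assumptions, not the project's actual templates, which live in `code/prompt_templates.py`:

```python
def persona_to_prompt(persona, task="math"):
    """Build a simple persona-driven synthesis prompt. The template text is
    an illustrative assumption and may differ from the repo's real prompts."""
    instructions = {
        "math": "Create a challenging math problem that this persona might pose.",
        "instruction": "Write an instruction (user prompt) this persona might give an AI assistant.",
    }
    return f"Persona: {persona}\n\n{instructions[task]}"

prompt = persona_to_prompt("A retired airline pilot who teaches weekend navigation classes")
print(prompt.splitlines()[0])  # Persona: A retired airline pilot ...
```

The resulting string would then be sent to a model such as GPT-4o or an open-source LLM, as in the demo scripts above.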
## Argilla
You can also access this dataset in [Argilla space](https://argilla-data-explorers.hf.space/), as introduced in the following video:
* Video: https://youtu.be/timmCn8Nr6g?feature=shared
## Contact
* Please email `[email protected]` or `[email protected]`
* Github page: https://github.com/tencent-ailab/persona-hub
## Disclaimer
PERSONA HUB can facilitate synthetic data creation at a billion-scale to simulate diverse inputs (i.e., use cases) from a wide variety of real-world users. If this data is used as input to query a target LLM to obtain its outputs at scale, there is a high risk that the LLM's knowledge, intelligence and capabilities will be dumped and easily replicated, thereby challenging the leading position of the most powerful LLMs. It is crucial to avoid misuse and ensure ethical and responsible application to prevent privacy violations and other ethical concerns.
The released data is all generated by publicly available models (GPT-4, Llama-3 and Qwen), and is intended for research purposes only. Users must also comply with the respective license agreements and usage policies of these models when using the synthesized data. The data may contain inaccuracies, unsafe content, or biases, for which we cannot be held responsible. Please evaluate its accuracy and suitability before use. Tencent and its licensors provide the data AS-IS, without warranty of any kind, express or implied. The views and opinions expressed in the data do not necessarily reflect those of Tencent.
HuggingFaceM4/Docmatix | HuggingFaceM4 | "2024-08-26T08:15:21Z" | 11,465 | 259 | [
"task_categories:visual-question-answering",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2408.12637",
"region:us",
"docvqa"
] | [
"visual-question-answering"
] | "2024-07-17T11:33:00Z" | ---
language:
- en
license: mit
size_categories:
- 1M<n<10M
task_categories:
- visual-question-answering
pretty_name: Docmatix
tags:
- docvqa
configs:
- config_name: images
data_files:
- split: train
path: data/train-*
- config_name: pdf
data_files:
- split: train
path: pdf/train-*
- config_name: zero-shot-exp
data_files:
- split: train
path: zero-shot-exp/train-*
- split: test
path: zero-shot-exp/test-*
dataset_info:
- config_name: images
features:
- name: images
sequence: image
- name: texts
list:
- name: user
dtype: string
- name: assistant
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 552957537722.77
num_examples: 1273215
download_size: 159404414330
dataset_size: 552957537722.77
- config_name: pdf
features:
- name: pdf
dtype: binary
- name: texts
list:
- name: user
dtype: string
- name: assistant
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 458612867150
num_examples: 1273245
download_size: 431829972210
dataset_size: 458612867150
- config_name: zero-shot-exp
features:
- name: images
sequence: image
- name: texts
list:
- name: user
dtype: string
- name: assistant
dtype: string
- name: source
dtype: string
splits:
- name: test
num_bytes: 68900253.0
num_examples: 200
- name: train
num_bytes: 578335690.5
num_examples: 1700
download_size: 642963847
dataset_size: 647235943.5
---
# Dataset Card for Docmatix

## Dataset description
Docmatix is part of the Idefics3 release (stay tuned).
It is a massive dataset for Document Visual Question Answering that was used for the fine-tuning of the vision-language model Idefics3.
## Load the dataset
To load the dataset, install the library `datasets` with `pip install datasets`. Then,
```
from datasets import load_dataset
ds = load_dataset("HuggingFaceM4/Docmatix")
```
If you want the dataset to provide the PDF files as binaries instead of the images, do:
```
from datasets import load_dataset
ds = load_dataset("HuggingFaceM4/Docmatix", "pdf")
```
## Data fields
An example of a sample looks as follows:
```
{
"images" = [PIL.Image]
"texts" = [
{ "user": "What is the purpose of the Confirmation Statement mentioned in the document?",
"assistant": "The purpose of the Confirmation Statement is to confirm that all information required to be delivered by the company to the registrar in relation to the confirmation period concerned has been delivered or is being delivered at the same time as the confirmation statement.",
"source": "PDFA key: 244" },
{ "user": "When was the filing received as per the document?",
"assistant": "The filing was received for filing in Electronic Format on the 23/03/2021.",
"source": "PDFA key: 244" },
]
}
```
In `images`, there is a list of up to 4 images, to be placed before the text.
In `texts`, there is a conversation between a user and an assistant about the images that is represented by a list of turns.
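A sample can be unrolled into flat question/answer pairs; in this sketch the inline `sample` (mirroring the documented schema, images omitted) stands in for a loaded row:

```python
def flatten_turns(sample):
    """Turn one Docmatix sample (documented fields: `images`, and `texts`
    with `user`/`assistant`/`source` keys) into (question, answer) pairs."""
    return [(turn["user"], turn["assistant"]) for turn in sample["texts"]]

# Inline stand-in following the documented schema (images omitted).
sample = {
    "images": [],
    "texts": [
        {"user": "What is shown?", "assistant": "A confirmation statement.", "source": "PDFA key: 244"},
        {"user": "When was it filed?", "assistant": "On 23/03/2021.", "source": "PDFA key: 244"},
    ],
}
print(flatten_turns(sample)[0][0])  # What is shown?
```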
## Comparison to other DocVQA datasets
| Dataset | # images | # Q/A pairs | # tokens |
|----------------------|----------|-------------|------------|
| *Document visual question answering* | | | |
| **Docmatix** | **2,444,750**| **9,500,000** | **390,000,000**|
| DocVQA | 10,189 | 39,463 | 337,829 |
| TextCaps | 21,953 | 21,953 | 389,658 |
| TextVQA | 21,953 | 34,602 | 181,918 |
| ST-VQA | 17,247 | 23,121 | 127,846 |
| OCR-VQA | 165,746 | 801,579 | 6,073,824 |
| VisualMRC | 3,027 | 11,988 | 168,828 |
| IAM | 5,663 | 5,663 | 144,216 |
| InfoVQA | 2,118 | 10,074 | 61,048 |
| Diagram image-to-text| 300 | 300 | 22,196 |
# Citation
**BibTeX:**
```bibtex
@misc{laurençon2024building,
title={Building and better understanding vision-language models: insights and future directions.},
author={Hugo Laurençon and Andrés Marafioti and Victor Sanh and Léo Tronchon},
year={2024},
eprint={2408.12637},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
mteb/sts12-sts | mteb | "2022-09-27T19:11:50Z" | 11,454 | 7 | [
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-04-20T10:47:29Z" | ---
language:
- en
--- |
alvin319/semantic-memorization-partial-2023-09-03 | alvin319 | "2023-09-04T09:39:21Z" | 11,344 | 0 | [
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-09-04T01:07:54Z" | ---
license: mit
configs:
- config_name: default
data_files:
- split: pile_deduped_70m
path: data/pile_deduped_70m-*
- split: memories_deduped_70m
path: data/memories_deduped_70m-*
- split: pile_deduped_160m
path: data/pile_deduped_160m-*
- split: memories_deduped_160m
path: data/memories_deduped_160m-*
- split: pile_deduped_410m
path: data/pile_deduped_410m-*
- split: memories_deduped_410m
path: data/memories_deduped_410m-*
- split: pile_deduped_1b
path: data/pile_deduped_1b-*
- split: memories_deduped_1b
path: data/memories_deduped_1b-*
- split: pile_deduped_1.4b
path: data/pile_deduped_1.4b-*
- split: memories_deduped_1.4b
path: data/memories_deduped_1.4b-*
- split: pile_deduped_2.8b
path: data/pile_deduped_2.8b-*
- split: memories_deduped_2.8b
path: data/memories_deduped_2.8b-*
- split: pile_deduped_6.9b
path: data/pile_deduped_6.9b-*
- split: memories_deduped_6.9b
path: data/memories_deduped_6.9b-*
- split: pile_deduped_12b
path: data/pile_deduped_12b-*
- split: memories_deduped_12b
path: data/memories_deduped_12b-*
- split: pile_duped_70m
path: data/pile_duped_70m-*
- split: memories_duped_70m
path: data/memories_duped_70m-*
- split: pile_duped_160m
path: data/pile_duped_160m-*
- split: memories_duped_160m
path: data/memories_duped_160m-*
- split: pile_duped_410m
path: data/pile_duped_410m-*
- split: memories_duped_410m
path: data/memories_duped_410m-*
- split: pile_duped_1b
path: data/pile_duped_1b-*
- split: memories_duped_1b
path: data/memories_duped_1b-*
- split: pile_duped_1.4b
path: data/pile_duped_1.4b-*
- split: memories_duped_1.4b
path: data/memories_duped_1.4b-*
- split: pile_duped_2.8b
path: data/pile_duped_2.8b-*
- split: memories_duped_2.8b
path: data/memories_duped_2.8b-*
- split: pile_duped_6.9b
path: data/pile_duped_6.9b-*
- split: memories_duped_6.9b
path: data/memories_duped_6.9b-*
- split: pile_duped_12b
path: data/pile_duped_12b-*
- split: memories_duped_12b
path: data/memories_duped_12b-*
dataset_info:
features:
- name: sequence_id
dtype: int64
- name: tokens
sequence: int64
- name: memorized_frequencies
sequence: int64
- name: non_memorized_frequencies
sequence: int64
- name: memorization_score
dtype: float64
- name: sequence_frequency
dtype: int64
splits:
- name: pile_deduped_70m
num_bytes: 7860000000
num_examples: 5000000
- name: memories_deduped_70m
num_bytes: 646796256
num_examples: 411448
- name: pile_deduped_160m
num_bytes: 7860000000
num_examples: 5000000
- name: memories_deduped_160m
num_bytes: 913638540
num_examples: 581195
- name: pile_deduped_410m
num_bytes: 7860000000
num_examples: 5000000
- name: memories_deduped_410m
num_bytes: 1274953308
num_examples: 811039
- name: pile_deduped_1b
num_bytes: 7860000000
num_examples: 5000000
- name: memories_deduped_1b
num_bytes: 1623663780
num_examples: 1032865
- name: pile_deduped_1.4b
num_bytes: 7860000000
num_examples: 5000000
- name: memories_deduped_1.4b
num_bytes: 1647608484
num_examples: 1048097
- name: pile_deduped_2.8b
num_bytes: 7860000000
num_examples: 5000000
- name: memories_deduped_2.8b
num_bytes: 2130391692
num_examples: 1355211
- name: pile_deduped_6.9b
num_bytes: 7860000000
num_examples: 5000000
- name: memories_deduped_6.9b
num_bytes: 2641422168
num_examples: 1680294
- name: pile_deduped_12b
num_bytes: 7860000000
num_examples: 5000000
- name: memories_deduped_12b
num_bytes: 2941549980
num_examples: 1871215
- name: pile_duped_70m
num_bytes: 7860000000
num_examples: 5000000
- name: memories_duped_70m
num_bytes: 729334116
num_examples: 463953
- name: pile_duped_160m
num_bytes: 7860000000
num_examples: 5000000
- name: memories_duped_160m
num_bytes: 1084165956
num_examples: 689673
- name: pile_duped_410m
num_bytes: 7860000000
num_examples: 5000000
- name: memories_duped_410m
num_bytes: 1525376052
num_examples: 970341
- name: pile_duped_1b
num_bytes: 7860000000
num_examples: 5000000
- name: memories_duped_1b
num_bytes: 1974653652
num_examples: 1256141
- name: pile_duped_1.4b
num_bytes: 7860000000
num_examples: 5000000
- name: memories_duped_1.4b
num_bytes: 2159490984
num_examples: 1373722
- name: pile_duped_2.8b
num_bytes: 7860000000
num_examples: 5000000
- name: memories_duped_2.8b
num_bytes: 2633221044
num_examples: 1675077
- name: pile_duped_6.9b
num_bytes: 7860000000
num_examples: 5000000
- name: memories_duped_6.9b
num_bytes: 3334163268
num_examples: 2120969
- name: pile_duped_12b
num_bytes: 7860000000
num_examples: 5000000
- name: memories_duped_12b
num_bytes: 3745016472
num_examples: 2382326
download_size: 11256676441
dataset_size: 156765445752
---
This dataset is a partial computation of metrics (memorized token frequencies, non-memorized token frequencies, sequence frequencies) needed for [research](https://github.com/EleutherAI/semantic-memorization). |
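The card above exposes a per-sequence `memorization_score` (float64) alongside token-level frequency counts. The exact pipeline lives in the linked repository; as a minimal illustrative sketch only, memorization scores in this line of research are commonly defined as the fraction of continuation tokens a model reproduces exactly under greedy decoding (1.0 meaning fully memorized). The function below is a hypothetical helper, not the authors' implementation:

```python
def memorization_score(true_continuation, generated_continuation):
    """Fraction of positions where greedily generated tokens match the
    ground-truth continuation; 1.0 means the sequence was fully memorized.

    Illustrative definition only -- see the linked semantic-memorization
    repository for the actual computation behind this dataset.
    """
    if len(true_continuation) != len(generated_continuation):
        raise ValueError("continuations must be the same length")
    matches = sum(t == g for t, g in zip(true_continuation, generated_continuation))
    return matches / len(true_continuation)


# Two of four continuation tokens diverge, so the score is 0.5.
print(memorization_score([5, 9, 2, 7], [5, 9, 4, 1]))  # 0.5
```

A sequence would then land in a `memories_*` split when its score crosses the memorization threshold used by the authors.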
AmazonScience/MultilingualMultiModalClassification | AmazonScience | "2024-12-06T14:00:39Z" | 11,335 | 2 | [
"license:cc-by-4.0",
"region:us"
] | null | "2023-05-12T20:22:46Z" | ---
license: cc-by-4.0
dataset_info:
- config_name: multieurlex-doc-bg
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 407278322
num_examples: 15979
- name: validation
num_bytes: 121021498
num_examples: 4997
- name: test
num_bytes: 126194699
num_examples: 4988
download_size: 94161088
dataset_size: 654494519
- config_name: multieurlex-doc-cs
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 465064539
num_examples: 23056
- name: validation
num_bytes: 98206202
num_examples: 4997
- name: test
num_bytes: 101905013
num_examples: 4988
download_size: 103341160
dataset_size: 665175754
- config_name: multieurlex-doc-da
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1137431321
num_examples: 54806
- name: validation
num_bytes: 100630592
num_examples: 4997
- name: test
num_bytes: 103660755
num_examples: 4988
download_size: 211774968
dataset_size: 1341722668
- config_name: multieurlex-doc-de
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1156790099
num_examples: 54804
- name: test
num_bytes: 108731388
num_examples: 4988
- name: validation
num_bytes: 105635067
num_examples: 4997
download_size: 214358454
dataset_size: 1371156554
- config_name: multieurlex-doc-el
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1412326683
num_examples: 54828
- name: validation
num_bytes: 127450631
num_examples: 4997
- name: test
num_bytes: 132083962
num_examples: 4988
download_size: 249838066
dataset_size: 1671861276
- config_name: multieurlex-doc-en
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1208998381
num_examples: 54808
- name: test
num_bytes: 110325080
num_examples: 4988
- name: validation
num_bytes: 106866095
num_examples: 4997
download_size: 223853363
dataset_size: 1426189556
- config_name: multieurlex-doc-es
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1354212928
num_examples: 52621
- name: test
num_bytes: 128661948
num_examples: 4988
- name: validation
num_bytes: 124535827
num_examples: 4997
download_size: 254828898
dataset_size: 1607410703
- config_name: multieurlex-doc-et
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 385076032
num_examples: 22986
- name: validation
num_bytes: 82795960
num_examples: 4997
- name: test
num_bytes: 85548380
num_examples: 4988
download_size: 87523878
dataset_size: 553420372
- config_name: multieurlex-doc-fi
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 746551995
num_examples: 42362
- name: validation
num_bytes: 88644474
num_examples: 4997
- name: test
num_bytes: 90495504
num_examples: 4988
download_size: 144867468
dataset_size: 925691973
- config_name: multieurlex-doc-fr
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1308833036
num_examples: 54804
- name: validation
num_bytes: 117528920
num_examples: 4997
- name: test
num_bytes: 122076609
num_examples: 4988
download_size: 244074331
dataset_size: 1548438565
- config_name: multieurlex-doc-hr
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 166426724
num_examples: 7944
- name: validation
num_bytes: 52267708
num_examples: 2499
- name: test
num_bytes: 99712738
num_examples: 4988
download_size: 49985102
dataset_size: 318407170
- config_name: multieurlex-doc-hu
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 430043841
num_examples: 22542
- name: validation
num_bytes: 94622333
num_examples: 4997
- name: test
num_bytes: 97747785
num_examples: 4988
download_size: 97614905
dataset_size: 622413959
- config_name: multieurlex-doc-it
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1249061937
num_examples: 54805
- name: validation
num_bytes: 110908837
num_examples: 4997
- name: test
num_bytes: 114867681
num_examples: 4987
download_size: 231926930
dataset_size: 1474838455
- config_name: multieurlex-doc-nl
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1286183580
num_examples: 54803
- name: validation
num_bytes: 112858254
num_examples: 4997
- name: test
num_bytes: 116992911
num_examples: 4988
download_size: 237826260
dataset_size: 1516034745
- config_name: multieurlex-doc-pl
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 471614388
num_examples: 23063
- name: validation
num_bytes: 101196012
num_examples: 4997
- name: test
num_bytes: 104384366
num_examples: 4988
download_size: 104236091
dataset_size: 677194766
- config_name: multieurlex-doc-pt
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1269347766
num_examples: 52205
- name: validation
num_bytes: 117194055
num_examples: 4997
- name: test
num_bytes: 120747746
num_examples: 4988
download_size: 238776517
dataset_size: 1507289567
- config_name: multieurlex-doc-ro
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 359230898
num_examples: 15914
- name: validation
num_bytes: 107876284
num_examples: 4997
- name: test
num_bytes: 112291364
num_examples: 4988
download_size: 89545760
dataset_size: 579398546
- config_name: multieurlex-doc-sv
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 867755140
num_examples: 42356
- name: validation
num_bytes: 101193984
num_examples: 4997
- name: test
num_bytes: 103453976
num_examples: 4988
download_size: 166948914
dataset_size: 1072403100
- config_name: wiki-doc-ar-img
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Earthquake
'1': SolarEclipse
'2': MusicFestival
'3': MilitaryConflict
'4': FilmFestival
'5': Convention
'6': FootballMatch
'7': OlympicEvent
'8': GrandPrix
'9': GolfTournament
'10': WomensTennisAssociationTournament
'11': TennisTournament
'12': SoccerTournament
'13': WrestlingEvent
'14': HorseRace
'15': CyclingRace
'16': MixedMartialArtsEvent
'17': Election
'18': SoccerClubSeason
'19': NationalFootballLeagueSeason
'20': NCAATeamSeason
'21': BaseballSeason
'22': VideoGame
'23': BiologicalDatabase
'24': EurovisionSongContestEntry
'25': Album
'26': Musical
'27': ClassicalMusicComposition
'28': ArtistDiscography
'29': Single
'30': Poem
'31': Magazine
'32': Newspaper
'33': AcademicJournal
'34': Play
'35': Manga
'36': ComicStrip
'37': Anime
'38': HollywoodCartoon
'39': MusicGenre
'40': Grape
'41': Conifer
'42': Fern
'43': Moss
'44': GreenAlga
'45': CultivatedVariety
'46': Cycad
'47': Arachnid
'48': Fish
'49': Insect
'50': Reptile
'51': Mollusca
'52': Bird
'53': Amphibian
'54': RaceHorse
'55': Crustacean
'56': Fungus
'57': Lighthouse
'58': Theatre
'59': RollerCoaster
'60': Airport
'61': RailwayStation
'62': Road
'63': RailwayLine
'64': Bridge
'65': RoadTunnel
'66': Dam
'67': CricketGround
'68': Stadium
'69': Racecourse
'70': GolfCourse
'71': Prison
'72': Hospital
'73': Museum
'74': Hotel
'75': Library
'76': Restaurant
'77': ShoppingMall
'78': HistoricBuilding
'79': Castle
'80': Volcano
'81': MountainPass
'82': Glacier
'83': Canal
'84': River
'85': Lake
'86': Mountain
'87': Cave
'88': MountainRange
'89': Galaxy
'90': ArtificialSatellite
'91': Planet
'92': Town
'93': Village
'94': Diocese
'95': AutomobileEngine
'96': SupremeCourtOfTheUnitedStatesCase
'97': MilitaryPerson
'98': Religious
'99': Engineer
'100': BusinessPerson
'101': SportsTeamMember
'102': SoccerManager
'103': Chef
'104': Philosopher
'105': CollegeCoach
'106': ScreenWriter
'107': Historian
'108': Poet
'109': President
'110': PrimeMinister
'111': Congressman
'112': Senator
'113': Mayor
'114': MemberOfParliament
'115': Governor
'116': Monarch
'117': PlayboyPlaymate
'118': Cardinal
'119': Saint
'120': Pope
'121': ChristianBishop
'122': BeautyQueen
'123': RadioHost
'124': HandballPlayer
'125': Cricketer
'126': Jockey
'127': SumoWrestler
'128': AmericanFootballPlayer
'129': LacrossePlayer
'130': TennisPlayer
'131': AmateurBoxer
'132': SoccerPlayer
'133': Rower
'134': TableTennisPlayer
'135': BeachVolleyballPlayer
'136': SpeedwayRider
'137': FormulaOneRacer
'138': NascarDriver
'139': Swimmer
'140': IceHockeyPlayer
'141': FigureSkater
'142': Skater
'143': Curler
'144': Skier
'145': GolfPlayer
'146': SquashPlayer
'147': PokerPlayer
'148': BadmintonPlayer
'149': ChessPlayer
'150': RugbyPlayer
'151': DartsPlayer
'152': NetballPlayer
'153': MartialArtist
'154': Gymnast
'155': Canoeist
'156': GaelicGamesPlayer
'157': HorseRider
'158': BaseballPlayer
'159': Cyclist
'160': Bodybuilder
'161': AustralianRulesFootballPlayer
'162': BasketballPlayer
'163': Ambassador
'164': Baronet
'165': Model
'166': Architect
'167': Judge
'168': Economist
'169': Journalist
'170': Painter
'171': Comedian
'172': ComicsCreator
'173': ClassicalMusicArtist
'174': FashionDesigner
'175': AdultActor
'176': VoiceActor
'177': Photographer
'178': HorseTrainer
'179': Entomologist
'180': Medician
'181': SoapCharacter
'182': AnimangaCharacter
'183': MythologicalFigure
'184': Noble
'185': Astronaut
'186': OfficeHolder
'187': PublicTransitSystem
'188': BusCompany
'189': LawFirm
'190': Winery
'191': RecordLabel
'192': Brewery
'193': Airline
'194': Publisher
'195': Bank
'196': PoliticalParty
'197': Legislature
'198': Band
'199': BasketballLeague
'200': SoccerLeague
'201': IceHockeyLeague
'202': BaseballLeague
'203': RugbyLeague
'204': MilitaryUnit
'205': University
'206': School
'207': CyclingTeam
'208': CanadianFootballTeam
'209': BasketballTeam
'210': AustralianFootballTeam
'211': HockeyTeam
'212': HandballTeam
'213': CricketTeam
'214': RugbyClub
'215': TradeUnion
'216': RadioStation
'217': BroadcastNetwork
'218': TelevisionStation
splits:
- name: train
num_bytes: 7919491304.875
num_examples: 8129
- name: test
num_bytes: 1691686089.125
num_examples: 1743
- name: validation
num_bytes: 1701166069.25
num_examples: 1742
download_size: 11184835705
dataset_size: 11312343463.25
- config_name: wiki-doc-ar-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 8062791605.746
num_examples: 8129
- name: test
num_bytes: 1722071386.382
num_examples: 1743
- name: validation
num_bytes: 1731948280.766
num_examples: 1742
download_size: 11226133595
dataset_size: 11516811272.894001
- config_name: wiki-doc-de-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 59980253508.125
num_examples: 41047
- name: validation
num_bytes: 12842370238.5
num_examples: 8796
- name: test
num_bytes: 12835845039.5
num_examples: 8796
download_size: 84274708249
dataset_size: 85658468786.125
- config_name: wiki-doc-en-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 201861788293.75
num_examples: 152506
- name: validation
num_bytes: 43199951001.0
num_examples: 32680
- name: test
num_bytes: 43177176523.0
num_examples: 32680
download_size: 282546982586
dataset_size: 288238915817.75
- config_name: wiki-doc-es-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 58485685843.875
num_examples: 42713
- name: validation
num_bytes: 12550991282.569
num_examples: 9153
- name: test
num_bytes: 12546829230.442
num_examples: 9154
download_size: 82063829353
dataset_size: 83583506356.886
- config_name: wiki-doc-fr-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 40498485460.875
num_examples: 33329
- name: validation
num_bytes: 8641683528.108
num_examples: 7142
- name: test
num_bytes: 8649896334.108
num_examples: 7142
download_size: 56468886228
dataset_size: 57790065323.091
- config_name: wiki-doc-it-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 25293800981.25
num_examples: 20166
- name: validation
num_bytes: 5433600428.554
num_examples: 4321
- name: test
num_bytes: 5411100552.106
num_examples: 4322
download_size: 35441755215
dataset_size: 36138501961.91
- config_name: wiki-doc-ja-img
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 30506965411.75
num_examples: 23250
- name: test
num_bytes: 6540291049.322
num_examples: 4982
- name: validation
num_bytes: 6513584731.193
num_examples: 4983
download_size: 43248429810
dataset_size: 43560841192.265
- config_name: wiki-doc-ja-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 30650799906.25
num_examples: 23254
- name: validation
num_bytes: 6543258936.193
num_examples: 4983
- name: test
num_bytes: 6570176552.322
num_examples: 4982
download_size: 43344042661
dataset_size: 43764235394.765
- config_name: wiki-doc-pt-img
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 21744787468.0
num_examples: 20168
- name: test
num_bytes: 4702448837.106
num_examples: 4322
- name: validation
num_bytes: 4646765273.106
num_examples: 4322
download_size: 30769070664
dataset_size: 31094001578.211998
- config_name: wiki-doc-pt-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 22164275072.0
num_examples: 20168
- name: validation
num_bytes: 4735717368.106
num_examples: 4322
- name: test
num_bytes: 4792666148.106
num_examples: 4322
download_size: 30891429558
dataset_size: 31692658588.211998
- config_name: wiki-doc-pt-merged-v2
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 22164275065.16
num_examples: 20168
- name: validation
num_bytes: 4735717370.818
num_examples: 4322
- name: test
num_bytes: 4792666150.818
num_examples: 4322
download_size: 30891429558
dataset_size: 31692658586.796
- config_name: wiki-doc-zh-img
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 30248140475.625
num_examples: 23099
- name: test
num_bytes: 6471322916.25
num_examples: 4950
- name: validation
num_bytes: 6507120137.25
num_examples: 4950
download_size: 42958276266
dataset_size: 43226583529.125
- config_name: wiki-doc-zh-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 30382212749.625
num_examples: 23099
- name: test
num_bytes: 6499933446.25
num_examples: 4950
- name: validation
num_bytes: 6536010774.25
num_examples: 4950
download_size: 43027961181
dataset_size: 43418156970.125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: multieurlex-doc-bg
data_files:
- split: train
path: multieurlex-doc-bg/train-*
- split: validation
path: multieurlex-doc-bg/validation-*
- split: test
path: multieurlex-doc-bg/test-*
- config_name: multieurlex-doc-cs
data_files:
- split: train
path: multieurlex-doc-cs/train-*
- split: validation
path: multieurlex-doc-cs/validation-*
- split: test
path: multieurlex-doc-cs/test-*
- config_name: multieurlex-doc-da
data_files:
- split: train
path: multieurlex-doc-da/train-*
- split: validation
path: multieurlex-doc-da/validation-*
- split: test
path: multieurlex-doc-da/test-*
- config_name: multieurlex-doc-de
data_files:
- split: train
path: multieurlex-doc-de/train-*
- split: test
path: multieurlex-doc-de/test-*
- split: validation
path: multieurlex-doc-de/validation-*
- config_name: multieurlex-doc-el
data_files:
- split: train
path: multieurlex-doc-el/train-*
- split: validation
path: multieurlex-doc-el/validation-*
- split: test
path: multieurlex-doc-el/test-*
- config_name: multieurlex-doc-en
data_files:
- split: train
path: multieurlex-doc-en/train-*
- split: test
path: multieurlex-doc-en/test-*
- split: validation
path: multieurlex-doc-en/validation-*
- config_name: multieurlex-doc-es
data_files:
- split: train
path: multieurlex-doc-es/train-*
- split: test
path: multieurlex-doc-es/test-*
- split: validation
path: multieurlex-doc-es/validation-*
- config_name: multieurlex-doc-et
data_files:
- split: train
path: multieurlex-doc-et/train-*
- split: validation
path: multieurlex-doc-et/validation-*
- split: test
path: multieurlex-doc-et/test-*
- config_name: multieurlex-doc-fi
data_files:
- split: train
path: multieurlex-doc-fi/train-*
- split: validation
path: multieurlex-doc-fi/validation-*
- split: test
path: multieurlex-doc-fi/test-*
- config_name: multieurlex-doc-fr
data_files:
- split: train
path: multieurlex-doc-fr/train-*
- split: validation
path: multieurlex-doc-fr/validation-*
- split: test
path: multieurlex-doc-fr/test-*
- config_name: multieurlex-doc-hr
data_files:
- split: train
path: multieurlex-doc-hr/train-*
- split: validation
path: multieurlex-doc-hr/validation-*
- split: test
path: multieurlex-doc-hr/test-*
- config_name: multieurlex-doc-hu
data_files:
- split: train
path: multieurlex-doc-hu/train-*
- split: validation
path: multieurlex-doc-hu/validation-*
- split: test
path: multieurlex-doc-hu/test-*
- config_name: multieurlex-doc-it
data_files:
- split: train
path: multieurlex-doc-it/train-*
- split: validation
path: multieurlex-doc-it/validation-*
- split: test
path: multieurlex-doc-it/test-*
- config_name: multieurlex-doc-nl
data_files:
- split: train
path: multieurlex-doc-nl/train-*
- split: validation
path: multieurlex-doc-nl/validation-*
- split: test
path: multieurlex-doc-nl/test-*
- config_name: multieurlex-doc-pl
data_files:
- split: train
path: multieurlex-doc-pl/train-*
- split: validation
path: multieurlex-doc-pl/validation-*
- split: test
path: multieurlex-doc-pl/test-*
- config_name: multieurlex-doc-pt
data_files:
- split: train
path: multieurlex-doc-pt/train-*
- split: validation
path: multieurlex-doc-pt/validation-*
- split: test
path: multieurlex-doc-pt/test-*
- config_name: multieurlex-doc-ro
data_files:
- split: train
path: multieurlex-doc-ro/train-*
- split: validation
path: multieurlex-doc-ro/validation-*
- split: test
path: multieurlex-doc-ro/test-*
- config_name: multieurlex-doc-sv
data_files:
- split: train
path: multieurlex-doc-sv/train-*
- split: validation
path: multieurlex-doc-sv/validation-*
- split: test
path: multieurlex-doc-sv/test-*
- config_name: wiki-doc-ar-img
data_files:
- split: train
path: wiki-doc-ar-img/train-*
- split: test
path: wiki-doc-ar-img/test-*
- split: validation
path: wiki-doc-ar-img/validation-*
- config_name: wiki-doc-ar-merged
data_files:
- split: train
path: wiki-doc-ar-merged/train-*
- split: test
path: wiki-doc-ar-merged/test-*
- split: validation
path: wiki-doc-ar-merged/validation-*
- config_name: wiki-doc-de-merged
data_files:
- split: train
path: wiki-doc-de-merged/train-*
- split: validation
path: wiki-doc-de-merged/validation-*
- split: test
path: wiki-doc-de-merged/test-*
- config_name: wiki-doc-en-merged
data_files:
- split: train
path: wiki-doc-en-merged/train-*
- split: validation
path: wiki-doc-en-merged/validation-*
- split: test
path: wiki-doc-en-merged/test-*
- config_name: wiki-doc-es-merged
data_files:
- split: train
path: wiki-doc-es-merged/train-*
- split: validation
path: wiki-doc-es-merged/validation-*
- split: test
path: wiki-doc-es-merged/test-*
- config_name: wiki-doc-fr-merged
data_files:
- split: train
path: wiki-doc-fr-merged/train-*
- split: validation
path: wiki-doc-fr-merged/validation-*
- split: test
path: wiki-doc-fr-merged/test-*
- config_name: wiki-doc-it-merged
data_files:
- split: train
path: wiki-doc-it-merged/train-*
- split: validation
path: wiki-doc-it-merged/validation-*
- split: test
path: wiki-doc-it-merged/test-*
- config_name: wiki-doc-ja-img
data_files:
- split: train
path: wiki-doc-ja-img/train-*
- split: test
path: wiki-doc-ja-img/test-*
- split: validation
path: wiki-doc-ja-img/validation-*
- config_name: wiki-doc-ja-merged
data_files:
- split: train
path: wiki-doc-ja-merged/train-*
- split: validation
path: wiki-doc-ja-merged/validation-*
- split: test
path: wiki-doc-ja-merged/test-*
- config_name: wiki-doc-pt-img
data_files:
- split: train
path: wiki-doc-pt-img/train-*
- split: test
path: wiki-doc-pt-img/test-*
- split: validation
path: wiki-doc-pt-img/validation-*
- config_name: wiki-doc-pt-merged
data_files:
- split: train
path: wiki-doc-pt-merged/train-*
- split: validation
path: wiki-doc-pt-merged/validation-*
- split: test
path: wiki-doc-pt-merged/test-*
- config_name: wiki-doc-pt-merged-v2
data_files:
- split: train
path: wiki-doc-pt-merged-v2/train-*
- split: validation
path: wiki-doc-pt-merged-v2/validation-*
- split: test
path: wiki-doc-pt-merged-v2/test-*
- config_name: wiki-doc-zh-img
data_files:
- split: train
path: wiki-doc-zh-img/train-*
- split: test
path: wiki-doc-zh-img/test-*
- split: validation
path: wiki-doc-zh-img/validation-*
- config_name: wiki-doc-zh-merged
data_files:
- split: train
path: wiki-doc-zh-merged/train-*
- split: test
path: wiki-doc-zh-merged/test-*
- split: validation
path: wiki-doc-zh-merged/validation-*
---
## Additional Information
To load the dataset:
```
import datasets
ds = datasets.load_dataset("AmazonScience/MultilingualMultiModalClassification", data_dir="wiki-doc-ar-merged")
print(ds)
DatasetDict({
train: Dataset({
features: ['image', 'filename', 'words', 'ocr_bboxes', 'label'],
num_rows: 8129
})
validation: Dataset({
features: ['image', 'filename', 'words', 'ocr_bboxes', 'label'],
num_rows: 1742
})
test: Dataset({
features: ['image', 'filename', 'words', 'ocr_bboxes', 'label'],
num_rows: 1743
})
})
# In case you encountered `NonMatchingSplitsSizesError`, try out the following:
# from datasets import Image, Value, Sequence, ClassLabel, Features
# features = Features({'image': Image(mode=None, decode=True, id=None), 'filename': Value(dtype='string', id=None), 'words': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'ocr_bboxes': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'label': ClassLabel(names=['AcademicJournal', 'AdultActor', 'Album', 'AmateurBoxer', 'Ambassador', 'AmericanFootballPlayer', 'Amphibian', 'AnimangaCharacter', 'Anime', 'Arachnid', 'Baronet', 'BasketballTeam', 'BeautyQueen', 'BroadcastNetwork', 'BusCompany', 'BusinessPerson', 'CanadianFootballTeam', 'Canal', 'Cardinal', 'Cave', 'ChristianBishop', 'ClassicalMusicArtist', 'ClassicalMusicComposition', 'CollegeCoach', 'Comedian', 'ComicsCreator', 'Congressman', 'Conifer', 'Convention', 'Cricketer', 'Crustacean', 'CultivatedVariety', 'Cycad', 'Dam', 'Economist', 'Engineer', 'Entomologist', 'EurovisionSongContestEntry', 'Fern', 'FilmFestival', 'Fish', 'FootballMatch', 'Glacier', 'GolfTournament', 'Governor', 'Gymnast', 'Historian', 'IceHockeyLeague', 'Insect', 'Journalist', 'Judge', 'Lighthouse', 'Magazine', 'Mayor', 'Medician', 'MemberOfParliament', 'MilitaryPerson', 'Model', 'Mollusca', 'Monarch', 'Moss', 'Mountain', 'MountainPass', 'MountainRange', 'MusicFestival', 'Musical', 'MythologicalFigure', 'Newspaper', 'Noble', 'OfficeHolder', 'Other', 'Philosopher', 'Photographer', 'PlayboyPlaymate', 'Poem', 'Poet', 'Pope', 'President', 'PrimeMinister', 'PublicTransitSystem', 'Racecourse', 'RadioHost', 'RadioStation', 'Religious', 'Reptile', 'Restaurant', 'Road', 'RoadTunnel', 'RollerCoaster', 'RugbyClub', 'RugbyLeague', 'Saint', 'School', 'ScreenWriter', 'Senator', 'ShoppingMall', 'Skater', 'SoccerLeague', 'SoccerManager', 'SoccerPlayer', 'SoccerTournament', 'SportsTeamMember', 'SumoWrestler', 'TelevisionStation', 'TennisTournament', 'TradeUnion', 'University', 'Village', 'VoiceActor', 'Volcano', 'WrestlingEvent'], id=None)})
# ds = datasets.load_dataset("AmazonScience/MultilingualMultiModalClassification", data_dir="wiki-doc-ar-merged", features=features, verification_mode="no_checks")
```
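The `label` field is an integer index into the class list above. A minimal, dependency-free sketch of mapping it back to a name (the truncated `NAMES` list below is illustrative; with the `datasets` library loaded, `ds["train"].features["label"].int2str(i)` performs the same lookup over the full 111-class list):

```python
# Sketch: mapping integer labels back to class names. NAMES is truncated
# here for brevity but mirrors the class_label ordering in the metadata
# above; ClassLabel.int2str from the datasets library does the same lookup.

NAMES = ["AcademicJournal", "AdultActor", "Album"]  # first 3 of 111 classes

def int2str(label_id, names=NAMES):
    """Return the class name for an integer label."""
    return names[label_id]

print(int2str(2))  # Album
```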
### Licensing Information
#### Wiki
Each image is licensed under its original provider's terms.
Any additional work contributed here is provided under CC-BY-SA-4.0, following the Wikipedia license.
#### MultiEURLEX
We provide MultiEURLEX with the same licensing as the original EU data (CC-BY-4.0):
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
### Citation Information
```
@inproceedings{fujinuma-etal-2023-multi,
title = "A Multi-Modal Multilingual Benchmark for Document Image Classification",
author = "Fujinuma, Yoshinari and
Varia, Siddharth and
Sankaran, Nishant and
Appalaraju, Srikar and
Min, Bonan and
Vyas, Yogarshi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.958",
doi = "10.18653/v1/2023.findings-emnlp.958",
pages = "14361--14376",
abstract = "Document image classification is different from plain-text document classification and consists of classifying a document by understanding the content and structure of documents such as forms, emails, and other such documents. We show that the only existing dataset for this task (Lewis et al., 2006) has several limitations and we introduce two newly curated multilingual datasets WIKI-DOC and MULTIEURLEX-DOC that overcome these limitations. We further undertake a comprehensive study of popular visually-rich document understanding or Document AI models in previously untested setting in document image classification such as 1) multi-label classification, and 2) zero-shot cross-lingual transfer setup. Experimental results show limitations of multilingual Document AI models on cross-lingual transfer across typologically distant languages. Our datasets and findings open the door for future research into improving Document AI models.",
}
``` |
flax-sentence-embeddings/stackexchange_titlebody_best_and_down_voted_answer_jsonl | flax-sentence-embeddings | "2022-07-11T13:13:18Z" | 11,296 | 11 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
pretty_name: stackexchange
size_categories:
- unknown
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
---
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [stackexchange](https://archive.org/details/stackexchange)
- **Repository:** [flax-sentence-embeddings](https://github.com/nreimers/flax-sentence-embeddings)
### Dataset Summary
We automatically extracted question and answer (Q&A) pairs from the [Stack Exchange](https://stackexchange.com/) network. Stack Exchange gathers many Q&A communities across 50 online platforms, including the well-known Stack Overflow and other technical sites. 100 million developers consult Stack Exchange every month. The dataset is a parallel corpus with each question mapped to the top-rated answer. The dataset is split by community, covering a variety of domains from 3D printing and economics to Raspberry Pi and Emacs. An exhaustive list of all communities is available [here](https://stackexchange.com/sites).
### Languages
Stack Exchange content is mainly in English (en).
## Dataset Structure
### Data Instances
Each data sample is presented as follows:
```
{'title_body': "Is there a Stack Exchange icon available? StackAuth /sites route provides all the site's icons except for the one of the Stack Exchange master site.\nCould you please provide it in some way (a static SVG would be good)?",
'upvoted_answer': 'Here it is!\n\nDead link: SVG version here\nNote: the same restrictions on this trademarked icon that apply here, also apply to the icon above.',
'downvoted_answer': 'No, the /sites route is not the right place for that.\n\n/sites enumerates all websites that expose API end-points. StackExchange.com does not expose such an endpoint, so it does not (and will not) appear in the results.'}
```
This particular example corresponds to the [following page](https://stackapps.com/questions/1508/is-there-a-stack-exchange-icon-available)
### Data Fields
The fields present in the dataset contain the following information:
- `title_body`: This is the concatenation of the title and body from the question
- `upvoted_answer`: This is the body from the most upvoted answer
- `downvoted_answer`: This is the body from the most downvoted answer
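Since both answers share the same question, each record can be turned directly into an (anchor, positive, negative) triplet for contrastive training — a minimal sketch (the sample record below is illustrative, not taken from the data):

```python
# Sketch: converting one record of this dataset into an
# (anchor, positive, negative) text triplet for contrastive training.

def to_triplet(record):
    """Map a raw record to an (anchor, positive, negative) triplet."""
    return (
        record["title_body"],        # anchor: question title + body
        record["upvoted_answer"],    # positive: most upvoted answer
        record["downvoted_answer"],  # negative: most downvoted answer
    )

sample = {
    "title_body": "Is there a Stack Exchange icon available? ...",
    "upvoted_answer": "Here it is! ...",
    "downvoted_answer": "No, the /sites route is not the right place for that.",
}

anchor, positive, negative = to_triplet(sample)
print(negative.startswith("No"))  # True
```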
### Data Splits
We provide multiple splits for this dataset, each referring to a given community channel. We detail the number of pairs for each split below:
| | Number of pairs |
| ----- | ------ |
| english | 13,003 |
| academia | 2,465 |
| christianity | 1,502 |
| apple | 6,696 |
| electronics | 4,014 |
| gaming | 7,321 |
| askubuntu | 9,975 |
| ell | 4,438 |
| hermeneutics | 1,719 |
| judaism | 2,216 |
| diy | 2,037 |
| law | 1,297 |
| history | 1,099 |
| islam | 2,037 |
| dba | 2,502 |
| cooking | 2,064 |
| gamedev | 1,598 |
| drupal | 1,714 |
| chemistry | 1,523 |
| android | 2,830 |
| mathoverflow | 1,109 |
| magento | 1,849 |
| buddhism | 770 |
| gis | 1,843 |
| graphicdesign | 1,565 |
| codereview | 666 |
| aviation | 903 |
| bicycles | 984 |
| japanese | 1,124 |
| cs | 936 |
| german | 1,047 |
| interpersonal | 469 |
| biology | 832 |
| bitcoin | 1,068 |
| blender | 1,312 |
| crypto | 595 |
| anime | 802 |
| boardgames | 691 |
| hinduism | 343 |
| french | 632 |
| fitness | 567 |
| economics | 441 |
| chinese | 611 |
| codegolf | 333 |
| linguistics | 442 |
| astronomy | 371 |
| arduino | 595 |
| chess | 402 |
| cstheory | 314 |
| ja | 328 |
| martialarts | 254 |
| mathematica | 262 |
| dsp | 387 |
| ethereum | 479 |
| health | 299 |
| cogsci | 221 |
| earthscience | 229 |
| gardening | 210 |
| datascience | 325 |
| literature | 191 |
| matheducators | 177 |
| lifehacks | 316 |
| engineering | 227 |
| ham | 158 |
| 3dprinting | 109 |
| italian | 181 |
| emacs | 188 |
| homebrew | 176 |
| ai | 130 |
| avp | 152 |
| expatriates | 132 |
| elementaryos | 224 |
| cseducators | 67 |
| hsm | 70 |
| expressionengine | 91 |
| joomla | 124 |
| freelancing | 70 |
| crafts | 72 |
| genealogy | 86 |
| latin | 55 |
| hardwarerecs | 58 |
| devops | 53 |
| coffee | 47 |
| beer | 57 |
| languagelearning | 42 |
| ebooks | 54 |
| bricks | 79 |
| civicrm | 85 |
| bioinformatics | 39 |
| esperanto | 56 |
| computergraphics | 30 |
| conlang | 8 |
| korean | 28 |
| iota | 31 |
| eosio | 44 |
| craftcms | 26 |
| iot | 10 |
| drones | 6 |
| cardano | 7 |
| materials | 1 |
| ru | 6,305 |
| softwareengineering | 4,238 |
| scifi | 5,176 |
| workplace | 4,317 |
| serverfault | 7,969 |
| rpg | 4,212 |
| physics | 8,362 |
| superuser | 17,425 |
| worldbuilding | 2,087 |
| security | 3,069 |
| pt | 3,718 |
| unix | 6,173 |
| meta | 61 |
| politics | 1,468 |
| stats | 2,238 |
| movies | 1,577 |
| photo | 1,432 |
| wordpress | 3,046 |
| music | 1,228 |
| philosophy | 1,184 |
| skeptics | 670 |
| money | 1,905 |
| salesforce | 1,781 |
| parenting | 624 |
| raspberrypi | 1,011 |
| travel | 1,317 |
| mechanics | 842 |
| tex | 1,095 |
| ux | 1,107 |
| sharepoint | 1,691 |
| webapps | 1,906 |
| puzzling | 784 |
| networkengineering | 476 |
| webmasters | 854 |
| sports | 455 |
| rus | 514 |
| space | 405 |
| writers | 407 |
| pets | 322 |
| pm | 241 |
| russian | 353 |
| spanish | 366 |
| sound | 365 |
| quant | 340 |
| sqa | 353 |
| outdoors | 221 |
| softwarerecs | 348 |
| retrocomputing | 135 |
| mythology | 103 |
| portuguese | 144 |
| opensource | 123 |
| scicomp | 127 |
| ukrainian | 87 |
| patents | 137 |
| sustainability | 152 |
| poker | 115 |
| robotics | 110 |
| woodworking | 93 |
| reverseengineering | 97 |
| sitecore | 122 |
| tor | 137 |
| vi | 95 |
| windowsphone | 153 |
| vegetarianism | 35 |
| moderators | 23 |
| quantumcomputing | 46 |
| musicfans | 78 |
| tridion | 68 |
| opendata | 45 |
| tezos | 11 |
| stellar | 3 |
| or | 13 |
| monero | 26 |
| stackapps | 15 |
| total | 210,748 |
## Dataset Creation
### Curation Rationale
We primarily designed this dataset for sentence embedding training. Indeed, sentence embeddings may be trained using a contrastive learning setup, in which the model learns to associate each sentence with its corresponding pair out of multiple propositions. Such models require many examples to be effective, and thus dataset creation can be tedious. Community networks such as Stack Exchange allow us to build many examples semi-automatically.
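As a toy illustration of that contrastive setup (not actual training code), a model scores an anchor against several candidates and should rank the true pair highest; here a trivial token-overlap score stands in for a learned similarity function:

```python
# Toy sketch of the contrastive objective described above: pick the
# candidate whose similarity to the anchor is highest. A real setup would
# use learned sentence embeddings; token overlap is only a stand-in.

def overlap_score(a, b):
    """Jaccard overlap between the token sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))

def best_candidate(anchor, candidates):
    """Index of the candidate most similar to the anchor."""
    return max(range(len(candidates)),
               key=lambda i: overlap_score(anchor, candidates[i]))

anchor = "how do I install python packages"
candidates = ["use pip to install python packages",
              "bake the cake at 180 degrees"]
print(best_candidate(anchor, candidates))  # 0
```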
### Source Data
The source data are dumps from [Stack Exchange](https://archive.org/details/stackexchange)
#### Initial Data Collection and Normalization
We collected the data from the math community.
We filtered out questions whose title or body is shorter than 20 characters, as well as questions whose body exceeds 4096 characters.
When extracting the most upvoted answer, we kept only pairs for which there is a gap of at least 100 votes between the most upvoted and most downvoted answers.
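The filtering rules above can be sketched as two small helpers (hypothetical function names, written here only to illustrate the stated thresholds):

```python
def keep_question(title: str, body: str) -> bool:
    # Drop questions whose title or body is shorter than 20 characters,
    # or whose body exceeds 4096 characters.
    return len(title) >= 20 and 20 <= len(body) <= 4096

def pick_best_answer(vote_counts):
    # Keep the pair only when the gap between the most and least upvoted
    # answers is at least 100 votes; return the winning index, else None.
    if not vote_counts or max(vote_counts) - min(vote_counts) < 100:
        return None
    return max(range(len(vote_counts)), key=vote_counts.__getitem__)
```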
#### Who are the source language producers?
Questions and answers are written by members of the Stack Exchange community.
## Additional Information
### Licensing Information
Please see the license information at: https://archive.org/details/stackexchange
### Citation Information
```
@misc{StackExchangeDataset,
author = {Flax Sentence Embeddings Team},
title = {Stack Exchange question pairs},
year = {2021},
howpublished = {https://huggingface.co/datasets/flax-sentence-embeddings/},
}
```
### Contributions
Thanks to the Flax Sentence Embeddings team for adding this dataset. |
EleutherAI/proof-pile-2 | EleutherAI | "2023-10-25T06:16:04Z" | 11,279 | 197 | [
"task_categories:text-generation",
"language:en",
"size_categories:10B<n<100B",
"arxiv:2310.10631",
"arxiv:2310.06786",
"region:us",
"math"
] | [
"text-generation"
] | "2023-10-12T00:11:33Z" | ---
task_categories:
- text-generation
language:
- en
tags:
- math
size_categories:
- 10B<n<100B
---
<img src="proofpile_logo.jpg" width="500">
[ArXiv](http://arxiv.org/abs/2310.10631) | [Models](https://huggingface.co/EleutherAI/llemma_34b) | [Data](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | [Code](https://github.com/EleutherAI/math-lm) | [Blog](https://blog.eleuther.ai/llemma/) | [Sample Explorer](https://llemma-demo.github.io/)
[Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Hailey Schoelkopf](https://github.com/haileyschoelkopf), [Keiran Paster](https://keirp.com), [Marco Dos Santos](https://github.com/dsantosmarco), [Stephen McAleer](https://www.andrew.cmu.edu/user/smcaleer/), [Albert Q. Jiang](https://albertqjiang.github.io/), [Jia Deng](https://www.cs.princeton.edu/~jiadeng/), [Stella Biderman](https://www.stellabiderman.com/), [Sean Welleck](https://wellecks.com/)
The **Proof-Pile-2** is a 55 billion token dataset of mathematical and scientific documents. This dataset was created in order to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) models. It consists of three subsets:
- `arxiv` (29B tokens): the ArXiv subset of [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
- `open-web-math` (15B tokens): The [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) dataset, which contains much of the high-quality mathematical text from the internet.
- `algebraic-stack` (11B tokens): A new dataset of mathematical code, including numerical computing, computer algebra, and formal mathematics.
You can download the dataset as follows:
```python
from datasets import load_dataset
ds = load_dataset("EleutherAI/proof-pile-2")
# To load only a specific subset, pass it as an argument, e.g.
ds_arxiv = load_dataset("EleutherAI/proof-pile-2", "arxiv")
```
### Schema
Each dataset row has the following structure
```python
{
"text": ..., # document text
"meta": ..., # JSON string of metadata, schema specific to data source
}
```
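Since `meta` is stored as a JSON string rather than a nested object, it needs to be parsed before use. A small sketch (the `meta` contents below are hypothetical; the actual schema varies by data source):

```python
import json

row = {
    "text": "Proof of the lemma ...",
    "meta": '{"subset": "arxiv", "file": "math0001001"}',  # hypothetical metadata
}

# The schema stores metadata as a JSON string; decode it into a dict.
meta = json.loads(row["meta"])
```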
### Dataset Contents
For detailed documentation of the ArXiv and web subsets, refer to [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) and [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math). The following table enumerates the contents of the AlgebraicStack by programming language. The AlgebraicStack is filtered to only include documents that contain mathematics, as judged by hand-crafted, language-specific heuristics.
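The actual heuristics are hand-crafted per language and not reproduced here; as a purely illustrative sketch of the idea, a keyword-based check for mathematical content might look like the following (keyword list and threshold are assumptions, not the real filter):

```python
# Hypothetical set of math-related terms; the real filters are language-specific.
MATH_HINTS = {"theorem", "lemma", "proof", "integral", "matrix", "eigenvalue"}

def looks_mathematical(document: str, min_hits: int = 2) -> bool:
    # Count how many distinct math-related terms appear in the document
    # and keep it only if the count reaches the threshold.
    text = document.lower()
    return sum(hint in text for hint in MATH_HINTS) >= min_hits
```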
| Language | AlgebraicStack tokens |
|-----------|-----------------------|
| Agda | 35.2 M |
| C | 25.1 M |
| C++ | 954.1 M |
| Coq | 281.9 M |
| Fortran | 724.9 M |
| GAP | 3.6 M |
| Haskell | 9.1 M |
| Idris | 10.9 M |
| Isabelle | 1,089.7 M |
| Julia | 531.0 M |
| Jupyter | 199.1 M |
| Lean | 285.6 M |
| Maple | 2.0 M |
| Matlab | 65.8 M |
| Python | 6,098.8 M |
| R | 71.3 M |
| Tex | 567.7 M |
| **Total** | **10,955.7 M** |
### License
We do not alter the license of any of the underlying data.
### Version History
**v1.1.0**: Contains an updated version of OpenWebMath, precisely the one available at [open-web-math/open-web-math](https://huggingface.co/datasets/open-web-math/open-web-math). This version of OpenWebMath has slightly improved filtering, for example, removal of very short documents.
**v1.0.0**: The data used to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b). Uses a development version of OpenWebMath.
### Citation
For the entire Proof-Pile-2, cite
```
@misc{azerbayev2023llemma,
title={Llemma: An Open Language Model For Mathematics},
author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Q. Jiang and Jia Deng and Stella Biderman and Sean Welleck},
year={2023},
eprint={2310.10631},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
For the ArXiv subset, cite
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
For OpenWebMath, cite
```
@misc{paster2023openwebmath,
title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
year={2023},
eprint={2310.06786},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
open-llm-leaderboard-old/details_TencentARC__LLaMA-Pro-8B-Instruct | open-llm-leaderboard-old | "2024-01-06T13:14:36Z" | 11,250 | 0 | [
"region:us"
] | null | "2024-01-06T05:38:44Z" | ---
pretty_name: Evaluation run of TencentARC/LLaMA-Pro-8B-Instruct
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TencentARC/LLaMA-Pro-8B-Instruct](https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 8 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TencentARC__LLaMA-Pro-8B-Instruct\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-06T13:12:05.796061](https://huggingface.co/datasets/open-llm-leaderboard/details_TencentARC__LLaMA-Pro-8B-Instruct/blob/main/results_2024-01-06T13-12-05.796061.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5281709550040744,\n\
\ \"acc_stderr\": 0.034190129304935035,\n \"acc_norm\": 0.5299752077852407,\n\
\ \"acc_norm_stderr\": 0.03489132244520177,\n \"mc1\": 0.3353733170134639,\n\
\ \"mc1_stderr\": 0.01652753403966899,\n \"mc2\": 0.4942677553605431,\n\
\ \"mc2_stderr\": 0.015656020272217592\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.014611390804670088,\n \
\ \"acc_norm\": 0.5298634812286689,\n \"acc_norm_stderr\": 0.014585305840007105\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5853415654252141,\n\
\ \"acc_stderr\": 0.0049165612135912825,\n \"acc_norm\": 0.7697669786895041,\n\
\ \"acc_norm_stderr\": 0.004201215520808244\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4666666666666667,\n\
\ \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.4666666666666667,\n\
\ \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5197368421052632,\n \"acc_stderr\": 0.040657710025626036,\n\
\ \"acc_norm\": 0.5197368421052632,\n \"acc_norm_stderr\": 0.040657710025626036\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.48,\n\
\ \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5433962264150943,\n \"acc_stderr\": 0.03065674869673943,\n\
\ \"acc_norm\": 0.5433962264150943,\n \"acc_norm_stderr\": 0.03065674869673943\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04181210050035455,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04181210050035455\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n\
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4161849710982659,\n\
\ \"acc_stderr\": 0.03758517775404948,\n \"acc_norm\": 0.4161849710982659,\n\
\ \"acc_norm_stderr\": 0.03758517775404948\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2549019607843137,\n \"acc_stderr\": 0.04336432707993177,\n\
\ \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.04336432707993177\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n\
\ \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4553191489361702,\n \"acc_stderr\": 0.03255525359340354,\n\
\ \"acc_norm\": 0.4553191489361702,\n \"acc_norm_stderr\": 0.03255525359340354\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2807017543859649,\n\
\ \"acc_stderr\": 0.042270544512322004,\n \"acc_norm\": 0.2807017543859649,\n\
\ \"acc_norm_stderr\": 0.042270544512322004\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.47586206896551725,\n \"acc_stderr\": 0.041618085035015295,\n\
\ \"acc_norm\": 0.47586206896551725,\n \"acc_norm_stderr\": 0.041618085035015295\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3544973544973545,\n \"acc_stderr\": 0.024636830602841997,\n \"\
acc_norm\": 0.3544973544973545,\n \"acc_norm_stderr\": 0.024636830602841997\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n\
\ \"acc_stderr\": 0.043435254289490965,\n \"acc_norm\": 0.38095238095238093,\n\
\ \"acc_norm_stderr\": 0.043435254289490965\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5903225806451613,\n\
\ \"acc_stderr\": 0.027976054915347357,\n \"acc_norm\": 0.5903225806451613,\n\
\ \"acc_norm_stderr\": 0.027976054915347357\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.3891625615763547,\n \"acc_stderr\": 0.034304624161038716,\n\
\ \"acc_norm\": 0.3891625615763547,\n \"acc_norm_stderr\": 0.034304624161038716\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\"\
: 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.696969696969697,\n \"acc_stderr\": 0.035886248000917075,\n\
\ \"acc_norm\": 0.696969696969697,\n \"acc_norm_stderr\": 0.035886248000917075\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6212121212121212,\n \"acc_stderr\": 0.03456088731993747,\n \"\
acc_norm\": 0.6212121212121212,\n \"acc_norm_stderr\": 0.03456088731993747\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.772020725388601,\n \"acc_stderr\": 0.030276909945178263,\n\
\ \"acc_norm\": 0.772020725388601,\n \"acc_norm_stderr\": 0.030276909945178263\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.48205128205128206,\n \"acc_stderr\": 0.025334667080954942,\n\
\ \"acc_norm\": 0.48205128205128206,\n \"acc_norm_stderr\": 0.025334667080954942\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.28888888888888886,\n \"acc_stderr\": 0.027634907264178544,\n \
\ \"acc_norm\": 0.28888888888888886,\n \"acc_norm_stderr\": 0.027634907264178544\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5042016806722689,\n \"acc_stderr\": 0.0324773433444811,\n \
\ \"acc_norm\": 0.5042016806722689,\n \"acc_norm_stderr\": 0.0324773433444811\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2913907284768212,\n \"acc_stderr\": 0.037101857261199946,\n \"\
acc_norm\": 0.2913907284768212,\n \"acc_norm_stderr\": 0.037101857261199946\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7192660550458716,\n \"acc_stderr\": 0.019266055045871616,\n \"\
acc_norm\": 0.7192660550458716,\n \"acc_norm_stderr\": 0.019266055045871616\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.42592592592592593,\n \"acc_stderr\": 0.03372343271653063,\n \"\
acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.03372343271653063\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7205882352941176,\n \"acc_stderr\": 0.031493281045079556,\n \"\
acc_norm\": 0.7205882352941176,\n \"acc_norm_stderr\": 0.031493281045079556\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7172995780590717,\n \"acc_stderr\": 0.029312814153955924,\n \
\ \"acc_norm\": 0.7172995780590717,\n \"acc_norm_stderr\": 0.029312814153955924\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5739910313901345,\n\
\ \"acc_stderr\": 0.033188332862172806,\n \"acc_norm\": 0.5739910313901345,\n\
\ \"acc_norm_stderr\": 0.033188332862172806\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.648854961832061,\n \"acc_stderr\": 0.0418644516301375,\n\
\ \"acc_norm\": 0.648854961832061,\n \"acc_norm_stderr\": 0.0418644516301375\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6115702479338843,\n \"acc_stderr\": 0.044492703500683836,\n \"\
acc_norm\": 0.6115702479338843,\n \"acc_norm_stderr\": 0.044492703500683836\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5925925925925926,\n\
\ \"acc_stderr\": 0.04750077341199984,\n \"acc_norm\": 0.5925925925925926,\n\
\ \"acc_norm_stderr\": 0.04750077341199984\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6257668711656442,\n \"acc_stderr\": 0.03802068102899615,\n\
\ \"acc_norm\": 0.6257668711656442,\n \"acc_norm_stderr\": 0.03802068102899615\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n\
\ \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n\
\ \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6990291262135923,\n \"acc_stderr\": 0.04541609446503948,\n\
\ \"acc_norm\": 0.6990291262135923,\n \"acc_norm_stderr\": 0.04541609446503948\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7863247863247863,\n\
\ \"acc_stderr\": 0.02685345037700916,\n \"acc_norm\": 0.7863247863247863,\n\
\ \"acc_norm_stderr\": 0.02685345037700916\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695238,\n \
\ \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695238\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7113665389527458,\n\
\ \"acc_stderr\": 0.016203792703197797,\n \"acc_norm\": 0.7113665389527458,\n\
\ \"acc_norm_stderr\": 0.016203792703197797\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5867052023121387,\n \"acc_stderr\": 0.02651126136940924,\n\
\ \"acc_norm\": 0.5867052023121387,\n \"acc_norm_stderr\": 0.02651126136940924\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.32625698324022345,\n\
\ \"acc_stderr\": 0.01568044151888918,\n \"acc_norm\": 0.32625698324022345,\n\
\ \"acc_norm_stderr\": 0.01568044151888918\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.565359477124183,\n \"acc_stderr\": 0.028384256704883037,\n\
\ \"acc_norm\": 0.565359477124183,\n \"acc_norm_stderr\": 0.028384256704883037\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5852090032154341,\n\
\ \"acc_stderr\": 0.027982680459759563,\n \"acc_norm\": 0.5852090032154341,\n\
\ \"acc_norm_stderr\": 0.027982680459759563\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.027648477877413327,\n\
\ \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.027648477877413327\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.37943262411347517,\n \"acc_stderr\": 0.02894733885161411,\n \
\ \"acc_norm\": 0.37943262411347517,\n \"acc_norm_stderr\": 0.02894733885161411\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3754889178617992,\n\
\ \"acc_stderr\": 0.012367945396728208,\n \"acc_norm\": 0.3754889178617992,\n\
\ \"acc_norm_stderr\": 0.012367945396728208\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.45588235294117646,\n \"acc_stderr\": 0.030254372573976687,\n\
\ \"acc_norm\": 0.45588235294117646,\n \"acc_norm_stderr\": 0.030254372573976687\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.49836601307189543,\n \"acc_stderr\": 0.020227726838150117,\n \
\ \"acc_norm\": 0.49836601307189543,\n \"acc_norm_stderr\": 0.020227726838150117\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n\
\ \"acc_stderr\": 0.046075820907199756,\n \"acc_norm\": 0.6363636363636364,\n\
\ \"acc_norm_stderr\": 0.046075820907199756\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6448979591836734,\n \"acc_stderr\": 0.030635655150387638,\n\
\ \"acc_norm\": 0.6448979591836734,\n \"acc_norm_stderr\": 0.030635655150387638\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6915422885572139,\n\
\ \"acc_stderr\": 0.032658195885126966,\n \"acc_norm\": 0.6915422885572139,\n\
\ \"acc_norm_stderr\": 0.032658195885126966\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.463855421686747,\n\
\ \"acc_stderr\": 0.03882310850890594,\n \"acc_norm\": 0.463855421686747,\n\
\ \"acc_norm_stderr\": 0.03882310850890594\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7426900584795322,\n \"acc_stderr\": 0.03352799844161865,\n\
\ \"acc_norm\": 0.7426900584795322,\n \"acc_norm_stderr\": 0.03352799844161865\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3353733170134639,\n\
\ \"mc1_stderr\": 0.01652753403966899,\n \"mc2\": 0.4942677553605431,\n\
\ \"mc2_stderr\": 0.015656020272217592\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7221783741120757,\n \"acc_stderr\": 0.012588918183871593\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.44200151630022744,\n \
\ \"acc_stderr\": 0.013679514492814581\n }\n}\n```"
repo_url: https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: [email protected]
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|arc:challenge|25_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|arc:challenge|25_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|arc:challenge|25_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|arc:challenge|25_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|arc:challenge|25_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|arc:challenge|25_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|arc:challenge|25_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|arc:challenge|25_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|gsm8k|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|gsm8k|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|gsm8k|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|gsm8k|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|gsm8k|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|gsm8k|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|gsm8k|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|gsm8k|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hellaswag|10_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hellaswag|10_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hellaswag|10_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hellaswag|10_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hellaswag|10_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hellaswag|10_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hellaswag|10_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hellaswag|10_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T05-36-22.722674.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T06-15-48.429229.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T06-43-15.789213.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T09-13-09.739975.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T09-16-27.017995.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T11-33-07.175402.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T13-05-18.668611.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T13-12-05.796061.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T13-12-05.796061.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- '**/details_harness|winogrande|5_2024-01-06T05-36-22.722674.parquet'
- split: 2024_01_06T06_15_48.429229
path:
- '**/details_harness|winogrande|5_2024-01-06T06-15-48.429229.parquet'
- split: 2024_01_06T06_43_15.789213
path:
- '**/details_harness|winogrande|5_2024-01-06T06-43-15.789213.parquet'
- split: 2024_01_06T09_13_09.739975
path:
- '**/details_harness|winogrande|5_2024-01-06T09-13-09.739975.parquet'
- split: 2024_01_06T09_16_27.017995
path:
- '**/details_harness|winogrande|5_2024-01-06T09-16-27.017995.parquet'
- split: 2024_01_06T11_33_07.175402
path:
- '**/details_harness|winogrande|5_2024-01-06T11-33-07.175402.parquet'
- split: 2024_01_06T13_05_18.668611
path:
- '**/details_harness|winogrande|5_2024-01-06T13-05-18.668611.parquet'
- split: 2024_01_06T13_12_05.796061
path:
- '**/details_harness|winogrande|5_2024-01-06T13-12-05.796061.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-06T13-12-05.796061.parquet'
- config_name: results
data_files:
- split: 2024_01_06T05_36_22.722674
path:
- results_2024-01-06T05-36-22.722674.parquet
- split: 2024_01_06T06_15_48.429229
path:
- results_2024-01-06T06-15-48.429229.parquet
- split: 2024_01_06T06_43_15.789213
path:
- results_2024-01-06T06-43-15.789213.parquet
- split: 2024_01_06T09_13_09.739975
path:
- results_2024-01-06T09-13-09.739975.parquet
- split: 2024_01_06T09_16_27.017995
path:
- results_2024-01-06T09-16-27.017995.parquet
- split: 2024_01_06T11_33_07.175402
path:
- results_2024-01-06T11-33-07.175402.parquet
- split: 2024_01_06T13_05_18.668611
path:
- results_2024-01-06T13-05-18.668611.parquet
- split: 2024_01_06T13_12_05.796061
path:
- results_2024-01-06T13-12-05.796061.parquet
- split: latest
path:
- results_2024-01-06T13-12-05.796061.parquet
---
# Dataset Card for Evaluation run of TencentARC/LLaMA-Pro-8B-Instruct
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [TencentARC/LLaMA-Pro-8B-Instruct](https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TencentARC__LLaMA-Pro-8B-Instruct",
"harness_winogrande_5",
	split="latest")
```
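Each timestamped split name in the configurations above is derived from the run timestamp by replacing the punctuation that is not allowed in split names. A small helper (hypothetical, but mirroring the split names listed in the YAML metadata) sketches the mapping:

```python
def timestamp_to_split(ts: str) -> str:
    """Convert a run timestamp such as '2024-01-06T13:12:05.796061'
    into the corresponding split name, e.g. '2024_01_06T13_12_05.796061'.

    Dashes in the date and colons in the time become underscores;
    the fractional-seconds period is kept.
    """
    date, time = ts.split("T")
    return date.replace("-", "_") + "T" + time.replace(":", "_")

print(timestamp_to_split("2024-01-06T13:12:05.796061"))
```

This is only a convenience for matching run timestamps (as they appear in file names and URLs) against split names; the `latest` split is always available as a stable alias.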
## Latest results
These are the [latest results from run 2024-01-06T13:12:05.796061](https://huggingface.co/datasets/open-llm-leaderboard/details_TencentARC__LLaMA-Pro-8B-Instruct/blob/main/results_2024-01-06T13-12-05.796061.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each of them in the results and the "latest" split of the corresponding configuration):
```python
{
"all": {
"acc": 0.5281709550040744,
"acc_stderr": 0.034190129304935035,
"acc_norm": 0.5299752077852407,
"acc_norm_stderr": 0.03489132244520177,
"mc1": 0.3353733170134639,
"mc1_stderr": 0.01652753403966899,
"mc2": 0.4942677553605431,
"mc2_stderr": 0.015656020272217592
},
"harness|arc:challenge|25": {
"acc": 0.5,
"acc_stderr": 0.014611390804670088,
"acc_norm": 0.5298634812286689,
"acc_norm_stderr": 0.014585305840007105
},
"harness|hellaswag|10": {
"acc": 0.5853415654252141,
"acc_stderr": 0.0049165612135912825,
"acc_norm": 0.7697669786895041,
"acc_norm_stderr": 0.004201215520808244
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4666666666666667,
"acc_stderr": 0.043097329010363554,
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.043097329010363554
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5197368421052632,
"acc_stderr": 0.040657710025626036,
"acc_norm": 0.5197368421052632,
"acc_norm_stderr": 0.040657710025626036
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5433962264150943,
"acc_stderr": 0.03065674869673943,
"acc_norm": 0.5433962264150943,
"acc_norm_stderr": 0.03065674869673943
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5,
"acc_stderr": 0.04181210050035455,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04181210050035455
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4161849710982659,
"acc_stderr": 0.03758517775404948,
"acc_norm": 0.4161849710982659,
"acc_norm_stderr": 0.03758517775404948
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.04336432707993177,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.04336432707993177
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4553191489361702,
"acc_stderr": 0.03255525359340354,
"acc_norm": 0.4553191489361702,
"acc_norm_stderr": 0.03255525359340354
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2807017543859649,
"acc_stderr": 0.042270544512322004,
"acc_norm": 0.2807017543859649,
"acc_norm_stderr": 0.042270544512322004
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.47586206896551725,
"acc_stderr": 0.041618085035015295,
"acc_norm": 0.47586206896551725,
"acc_norm_stderr": 0.041618085035015295
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3544973544973545,
"acc_stderr": 0.024636830602841997,
"acc_norm": 0.3544973544973545,
"acc_norm_stderr": 0.024636830602841997
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.043435254289490965,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.043435254289490965
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5903225806451613,
"acc_stderr": 0.027976054915347357,
"acc_norm": 0.5903225806451613,
"acc_norm_stderr": 0.027976054915347357
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3891625615763547,
"acc_stderr": 0.034304624161038716,
"acc_norm": 0.3891625615763547,
"acc_norm_stderr": 0.034304624161038716
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.696969696969697,
"acc_stderr": 0.035886248000917075,
"acc_norm": 0.696969696969697,
"acc_norm_stderr": 0.035886248000917075
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6212121212121212,
"acc_stderr": 0.03456088731993747,
"acc_norm": 0.6212121212121212,
"acc_norm_stderr": 0.03456088731993747
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.772020725388601,
"acc_stderr": 0.030276909945178263,
"acc_norm": 0.772020725388601,
"acc_norm_stderr": 0.030276909945178263
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.48205128205128206,
"acc_stderr": 0.025334667080954942,
"acc_norm": 0.48205128205128206,
"acc_norm_stderr": 0.025334667080954942
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.28888888888888886,
"acc_stderr": 0.027634907264178544,
"acc_norm": 0.28888888888888886,
"acc_norm_stderr": 0.027634907264178544
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5042016806722689,
"acc_stderr": 0.0324773433444811,
"acc_norm": 0.5042016806722689,
"acc_norm_stderr": 0.0324773433444811
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2913907284768212,
"acc_stderr": 0.037101857261199946,
"acc_norm": 0.2913907284768212,
"acc_norm_stderr": 0.037101857261199946
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7192660550458716,
"acc_stderr": 0.019266055045871616,
"acc_norm": 0.7192660550458716,
"acc_norm_stderr": 0.019266055045871616
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.03372343271653063,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.03372343271653063
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7205882352941176,
"acc_stderr": 0.031493281045079556,
"acc_norm": 0.7205882352941176,
"acc_norm_stderr": 0.031493281045079556
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7172995780590717,
"acc_stderr": 0.029312814153955924,
"acc_norm": 0.7172995780590717,
"acc_norm_stderr": 0.029312814153955924
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5739910313901345,
"acc_stderr": 0.033188332862172806,
"acc_norm": 0.5739910313901345,
"acc_norm_stderr": 0.033188332862172806
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.648854961832061,
"acc_stderr": 0.0418644516301375,
"acc_norm": 0.648854961832061,
"acc_norm_stderr": 0.0418644516301375
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6115702479338843,
"acc_stderr": 0.044492703500683836,
"acc_norm": 0.6115702479338843,
"acc_norm_stderr": 0.044492703500683836
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.04750077341199984,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.04750077341199984
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6257668711656442,
"acc_stderr": 0.03802068102899615,
"acc_norm": 0.6257668711656442,
"acc_norm_stderr": 0.03802068102899615
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.6990291262135923,
"acc_stderr": 0.04541609446503948,
"acc_norm": 0.6990291262135923,
"acc_norm_stderr": 0.04541609446503948
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7863247863247863,
"acc_stderr": 0.02685345037700916,
"acc_norm": 0.7863247863247863,
"acc_norm_stderr": 0.02685345037700916
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695238,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695238
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7113665389527458,
"acc_stderr": 0.016203792703197797,
"acc_norm": 0.7113665389527458,
"acc_norm_stderr": 0.016203792703197797
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5867052023121387,
"acc_stderr": 0.02651126136940924,
"acc_norm": 0.5867052023121387,
"acc_norm_stderr": 0.02651126136940924
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.32625698324022345,
"acc_stderr": 0.01568044151888918,
"acc_norm": 0.32625698324022345,
"acc_norm_stderr": 0.01568044151888918
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.565359477124183,
"acc_stderr": 0.028384256704883037,
"acc_norm": 0.565359477124183,
"acc_norm_stderr": 0.028384256704883037
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5852090032154341,
"acc_stderr": 0.027982680459759563,
"acc_norm": 0.5852090032154341,
"acc_norm_stderr": 0.027982680459759563
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.027648477877413327,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.027648477877413327
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.37943262411347517,
"acc_stderr": 0.02894733885161411,
"acc_norm": 0.37943262411347517,
"acc_norm_stderr": 0.02894733885161411
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3754889178617992,
"acc_stderr": 0.012367945396728208,
"acc_norm": 0.3754889178617992,
"acc_norm_stderr": 0.012367945396728208
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.45588235294117646,
"acc_stderr": 0.030254372573976687,
"acc_norm": 0.45588235294117646,
"acc_norm_stderr": 0.030254372573976687
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.49836601307189543,
"acc_stderr": 0.020227726838150117,
"acc_norm": 0.49836601307189543,
"acc_norm_stderr": 0.020227726838150117
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.046075820907199756,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.046075820907199756
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6448979591836734,
"acc_stderr": 0.030635655150387638,
"acc_norm": 0.6448979591836734,
"acc_norm_stderr": 0.030635655150387638
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6915422885572139,
"acc_stderr": 0.032658195885126966,
"acc_norm": 0.6915422885572139,
"acc_norm_stderr": 0.032658195885126966
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-virology|5": {
"acc": 0.463855421686747,
"acc_stderr": 0.03882310850890594,
"acc_norm": 0.463855421686747,
"acc_norm_stderr": 0.03882310850890594
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7426900584795322,
"acc_stderr": 0.03352799844161865,
"acc_norm": 0.7426900584795322,
"acc_norm_stderr": 0.03352799844161865
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3353733170134639,
"mc1_stderr": 0.01652753403966899,
"mc2": 0.4942677553605431,
"mc2_stderr": 0.015656020272217592
},
"harness|winogrande|5": {
"acc": 0.7221783741120757,
"acc_stderr": 0.012588918183871593
},
"harness|gsm8k|5": {
"acc": 0.44200151630022744,
"acc_stderr": 0.013679514492814581
}
}
```
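Each task entry in the JSON above pairs a metric with its standard error (`acc`/`acc_stderr`, `acc_norm`/`acc_norm_stderr`, and so on). A minimal sketch of turning one such block into readable percentages, using the `"all"` aggregate values shown above:

```python
# Aggregate metrics copied from the "all" block of the results JSON above.
latest_all = {
    "acc": 0.5281709550040744,
    "acc_stderr": 0.034190129304935035,
    "acc_norm": 0.5299752077852407,
    "acc_norm_stderr": 0.03489132244520177,
}

def summarize(metrics: dict) -> dict:
    """Pair each metric with its standard error, formatted as percentages."""
    out = {}
    for key, value in metrics.items():
        if key.endswith("_stderr"):
            continue  # stderr values are folded into their parent metric
        stderr = metrics.get(f"{key}_stderr")
        if stderr is not None:
            out[key] = f"{value * 100:.2f} ± {stderr * 100:.2f}"
        else:
            out[key] = f"{value * 100:.2f}"
    return out

print(summarize(latest_all))
# {'acc': '52.82 ± 3.42', 'acc_norm': '53.00 ± 3.49'}
```

The same helper applies unchanged to any per-task block (e.g. `harness|winogrande|5`), since they all follow the metric/`_stderr` naming convention.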
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
alvations/c4p0-x1-en-ja | alvations | "2024-03-24T03:55:23Z" | 11,250 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-23T09:54:37Z" | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 49764
num_examples: 42
download_size: 37636
dataset_size: 49764
configs:
- config_name: default
data_files:
- split: train
path: 66034f82c5c65ae4/train-*
---
|
lighteval/mmlu | lighteval | "2023-06-09T16:36:19Z" | 11,195 | 39 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2009.03300",
"arxiv:2005.00700",
"arxiv:2005.14165",
"arxiv:2008.02275",
"region:us"
] | [
"question-answering"
] | "2023-05-16T09:39:28Z" | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mmlu
pretty_name: Measuring Massive Multitask Language Understanding
language_bcp47:
- en-US
dataset_info:
- config_name: abstract_algebra
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 19328
num_examples: 100
- name: validation
num_bytes: 2024
num_examples: 11
- name: dev
num_bytes: 830
num_examples: 5
download_size: 166184960
dataset_size: 160623559
- config_name: anatomy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33121
num_examples: 135
- name: validation
num_bytes: 3140
num_examples: 14
- name: dev
num_bytes: 967
num_examples: 5
download_size: 166184960
dataset_size: 160638605
- config_name: astronomy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46771
num_examples: 152
- name: validation
num_bytes: 5027
num_examples: 16
- name: dev
num_bytes: 2076
num_examples: 5
download_size: 166184960
dataset_size: 160655251
- config_name: business_ethics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33252
num_examples: 100
- name: validation
num_bytes: 3038
num_examples: 11
- name: dev
num_bytes: 2190
num_examples: 5
download_size: 166184960
dataset_size: 160639857
- config_name: clinical_knowledge
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 62754
num_examples: 265
- name: validation
num_bytes: 6664
num_examples: 29
- name: dev
num_bytes: 1210
num_examples: 5
download_size: 166184960
dataset_size: 160672005
- config_name: college_biology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 48797
num_examples: 144
- name: validation
num_bytes: 4819
num_examples: 16
- name: dev
num_bytes: 1532
num_examples: 5
download_size: 166184960
dataset_size: 160656525
- config_name: college_chemistry
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 24708
num_examples: 100
- name: validation
num_bytes: 2328
num_examples: 8
- name: dev
num_bytes: 1331
num_examples: 5
download_size: 166184960
dataset_size: 160629744
- config_name: college_computer_science
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 42641
num_examples: 100
- name: validation
num_bytes: 4663
num_examples: 11
- name: dev
num_bytes: 2765
num_examples: 5
download_size: 166184960
dataset_size: 160651446
- config_name: college_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 24711
num_examples: 100
- name: validation
num_bytes: 2668
num_examples: 11
- name: dev
num_bytes: 1493
num_examples: 5
download_size: 166184960
dataset_size: 160630249
- config_name: college_medicine
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 82397
num_examples: 173
- name: validation
num_bytes: 7909
num_examples: 22
- name: dev
num_bytes: 1670
num_examples: 5
download_size: 166184960
dataset_size: 160693353
- config_name: college_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 30181
num_examples: 102
- name: validation
num_bytes: 3490
num_examples: 11
- name: dev
num_bytes: 1412
num_examples: 5
download_size: 166184960
dataset_size: 160636460
- config_name: computer_security
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 27124
num_examples: 100
- name: validation
num_bytes: 4549
num_examples: 11
- name: dev
num_bytes: 1101
num_examples: 5
download_size: 166184960
dataset_size: 160634151
- config_name: conceptual_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 40709
num_examples: 235
- name: validation
num_bytes: 4474
num_examples: 26
- name: dev
num_bytes: 934
num_examples: 5
download_size: 166184960
dataset_size: 160647494
- config_name: econometrics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46547
num_examples: 114
- name: validation
num_bytes: 4967
num_examples: 12
- name: dev
num_bytes: 1644
num_examples: 5
download_size: 166184960
dataset_size: 160654535
- config_name: electrical_engineering
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 25142
num_examples: 145
- name: validation
num_bytes: 2903
num_examples: 16
- name: dev
num_bytes: 972
num_examples: 5
download_size: 166184960
dataset_size: 160630394
- config_name: elementary_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 70108
num_examples: 378
- name: validation
num_bytes: 8988
num_examples: 41
- name: dev
num_bytes: 1440
num_examples: 5
download_size: 166184960
dataset_size: 160681913
- config_name: formal_logic
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 49785
num_examples: 126
- name: validation
num_bytes: 6252
num_examples: 14
- name: dev
num_bytes: 1757
num_examples: 5
download_size: 166184960
dataset_size: 160659171
- config_name: global_facts
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 18403
num_examples: 100
- name: validation
num_bytes: 1865
num_examples: 10
- name: dev
num_bytes: 1229
num_examples: 5
download_size: 166184960
dataset_size: 160622874
- config_name: high_school_biology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 109732
num_examples: 310
- name: validation
num_bytes: 11022
num_examples: 32
- name: dev
num_bytes: 1673
num_examples: 5
download_size: 166184960
dataset_size: 160723804
- config_name: high_school_chemistry
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 58464
num_examples: 203
- name: validation
num_bytes: 7092
num_examples: 22
- name: dev
num_bytes: 1220
num_examples: 5
download_size: 166184960
dataset_size: 160668153
- config_name: high_school_computer_science
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 44476
num_examples: 100
- name: validation
num_bytes: 3343
num_examples: 9
- name: dev
num_bytes: 2918
num_examples: 5
download_size: 166184960
dataset_size: 160652114
- config_name: high_school_european_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 270300
num_examples: 165
- name: validation
num_bytes: 29632
num_examples: 18
- name: dev
num_bytes: 11564
num_examples: 5
download_size: 166184960
dataset_size: 160912873
- config_name: high_school_geography
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 42034
num_examples: 198
- name: validation
num_bytes: 4332
num_examples: 22
- name: dev
num_bytes: 1403
num_examples: 5
download_size: 166184960
dataset_size: 160649146
- config_name: high_school_government_and_politics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 66074
num_examples: 193
- name: validation
num_bytes: 7063
num_examples: 21
- name: dev
num_bytes: 1779
num_examples: 5
download_size: 166184960
dataset_size: 160676293
- config_name: high_school_macroeconomics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 117687
num_examples: 390
- name: validation
num_bytes: 13020
num_examples: 43
- name: dev
num_bytes: 1328
num_examples: 5
download_size: 166184960
dataset_size: 160733412
- config_name: high_school_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 54854
num_examples: 270
- name: validation
num_bytes: 5765
num_examples: 29
- name: dev
num_bytes: 1297
num_examples: 5
download_size: 166184960
dataset_size: 160663293
- config_name: high_school_microeconomics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 75703
num_examples: 238
- name: validation
num_bytes: 7553
num_examples: 26
- name: dev
num_bytes: 1298
num_examples: 5
download_size: 166184960
dataset_size: 160685931
- config_name: high_school_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 59538
num_examples: 151
- name: validation
num_bytes: 6771
num_examples: 17
- name: dev
num_bytes: 1489
num_examples: 5
download_size: 166184960
dataset_size: 160669175
- config_name: high_school_psychology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 159407
num_examples: 545
- name: validation
num_bytes: 17269
num_examples: 60
- name: dev
num_bytes: 1905
num_examples: 5
download_size: 166184960
dataset_size: 160779958
- config_name: high_school_statistics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 110702
num_examples: 216
- name: validation
num_bytes: 9997
num_examples: 23
- name: dev
num_bytes: 2528
num_examples: 5
download_size: 166184960
dataset_size: 160724604
- config_name: high_school_us_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 296734
num_examples: 204
- name: validation
num_bytes: 31706
num_examples: 22
- name: dev
num_bytes: 8864
num_examples: 5
download_size: 166184960
dataset_size: 160938681
- config_name: high_school_world_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 378617
num_examples: 237
- name: validation
num_bytes: 45501
num_examples: 26
- name: dev
num_bytes: 4882
num_examples: 5
download_size: 166184960
dataset_size: 161030377
- config_name: human_aging
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46098
num_examples: 223
- name: validation
num_bytes: 4707
num_examples: 23
- name: dev
num_bytes: 1008
num_examples: 5
download_size: 166184960
dataset_size: 160653190
- config_name: human_sexuality
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 32110
num_examples: 131
- name: validation
num_bytes: 2421
num_examples: 12
- name: dev
num_bytes: 1077
num_examples: 5
download_size: 166184960
dataset_size: 160636985
- config_name: international_law
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 53531
num_examples: 121
- name: validation
num_bytes: 6473
num_examples: 13
- name: dev
num_bytes: 2418
num_examples: 5
download_size: 166184960
dataset_size: 160663799
- config_name: jurisprudence
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33986
num_examples: 108
- name: validation
num_bytes: 3729
num_examples: 11
- name: dev
num_bytes: 1303
num_examples: 5
download_size: 166184960
dataset_size: 160640395
- config_name: logical_fallacies
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 50117
num_examples: 163
- name: validation
num_bytes: 5103
num_examples: 18
- name: dev
num_bytes: 1573
num_examples: 5
download_size: 166184960
dataset_size: 160658170
- config_name: machine_learning
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33880
num_examples: 112
- name: validation
num_bytes: 3232
num_examples: 11
- name: dev
num_bytes: 2323
num_examples: 5
download_size: 166184960
dataset_size: 160640812
- config_name: management
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 20002
num_examples: 103
- name: validation
num_bytes: 1820
num_examples: 11
- name: dev
num_bytes: 898
num_examples: 5
download_size: 166184960
dataset_size: 160624097
- config_name: marketing
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 63025
num_examples: 234
- name: validation
num_bytes: 7394
num_examples: 25
- name: dev
num_bytes: 1481
num_examples: 5
download_size: 166184960
dataset_size: 160673277
- config_name: medical_genetics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 20864
num_examples: 100
- name: validation
num_bytes: 3005
num_examples: 11
- name: dev
num_bytes: 1089
num_examples: 5
download_size: 166184960
dataset_size: 160626335
- config_name: miscellaneous
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 147704
num_examples: 783
- name: validation
num_bytes: 14330
num_examples: 86
- name: dev
num_bytes: 699
num_examples: 5
download_size: 166184960
dataset_size: 160764110
- config_name: moral_disputes
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 107818
num_examples: 346
- name: validation
num_bytes: 12420
num_examples: 38
- name: dev
num_bytes: 1755
num_examples: 5
download_size: 166184960
dataset_size: 160723370
- config_name: moral_scenarios
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 374026
num_examples: 895
- name: validation
num_bytes: 42338
num_examples: 100
- name: dev
num_bytes: 2058
num_examples: 5
download_size: 166184960
dataset_size: 161019799
- config_name: nutrition
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 92410
num_examples: 306
- name: validation
num_bytes: 8436
num_examples: 33
- name: dev
num_bytes: 2085
num_examples: 5
download_size: 166184960
dataset_size: 160704308
- config_name: philosophy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 80073
num_examples: 311
- name: validation
num_bytes: 9184
num_examples: 34
- name: dev
num_bytes: 988
num_examples: 5
download_size: 166184960
dataset_size: 160691622
- config_name: prehistory
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 89594
num_examples: 324
- name: validation
num_bytes: 10285
num_examples: 35
- name: dev
num_bytes: 1878
num_examples: 5
download_size: 166184960
dataset_size: 160703134
- config_name: professional_accounting
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 124550
num_examples: 282
- name: validation
num_bytes: 14372
num_examples: 31
- name: dev
num_bytes: 2148
num_examples: 5
download_size: 166184960
dataset_size: 160742447
- config_name: professional_law
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 1891762
num_examples: 1534
- name: validation
num_bytes: 203519
num_examples: 170
- name: dev
num_bytes: 6610
num_examples: 5
download_size: 166184960
dataset_size: 162703268
- config_name: professional_medicine
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 217561
num_examples: 272
- name: validation
num_bytes: 23847
num_examples: 31
- name: dev
num_bytes: 3807
num_examples: 5
download_size: 166184960
dataset_size: 160846592
- config_name: professional_psychology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 225899
num_examples: 612
- name: validation
num_bytes: 29101
num_examples: 69
- name: dev
num_bytes: 2267
num_examples: 5
download_size: 166184960
dataset_size: 160858644
- config_name: public_relations
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 28760
num_examples: 110
- name: validation
num_bytes: 4566
num_examples: 12
- name: dev
num_bytes: 1496
num_examples: 5
download_size: 166184960
dataset_size: 160636199
- config_name: security_studies
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 204844
num_examples: 245
- name: validation
num_bytes: 22637
num_examples: 27
- name: dev
num_bytes: 5335
num_examples: 5
download_size: 166184960
dataset_size: 160834193
- config_name: sociology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 66243
num_examples: 201
- name: validation
num_bytes: 7184
num_examples: 22
- name: dev
num_bytes: 1613
num_examples: 5
download_size: 166184960
dataset_size: 160676417
- config_name: us_foreign_policy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 28443
num_examples: 100
- name: validation
num_bytes: 3264
num_examples: 11
- name: dev
num_bytes: 1611
num_examples: 5
download_size: 166184960
dataset_size: 160634695
- config_name: virology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 38759
num_examples: 166
- name: validation
num_bytes: 5463
num_examples: 18
- name: dev
num_bytes: 1096
num_examples: 5
download_size: 166184960
dataset_size: 160646695
- config_name: world_religions
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 25274
num_examples: 171
- name: validation
num_bytes: 2765
num_examples: 19
- name: dev
num_bytes: 670
num_examples: 5
download_size: 166184960
dataset_size: 160630086
---
# Dataset Card for MMLU
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository**: https://github.com/hendrycks/test
- **Paper**: https://arxiv.org/abs/2009.03300
### Dataset Summary
[Measuring Massive Multitask Language Understanding](https://arxiv.org/pdf/2009.03300) by [Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/), [Collin Burns](http://collinpburns.com), [Steven Basart](https://stevenbas.art), Andy Zou, Mantas Mazeika, [Dawn Song](https://people.eecs.berkeley.edu/~dawnsong/), and [Jacob Steinhardt](https://www.stat.berkeley.edu/~jsteinhardt/) (ICLR 2021).
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability.
A complete list of tasks: ['abstract_algebra', 'anatomy', 'astronomy', 'business_ethics', 'clinical_knowledge', 'college_biology', 'college_chemistry', 'college_computer_science', 'college_mathematics', 'college_medicine', 'college_physics', 'computer_security', 'conceptual_physics', 'econometrics', 'electrical_engineering', 'elementary_mathematics', 'formal_logic', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_computer_science', 'high_school_european_history', 'high_school_geography', 'high_school_government_and_politics', 'high_school_macroeconomics', 'high_school_mathematics', 'high_school_microeconomics', 'high_school_physics', 'high_school_psychology', 'high_school_statistics', 'high_school_us_history', 'high_school_world_history', 'human_aging', 'human_sexuality', 'international_law', 'jurisprudence', 'logical_fallacies', 'machine_learning', 'management', 'marketing', 'medical_genetics', 'miscellaneous', 'moral_disputes', 'moral_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional_accounting', 'professional_law', 'professional_medicine', 'professional_psychology', 'public_relations', 'security_studies', 'sociology', 'us_foreign_policy', 'virology', 'world_religions']
### Supported Tasks and Leaderboards
| Model | Authors | Humanities | Social Science | STEM | Other | Average |
|------------------------------------|----------|:-------:|:-------:|:-------:|:-------:|:-------:|
| [UnifiedQA](https://arxiv.org/abs/2005.00700) | Khashabi et al., 2020 | 45.6 | 56.6 | 40.2 | 54.6 | 48.9
| [GPT-3](https://arxiv.org/abs/2005.14165) (few-shot) | Brown et al., 2020 | 40.8 | 50.4 | 36.7 | 48.8 | 43.9
| [GPT-2](https://arxiv.org/abs/2005.14165) | Radford et al., 2019 | 32.8 | 33.3 | 30.2 | 33.1 | 32.4
| Random Baseline | N/A | 25.0 | 25.0 | 25.0 | 25.0 | 25.0
### Languages
English
## Dataset Structure
### Data Instances
An example from anatomy subtask looks as follows:
```
{
"question": "What is the embryological origin of the hyoid bone?",
"choices": ["The first pharyngeal arch", "The first and second pharyngeal arches", "The second pharyngeal arch", "The second and third pharyngeal arches"],
"answer": "D"
}
```
### Data Fields
- `question`: a string feature
- `choices`: a list of 4 string features
- `answer`: a ClassLabel feature
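Note that because `answer` is a ClassLabel, some loaders expose it as an integer index rather than the letter shown in the example above. Converting between the two is straightforward (a hypothetical helper, not part of the dataset):

```python
LETTERS = ["A", "B", "C", "D"]

def index_to_letter(answer_index: int) -> str:
    """Map a ClassLabel index (0-3) to its multiple-choice letter."""
    return LETTERS[answer_index]

def letter_to_index(letter: str) -> int:
    """Map a multiple-choice letter back to its ClassLabel index."""
    return LETTERS.index(letter)

print(index_to_letter(3))  # D
```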
### Data Splits
- `auxiliary_train`: auxiliary multiple-choice training questions from ARC, MC_TEST, OBQA, RACE, etc.
- `dev`: 5 examples per subtask, meant for the few-shot setting
- `val`: validation examples per subtask, which can be used for hyperparameter tuning
- `test`: at least 100 examples per subtask
| | auxiliary_train | dev | val | test |
| ----- | :------: | :-----: | :-----: | :-----: |
| TOTAL | 99842 | 285 | 1531 | 14042
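In the few-shot setting, the `dev` examples are typically rendered as in-context demonstrations. A minimal prompt-formatting sketch (a hypothetical helper using the fields shown above, not part of the dataset):

```python
def format_example(example: dict, include_answer: bool = True) -> str:
    """Render one MMLU example in the common 'A./B./C./D.' prompt style."""
    letters = ["A", "B", "C", "D"]
    lines = [example["question"]]
    lines += [f"{letter}. {choice}" for letter, choice in zip(letters, example["choices"])]
    lines.append("Answer:" + (f" {example['answer']}" if include_answer else ""))
    return "\n".join(lines)

demo = {
    "question": "What is the embryological origin of the hyoid bone?",
    "choices": ["The first pharyngeal arch", "The first and second pharyngeal arches",
                "The second pharyngeal arch", "The second and third pharyngeal arches"],
    "answer": "D",
}
print(format_example(demo))
```

For the test split, the same helper can be called with `include_answer=False`, leaving the trailing `Answer:` cue for the model to complete.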
## Dataset Creation
### Curation Rationale
Transformer models have driven this recent progress by pretraining on massive text corpora, including all of Wikipedia, thousands of books, and numerous websites. These models consequently see extensive information about specialized topics, most of which is not assessed by existing NLP benchmarks. To bridge the gap between the wide-ranging knowledge that models see during pretraining and the existing measures of success, we introduce a new benchmark for assessing models across a diverse set of subjects that humans learn.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[MIT License](https://github.com/hendrycks/test/blob/master/LICENSE)
### Citation Information
If you find this useful in your research, please consider citing the test and also the [ETHICS](https://arxiv.org/abs/2008.02275) dataset it draws from:
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
@article{hendrycks2021ethics,
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
```
### Contributions
Thanks to [@andyzoujm](https://github.com/andyzoujm) for adding this dataset.
|
CodedotAI/code_clippy_github | CodedotAI | "2022-08-05T02:57:36Z" | 11,180 | 16 | [
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:mit",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2107.03374",
"region:us"
] | [
"sequence-modeling"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language: ["code"]
license:
- mit
multilinguality:
- multilingual
pretty_name: code-clippy-github-code
size_categories:
- unknown
source_datasets: []
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# Code Clippy Github Dataset
## Dataset Description
The Code Clippy dataset consists of various public codebases from GitHub in 22 programming languages with 23 extensions totaling about 16 TB of data when uncompressed. The dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
This dataset is quite large, so please use the streaming feature of the `datasets` library as shown below:
```python
from datasets import load_dataset
ds = load_dataset("CodedotAI/code_clippy_github", streaming=True)
```
## Data Structure
### Data Instances
```python
{
'code_text': " a = mc^2",
'repo_name': 'NotEinstein',
'file_path': 'root/users/einstein.py',
'language': 'Python',
'license': 'isc',
'size': 2
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|code_text|string|string of the source code contained in the code file|
|repo_name|string|name of the GitHub repository|
|file_path|string|path of the code file within the repository |
|language|string|programming language used in the file, inferred from the file extension|
|license|string|license of GitHub repository|
|size|int|size of source file in bytes|
### Data Splits
Only a train split is provided in this dataset.
## Languages
The dataset contains 22 programming languages across 23 file extensions:
```python
{
"C": [".c"],
"C#": [".cs"],
"C++": [".cpp"],
"CSS": [".css"],
"Dart" : [".dart"],
"GO": [".go"],
"HTML":[".html"],
"Java": [".java"],
"JavaScript": [".js"],
"Jupyter Notebooks (Python)": [".ipynb"],
"Kotlin" : [".kt"],
"Lisp" : [".lisp"],
"Matlab" : [".m"],
"PHP": [".php"],
"Perl": [".pl"],
"Python": [".py"],
"R" : [".r"],
"Ruby": [".rb"],
"Rust": [".rs"],
"SQL": [".sql"],
"Shell": [".sh"],
"Swift" : [".swift"],
"TypeScript": [".ts"],
}
```
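Since the `language` field is derived purely from the file extension, the mapping above can be inverted for lookups. The following is a small sketch (not part of the dataset tooling) that mirrors this rule:

```python
import os

# Mapping reproduced from the list above
LANGUAGE_EXTENSIONS = {
    "C": [".c"], "C#": [".cs"], "C++": [".cpp"], "CSS": [".css"],
    "Dart": [".dart"], "GO": [".go"], "HTML": [".html"], "Java": [".java"],
    "JavaScript": [".js"], "Jupyter Notebooks (Python)": [".ipynb"],
    "Kotlin": [".kt"], "Lisp": [".lisp"], "Matlab": [".m"], "PHP": [".php"],
    "Perl": [".pl"], "Python": [".py"], "R": [".r"], "Ruby": [".rb"],
    "Rust": [".rs"], "SQL": [".sql"], "Shell": [".sh"], "Swift": [".swift"],
    "TypeScript": [".ts"],
}

# Invert to extension -> language for O(1) lookups
EXTENSION_TO_LANGUAGE = {
    ext: lang for lang, exts in LANGUAGE_EXTENSIONS.items() for ext in exts
}

def infer_language(file_path):
    """Return the language tag for a file path, or None if the extension is unknown."""
    _, ext = os.path.splitext(file_path)
    return EXTENSION_TO_LANGUAGE.get(ext.lower())

print(infer_language("root/users/einstein.py"))  # Python
```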
## Licenses
Each example is also annotated with the license of the associated repository. There are in total 15 licenses:
```python
[
'mit',
'apache-2.0',
'gpl-2.0',
'gpl-3.0',
'bsd-3-clause',
'bsd-2-clause',
'unlicense',
 'agpl-3.0',
'lgpl-3.0',
'cc0-1.0',
'epl-1.0',
'lgpl-2.1',
'mpl-2.0',
'isc',
'artistic-2.0'
]
```
## Dataset Statistics
The dataset is approximately 18 TB uncompressed. We are currently working on processing it and applying further filtering.
## Dataset Creation
The dataset was created in two steps:
1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery using the following query:
```sql
SELECT
f.id, f.repo_name, f.path, content.copies, content.size, content.content, lic.license
FROM
`bigquery-public-data.github_repos.files` AS f
JOIN
`bigquery-public-data.github_repos.contents` as content
ON
f.id = content.id
JOIN
`bigquery-public-data.github_repos.licenses` AS lic
ON
f.repo_name = lic.repo_name
WHERE
NOT content.binary
AND (
(f.path LIKE '%.py') OR (f.path LIKE '%.java') OR (f.path LIKE '%.js')
OR (f.path LIKE '%.html') OR (f.path LIKE '%.lisp') OR (f.path LIKE '%.sh')
OR (f.path LIKE '%.r') OR (f.path LIKE '%.pl') OR (f.path LIKE '%.css')
OR (f.path LIKE '%.sql') OR (f.path LIKE '%.c') OR (f.path LIKE '%.cpp')
OR (f.path LIKE '%.ts') OR (f.path LIKE '%.cs') OR (f.path LIKE '%.go')
OR (f.path LIKE '%.rs') OR (f.path LIKE '%.swift') OR (f.path LIKE '%.php')
OR (f.path LIKE '%.dart') OR (f.path LIKE '%.kt') OR (f.path LIKE '%.m')
OR (f.path LIKE '%.rb') OR (f.path LIKE '%.ipynb')
)
  -- keep file sizes between 1 KB and 1 MB
AND (content.size BETWEEN 1024 AND 1000000)
```
2. Currently, our CodedotAI team is working on adding additional filters and cleaning this dataset.
### Personal and Sensitive Information
Since this data was collected from public repositories, there exists potential for personal and sensitive information to be included in the data through developers accidentally or on purpose uploading their secret keys, passwords, API keys, emails, etc.
## Considerations for Using the Data
### Social Impact of Dataset
The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as it pertains to this dataset and models that may be trained from it. **As well as some differences in views from the paper, particularly around legal implications**.
1. **Over-reliance:** A language model trained on large datasets such as this one for the task of autogenerating code may generate plausible solutions that appear correct but are not necessarily so. Failing to properly evaluate the generated code may have negative consequences, such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using a language model trained on this dataset.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper and shown in the Summary Report for software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers do more than just write software.
3. **Security implications:** No filtering or checking of vulnerabilities or buggy code was performed. This means that the dataset may contain code that may be malicious or contain vulnerabilities. Therefore, any model trained on this dataset may generate vulnerable, buggy, or malicious code. In safety-critical software, this could lead to software that may work improperly and could result in serious consequences depending on the software. Additionally, a model trained on this dataset may be used to generate malicious code on purpose in order to perform ransomware or other such attacks.
4. **Legal implications:** No filtering was performed on licensed code. This means that the dataset may contain restrictively licensed code. As discussed in the paper, public GitHub repositories may fall under "fair use." However, there have been few to no prior cases of such usage of publicly available licensed code. Therefore, any model trained on this dataset may be required to obey license terms that align with the software it was trained on, such as GPL-3.0, which is why we purposefully put this dataset under the GPL-3.0 license. The legal ramifications of using a language model trained on this dataset are unclear.
### v1.0
- The query was executed on _February 1, 2022, 12:15:59 AM EST_
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/about/). We would also like to thank [Dr. Razvan Bunescu](https://webpages.charlotte.edu/rbunescu/) and [The College of Computing and Informatics at UNC Charlotte](https://cci.charlotte.edu/) for their generous contributions to this project, specifically in funding the BigQuery and Google Cloud Storage costs. We would also like to thank the [codeparrot team at Hugging Face](https://huggingface.co/codeparrot) for open-sourcing their documentation on [github-code](https://huggingface.co/datasets/codeparrot/github-code), which we used for the readme in this dataset. For another similar dataset, please check out github-code! |
open-llm-leaderboard-old/details_yhyhy3__med-orca-instruct-33b | open-llm-leaderboard-old | "2023-10-17T22:28:04Z" | 11,168 | 0 | [
"region:us"
] | null | "2023-08-18T11:52:40Z" | ---
pretty_name: Evaluation run of yhyhy3/med-orca-instruct-33b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [yhyhy3/med-orca-instruct-33b](https://huggingface.co/yhyhy3/med-orca-instruct-33b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yhyhy3__med-orca-instruct-33b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T22:27:51.480164](https://huggingface.co/datasets/open-llm-leaderboard/details_yhyhy3__med-orca-instruct-33b/blob/main/results_2023-10-17T22-27-51.480164.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"\
em_stderr\": 0.0,\n \"f1\": 6.606543624161075e-05,\n \"f1_stderr\"\
: 2.6666679153418564e-05,\n \"acc\": 0.2525651144435675,\n \"acc_stderr\"\
: 0.007025872980895256\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n\
\ \"em_stderr\": 0.0,\n \"f1\": 6.606543624161075e-05,\n \"\
f1_stderr\": 2.6666679153418564e-05\n },\n \"harness|gsm8k|5\": {\n \
\ \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.505130228887135,\n \"acc_stderr\": 0.014051745961790513\n\
\ }\n}\n```"
repo_url: https://huggingface.co/yhyhy3/med-orca-instruct-33b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: [email protected]
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|arc:challenge|25_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|arc:challenge|25_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_26T02_39_23.109820
path:
- '**/details_harness|drop|3_2023-09-26T02-39-23.109820.parquet'
- split: 2023_10_17T22_27_51.480164
path:
- '**/details_harness|drop|3_2023-10-17T22-27-51.480164.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T22-27-51.480164.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_26T02_39_23.109820
path:
- '**/details_harness|gsm8k|5_2023-09-26T02-39-23.109820.parquet'
- split: 2023_10_17T22_27_51.480164
path:
- '**/details_harness|gsm8k|5_2023-10-17T22-27-51.480164.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T22-27-51.480164.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hellaswag|10_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hellaswag|10_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T13:49:32.359108.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T09:03:49.045450.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-09T13:49:32.359108.parquet'
- split: 2023_08_18T09_03_49.045450
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-18T09:03:49.045450.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-18T09:03:49.045450.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_26T02_39_23.109820
path:
- '**/details_harness|winogrande|5_2023-09-26T02-39-23.109820.parquet'
- split: 2023_10_17T22_27_51.480164
path:
- '**/details_harness|winogrande|5_2023-10-17T22-27-51.480164.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T22-27-51.480164.parquet'
- config_name: results
data_files:
- split: 2023_08_09T13_49_32.359108
path:
- results_2023-08-09T13:49:32.359108.parquet
- split: 2023_08_18T09_03_49.045450
path:
- results_2023-08-18T09:03:49.045450.parquet
- split: 2023_09_26T02_39_23.109820
path:
- results_2023-09-26T02-39-23.109820.parquet
- split: 2023_10_17T22_27_51.480164
path:
- results_2023-10-17T22-27-51.480164.parquet
- split: latest
path:
- results_2023-10-17T22-27-51.480164.parquet
---
# Dataset Card for Evaluation run of yhyhy3/med-orca-instruct-33b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/yhyhy3/med-orca-instruct-33b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [yhyhy3/med-orca-instruct-33b](https://huggingface.co/yhyhy3/med-orca-instruct-33b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
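As a sketch of the naming convention (inferred from the configuration list above, not an official API): each per-task configuration name is derived from the harness task key by replacing the `|`, `:`, and `-` separators with underscores. The helper name below is illustrative.

```python
# Illustrative helper (not part of the `datasets` library): maps a harness
# task key such as "harness|truthfulqa:mc|0" to the configuration name used
# in this repository, e.g. "harness_truthfulqa_mc_0".
def task_key_to_config_name(task_key: str) -> str:
    for sep in "|:-":
        task_key = task_key.replace(sep, "_")
    return task_key

print(task_key_to_config_name("harness|hendrycksTest-world_religions|5"))
# → harness_hendrycksTest_world_religions_5
```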
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yhyhy3__med-orca-instruct-33b",
"harness_winogrande_5",
	split="latest")
```
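The timestamped split names follow directly from the run timestamps; a minimal sketch of the mapping (the helper name is illustrative, not part of the `datasets` API):

```python
# A run timestamp such as "2023-10-17T22:27:51.480164" becomes the split
# name "2023_10_17T22_27_51.480164": "-" and ":" are replaced with "_",
# while the "." before the microseconds is kept.
def run_timestamp_to_split(timestamp: str) -> str:
    return timestamp.replace("-", "_").replace(":", "_")

print(run_timestamp_to_split("2023-10-17T22:27:51.480164"))
# → 2023_10_17T22_27_51.480164
```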
## Latest results
These are the [latest results from run 2023-10-17T22:27:51.480164](https://huggingface.co/datasets/open-llm-leaderboard/details_yhyhy3__med-orca-instruct-33b/blob/main/results_2023-10-17T22-27-51.480164.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each task's results in its "latest" split, and the aggregated values in the "results" configuration):
```python
{
"all": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 6.606543624161075e-05,
"f1_stderr": 2.6666679153418564e-05,
"acc": 0.2525651144435675,
"acc_stderr": 0.007025872980895256
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 6.606543624161075e-05,
"f1_stderr": 2.6666679153418564e-05
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.505130228887135,
"acc_stderr": 0.014051745961790513
}
}
```
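The per-task entries above can be flattened into `(task, metric, value)` rows for quick tabulation; a minimal sketch (the dictionary literal is copied from the results above):

```python
results = {
    "harness|drop|3": {
        "em": 0.0, "em_stderr": 0.0,
        "f1": 6.606543624161075e-05, "f1_stderr": 2.6666679153418564e-05,
    },
    "harness|gsm8k|5": {"acc": 0.0, "acc_stderr": 0.0},
    "harness|winogrande|5": {"acc": 0.505130228887135, "acc_stderr": 0.014051745961790513},
}

# Flatten the nested mapping into (task, metric, value) tuples.
rows = [
    (task, metric, value)
    for task, metrics in results.items()
    for metric, value in metrics.items()
]
for row in rows:
    print(row)
```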
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
klue/klue | klue | "2024-01-04T14:05:57Z" | 11,158 | 73 | [
"task_categories:fill-mask",
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:token-classification",
"task_ids:extractive-qa",
"task_ids:named-entity-recognition",
"task_ids:natural-language-inference",
"task_ids:parsing",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2105.09680",
"region:us",
"relation-extraction"
] | [
"fill-mask",
"question-answering",
"text-classification",
"text-generation",
"token-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- fill-mask
- question-answering
- text-classification
- text-generation
- token-classification
task_ids:
- extractive-qa
- named-entity-recognition
- natural-language-inference
- parsing
- semantic-similarity-scoring
- text-scoring
- topic-classification
paperswithcode_id: klue
pretty_name: KLUE
config_names:
- dp
- mrc
- ner
- nli
- re
- sts
- wos
- ynat
tags:
- relation-extraction
dataset_info:
- config_name: dp
features:
- name: sentence
dtype: string
- name: index
list: int32
- name: word_form
list: string
- name: lemma
list: string
- name: pos
list: string
- name: head
list: int32
- name: deprel
list: string
splits:
- name: train
num_bytes: 7899965
num_examples: 10000
- name: validation
num_bytes: 1557462
num_examples: 2000
download_size: 3742577
dataset_size: 9457427
- config_name: mrc
features:
- name: title
dtype: string
- name: context
dtype: string
- name: news_category
dtype: string
- name: source
dtype: string
- name: guid
dtype: string
- name: is_impossible
dtype: bool
- name: question_type
dtype: int32
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 46505593
num_examples: 17554
- name: validation
num_bytes: 15583017
num_examples: 5841
download_size: 30098472
dataset_size: 62088610
- config_name: ner
features:
- name: sentence
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-DT
'1': I-DT
'2': B-LC
'3': I-LC
'4': B-OG
'5': I-OG
'6': B-PS
'7': I-PS
'8': B-QT
'9': I-QT
'10': B-TI
'11': I-TI
'12': O
splits:
- name: train
num_bytes: 19891905
num_examples: 21008
- name: validation
num_bytes: 4937563
num_examples: 5000
download_size: 5265887
dataset_size: 24829468
- config_name: nli
features:
- name: guid
dtype: string
- name: source
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 5719882
num_examples: 24998
- name: validation
num_bytes: 673260
num_examples: 3000
download_size: 2056116
dataset_size: 6393142
- config_name: re
features:
- name: guid
dtype: string
- name: sentence
dtype: string
- name: subject_entity
struct:
- name: word
dtype: string
- name: start_idx
dtype: int32
- name: end_idx
dtype: int32
- name: type
dtype: string
- name: object_entity
struct:
- name: word
dtype: string
- name: start_idx
dtype: int32
- name: end_idx
dtype: int32
- name: type
dtype: string
- name: label
dtype:
class_label:
names:
'0': no_relation
'1': org:dissolved
'2': org:founded
'3': org:place_of_headquarters
'4': org:alternate_names
'5': org:member_of
'6': org:members
'7': org:political/religious_affiliation
'8': org:product
'9': org:founded_by
'10': org:top_members/employees
'11': org:number_of_employees/members
'12': per:date_of_birth
'13': per:date_of_death
'14': per:place_of_birth
'15': per:place_of_death
'16': per:place_of_residence
'17': per:origin
'18': per:employee_of
'19': per:schools_attended
'20': per:alternate_names
'21': per:parents
'22': per:children
'23': per:siblings
'24': per:spouse
'25': per:other_family
'26': per:colleagues
'27': per:product
'28': per:religion
'29': per:title
- name: source
dtype: string
splits:
- name: train
num_bytes: 11145426
num_examples: 32470
- name: validation
num_bytes: 2559272
num_examples: 7765
download_size: 8190257
dataset_size: 13704698
- config_name: sts
features:
- name: guid
dtype: string
- name: source
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
struct:
- name: label
dtype: float64
- name: real-label
dtype: float64
- name: binary-label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 2832889
num_examples: 11668
- name: validation
num_bytes: 122641
num_examples: 519
download_size: 1587855
dataset_size: 2955530
- config_name: wos
features:
- name: guid
dtype: string
- name: domains
list: string
- name: dialogue
list:
- name: role
dtype: string
- name: text
dtype: string
- name: state
list: string
splits:
- name: train
num_bytes: 26676970
num_examples: 8000
- name: validation
num_bytes: 3488911
num_examples: 1000
download_size: 6358855
dataset_size: 30165881
- config_name: ynat
features:
- name: guid
dtype: string
- name: title
dtype: string
- name: label
dtype:
class_label:
names:
'0': IT과학
'1': 경제
'2': 사회
'3': 생활문화
'4': 세계
'5': 스포츠
'6': 정치
- name: url
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 10109584
num_examples: 45678
- name: validation
num_bytes: 2039181
num_examples: 9107
download_size: 5012303
dataset_size: 12148765
configs:
- config_name: dp
data_files:
- split: train
path: dp/train-*
- split: validation
path: dp/validation-*
- config_name: mrc
data_files:
- split: train
path: mrc/train-*
- split: validation
path: mrc/validation-*
- config_name: ner
data_files:
- split: train
path: ner/train-*
- split: validation
path: ner/validation-*
- config_name: nli
data_files:
- split: train
path: nli/train-*
- split: validation
path: nli/validation-*
- config_name: re
data_files:
- split: train
path: re/train-*
- split: validation
path: re/validation-*
- config_name: sts
data_files:
- split: train
path: sts/train-*
- split: validation
path: sts/validation-*
- config_name: wos
data_files:
- split: train
path: wos/train-*
- split: validation
path: wos/validation-*
- config_name: ynat
data_files:
- split: train
path: ynat/train-*
- split: validation
path: ynat/validation-*
---
# Dataset Card for KLUE
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://klue-benchmark.com/
- **Repository:** https://github.com/KLUE-benchmark/KLUE
- **Paper:** [KLUE: Korean Language Understanding Evaluation](https://arxiv.org/abs/2105.09680)
- **Leaderboard:** [Leaderboard](https://klue-benchmark.com/leaderboard)
- **Point of Contact:** https://github.com/KLUE-benchmark/KLUE/issues
### Dataset Summary
KLUE is a collection of 8 tasks for evaluating the natural language understanding capability of Korean language models. We deliberately selected these 8 tasks: Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking.
### Supported Tasks and Leaderboards
Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking
### Languages
`ko-KR`
## Dataset Structure
### Data Instances
#### ynat
An example of 'train' looks as follows.
```
{'date': '2016.06.30. 오전 10:36',
'guid': 'ynat-v1_train_00000',
'label': 3,
'title': '유튜브 내달 2일까지 크리에이터 지원 공간 운영',
'url': 'https://news.naver.com/main/read.nhn?mode=LS2D&mid=shm&sid1=105&sid2=227&oid=001&aid=0008508947'}
```
#### sts
An example of 'train' looks as follows.
```
{'guid': 'klue-sts-v1_train_00000',
'labels': {'label': 3.7, 'real-label': 3.714285714285714, 'binary-label': 1},
'sentence1': '숙소 위치는 찾기 쉽고 일반적인 한국의 반지하 숙소입니다.',
'sentence2': '숙박시설의 위치는 쉽게 찾을 수 있고 한국의 대표적인 반지하 숙박시설입니다.',
'source': 'airbnb-rtt'}
```
#### nli
An example of 'train' looks as follows.
```
{'guid': 'klue-nli-v1_train_00000',
'hypothesis': '힛걸 진심 최고로 멋지다.',
'label': 0,
'premise': '힛걸 진심 최고다 그 어떤 히어로보다 멋지다',
'source': 'NSMC'}
```
#### ner
An example of 'train' looks as follows.
```
{'tokens': ['특', '히', ' ', '영', '동', '고', '속', '도', '로', ' ', '강', '릉', ' ', '방', '향', ' ', '문', '막', '휴', '게', '소', '에', '서', ' ', '만', '종', '분', '기', '점', '까', '지', ' ', '5', '㎞', ' ', '구', '간', '에', '는', ' ', '승', '용', '차', ' ', '전', '용', ' ', '임', '시', ' ', '갓', '길', '차', '로', '제', '를', ' ', '운', '영', '하', '기', '로', ' ', '했', '다', '.'],
'ner_tags': [12, 12, 12, 2, 3, 3, 3, 3, 3, 12, 2, 3, 12, 12, 12, 12, 2, 3, 3, 3, 3, 12, 12, 12, 2, 3, 3, 3, 3, 12, 12, 12, 8, 9, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12],
'sentence': '특히 <영동고속도로:LC> <강릉:LC> 방향 <문막휴게소:LC>에서 <만종분기점:LC>까지 <5㎞:QT> 구간에는 승용차 전용 임시 갓길차로제를 운영하기로 했다.'}
```
#### re
An example of 'train' looks as follows.
```
{'guid': 'klue-re-v1_train_00000',
'label': 0,
'object_entity': {'word': '조지 해리슨',
'start_idx': 13,
'end_idx': 18,
'type': 'PER'},
'sentence': '〈Something〉는 조지 해리슨이 쓰고 비틀즈가 1969년 앨범 《Abbey Road》에 담은 노래다.',
'source': 'wikipedia',
'subject_entity': {'word': '비틀즈',
'start_idx': 24,
'end_idx': 26,
'type': 'ORG'}}
```
#### dp
An example of 'train' looks as follows.
```
{'deprel': ['NP', 'NP_OBJ', 'VP', 'NP', 'NP_SBJ', 'NP', 'NP_MOD', 'NP_CNJ', 'NP_CNJ', 'NP', 'NP', 'NP_OBJ', 'AP', 'VP'],
'head': [2, 3, 14, 5, 14, 7, 10, 10, 10, 11, 12, 14, 14, 0],
'index': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
'lemma': ['해당', '그림 을', '보 면', '디즈니', '공주 들 이', '브리트니', '스피어스 의', '앨범 이나', '뮤직 비디오 ,', '화보', '속', '모습 을', '똑같이', '재연 하 였 다 .'],
'pos': ['NNG', 'NNG+JKO', 'VV+EC', 'NNP', 'NNG+XSN+JKS', 'NNP', 'NNP+JKG', 'NNG+JC', 'NNG+NNG+SP', 'NNG', 'NNG', 'NNG+JKO', 'MAG', 'NNG+XSA+EP+EF+SF'],
'sentence': '해당 그림을 보면 디즈니 공주들이 브리트니 스피어스의 앨범이나 뮤직비디오, 화보 속 모습을 똑같이 재연했다.',
'word_form': ['해당', '그림을', '보면', '디즈니', '공주들이', '브리트니', '스피어스의', '앨범이나', '뮤직비디오,', '화보', '속', '모습을', '똑같이', '재연했다.']}
```
#### mrc
An example of 'train' looks as follows.
```
{'answers': {'answer_start': [478, 478], 'text': ['한 달가량', '한 달']},
'context': '올여름 장마가 17일 제주도에서 시작됐다. 서울 등 중부지방은 예년보다 사나흘 정도 늦은 이달 말께 장마가 시작될 전망이다.17일 기상청에 따르면 제주도 남쪽 먼바다에 있는 장마전선의 영향으로 이날 제주도 산간 및 내륙지역에 호우주의보가 내려지면서 곳곳에 100㎜에 육박하는 많은 비가 내렸다. 제주의 장마는 평년보다 2~3일, 지난해보다는 하루 일찍 시작됐다. 장마는 고온다습한 북태평양 기단과 한랭 습윤한 오호츠크해 기단이 만나 형성되는 장마전선에서 내리는 비를 뜻한다.장마전선은 18일 제주도 먼 남쪽 해상으로 내려갔다가 20일께 다시 북상해 전남 남해안까지 영향을 줄 것으로 보인다. 이에 따라 20~21일 남부지방에도 예년보다 사흘 정도 장마가 일찍 찾아올 전망이다. 그러나 장마전선을 밀어올리는 북태평양 고기압 세력이 약해 서울 등 중부지방은 평년보다 사나흘가량 늦은 이달 말부터 장마가 시작될 것이라는 게 기상청의 설명이다. 장마전선은 이후 한 달가량 한반도 중남부를 오르내리며 곳곳에 비를 뿌릴 전망이다. 최근 30년간 평균치에 따르면 중부지방의 장마 시작일은 6월24~25일이었으며 장마기간은 32일, 강수일수는 17.2일이었다.기상청은 올해 장마기간의 평균 강수량이 350~400㎜로 평년과 비슷하거나 적을 것으로 내다봤다. 브라질 월드컵 한국과 러시아의 경기가 열리는 18일 오전 서울은 대체로 구름이 많이 끼지만 비는 오지 않을 것으로 예상돼 거리 응원에는 지장이 없을 전망이다.',
'guid': 'klue-mrc-v1_train_12759',
'is_impossible': False,
'news_category': '종합',
'question': '북태평양 기단과 오호츠크해 기단이 만나 국내에 머무르는 기간은?',
'question_type': 1,
'source': 'hankyung',
'title': '제주도 장마 시작 … 중부는 이달 말부터'}
```
#### wos
An example of 'train' looks as follows.
```
{'dialogue': [{'role': 'user',
'text': '쇼핑을 하려는데 서울 서쪽에 있을까요?',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽']},
{'role': 'sys',
'text': '서울 서쪽에 쇼핑이 가능한 곳이라면 노량진 수산물 도매시장이 있습니다.',
'state': []},
{'role': 'user',
'text': '오 네 거기 주소 좀 알려주세요.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '노량진 수산물 도매시장의 주소는 서울 동작구 93806입니다.', 'state': []},
{'role': 'user',
'text': '알려주시는김에 연락처랑 평점도 좀 알려주세요.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '그럼. 연락처는 6182006591이고 평점은 4점입니다.', 'state': []},
{'role': 'user',
'text': '와 감사합니다.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '감사합니다.', 'state': []}],
'domains': ['관광'],
'guid': 'wos-v1_train_00001'}
```
### Data Fields
#### ynat
+ `guid`: a `string` feature
+ `title`: a `string` feature
+ `label`: a classification label, with possible values `IT과학`(0), `경제`(1), `사회`(2), `생활문화`(3), `세계`(4), `스포츠`(5), `정치`(6)
+ `url`: a `string` feature
+ `date`: a `string` feature
#### sts
+ `guid`: a `string` feature
+ `source`: a `string` feature
+ `sentence1`: a `string` feature
+ `sentence2`: a `string` feature
+ `labels`: a dictionary feature containing
+ `label`: a `float64` feature
+ `real-label`: a `float64` feature
+ `binary-label`: a classification label, with possible values `negative`(0), `positive`(1)
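As a rough illustration of how the two score fields relate, `binary-label` can be derived from `real-label` by thresholding. Note the 3.0 cut-off below is an assumption based on the KLUE paper, not something stated on this card:

```python
# Hedged sketch: derive `binary-label` from `real-label` by thresholding.
# The 3.0 threshold is an assumption taken from the KLUE paper, not this card.
def binarize(real_label: float, threshold: float = 3.0) -> int:
    """Map a 0-5 similarity score to the positive (1) / negative (0) label."""
    return 1 if real_label >= threshold else 0

# The train example above: real-label 3.714... -> binary-label 1.
print(binarize(3.714285714285714))  # → 1
```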
#### nli
+ `guid`: a `string` feature
+ `source`: a `string` feature
+ `premise`: a `string` feature
+ `hypothesis`: a `string` feature
+ `label`: a classification label, with possible values `entailment`(0), `neutral`(1), `contradiction`(2)
#### ner
+ `sentence`: a `string` feature
+ `tokens`: a list of `string` features (tokenization is at the character level)
+ `ner_tags`: a list of classification labels, with possible values including `B-DT`(0), `I-DT`(1),
`B-LC`(2), `I-LC`(3), `B-OG`(4), `I-OG`(5), `B-PS`(6), `I-PS`(7), `B-QT`(8), `I-QT`(9), `B-TI`(10),
`I-TI`(11), `O`(12)
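Since tokenization is character-level, entity surfaces can be recovered by grouping consecutive `B-*`/`I-*` tags. The helper below is an illustrative sketch (not part of KLUE's official tooling); the id-to-label table mirrors the class order listed above:

```python
# Sketch: reconstruct entity spans from the character-level BIO tags used by
# the `ner` config. ID2LABEL follows the class_label order listed above.
ID2LABEL = [
    "B-DT", "I-DT", "B-LC", "I-LC", "B-OG", "I-OG",
    "B-PS", "I-PS", "B-QT", "I-QT", "B-TI", "I-TI", "O",
]

def extract_entities(tokens, ner_tags):
    """Group consecutive B-*/I-* character tags into (surface, type) spans."""
    entities, buffer, ent_type = [], [], None
    for token, tag_id in zip(tokens, ner_tags):
        tag = ID2LABEL[tag_id]
        if tag.startswith("B-"):
            if buffer:
                entities.append(("".join(buffer), ent_type))
            buffer, ent_type = [token], tag[2:]
        elif tag.startswith("I-") and buffer:
            buffer.append(token)
        else:  # "O" closes any open span
            if buffer:
                entities.append(("".join(buffer), ent_type))
            buffer, ent_type = [], None
    if buffer:
        entities.append(("".join(buffer), ent_type))
    return entities

# A shortened slice of the train example above: "강릉 방향" -> one LC entity.
print(extract_entities(["강", "릉", " ", "방", "향"], [2, 3, 12, 12, 12]))
# → [('강릉', 'LC')]
```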
#### re
+ `guid`: a `string` feature
+ `sentence`: a `string` feature
+ `subject_entity`: a dictionary feature containing
+ `word`: a `string` feature
+ `start_idx`: an `int32` feature
  + `end_idx`: an `int32` feature
+ `type`: a `string` feature
+ `object_entity`: a dictionary feature containing
+ `word`: a `string` feature
+ `start_idx`: an `int32` feature
  + `end_idx`: an `int32` feature
+ `type`: a `string` feature
+ `label`: a classification label, with possible values including `no_relation`(0), `org:dissolved`(1),
`org:founded`(2), `org:place_of_headquarters`(3), `org:alternate_names`(4), `org:member_of`(5),
`org:members`(6), `org:political/religious_affiliation`(7), `org:product`(8), `org:founded_by`(9),`org:top_members/employees`(10),
`org:number_of_employees/members`(11), `per:date_of_birth`(12), `per:date_of_death`(13), `per:place_of_birth`(14),
`per:place_of_death`(15), `per:place_of_residence`(16), `per:origin`(17), `per:employee_of`(18),
`per:schools_attended`(19), `per:alternate_names`(20), `per:parents`(21), `per:children`(22),
`per:siblings`(23), `per:spouse`(24), `per:other_family`(25), `per:colleagues`(26), `per:product`(27),
`per:religion`(28), `per:title`(29)
+ `source`: a `string` feature
#### dp
+ `sentence`: a `string` feature
+ `index`: a list of `int32` features
+ `word_form`: a list of `string` features
+ `lemma`: a list of `string` features
+ `pos`: a list of `string` features
+ `head`: a list of `int32` features
+ `deprel`: a list of `string` features
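To make the parallel lists concrete, here is a small illustrative helper (not from the KLUE repo) that pairs each word with its governor; `head` is a 1-based index into the sentence, with 0 marking the root:

```python
# Illustrative sketch: pair each word with its governor using the 1-based
# `head` indices (0 = sentence root) and the `deprel` labels.
def dependency_triples(word_form, head, deprel):
    triples = []
    for word, h, rel in zip(word_form, head, deprel):
        governor = "ROOT" if h == 0 else word_form[h - 1]
        triples.append((word, governor, rel))
    return triples

# A shortened, re-rooted slice of the train example above.
print(dependency_triples(["해당", "그림을", "보면"], [2, 3, 0], ["NP", "NP_OBJ", "VP"]))
# → [('해당', '그림을', 'NP'), ('그림을', '보면', 'NP_OBJ'), ('보면', 'ROOT', 'VP')]
```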
#### mrc
+ `title`: a `string` feature
+ `context`: a `string` feature
+ `news_category`: a `string` feature
+ `source`: a `string` feature
+ `guid`: a `string` feature
+ `is_impossible`: a `bool` feature
+ `question_type`: an `int32` feature
+ `question`: a `string` feature
+ `answers`: a dictionary feature containing
+ `answer_start`: an `int32` feature
+ `text`: a `string` feature
#### wos
+ `guid`: a `string` feature
+ `domains`: a list of `string` features
+ `dialogue`: a list of dictionary features containing
  + `role`: a `string` feature
  + `text`: a `string` feature
  + `state`: a list of `string` features
### Data Splits
#### ynat
You can see more details [here](https://klue-benchmark.com/tasks/66/data/description).
+ train: 45,678
+ validation: 9,107
#### sts
You can see more details [here](https://klue-benchmark.com/tasks/67/data/description).
+ train: 11,668
+ validation: 519
#### nli
You can see more details [here](https://klue-benchmark.com/tasks/68/data/description).
+ train: 24,998
+ validation: 3,000
#### ner
You can see more details [here](https://klue-benchmark.com/tasks/69/overview/description).
+ train: 21,008
+ validation: 5,000
#### re
You can see more details [here](https://klue-benchmark.com/tasks/70/overview/description).
+ train: 32,470
+ validation: 7,765
#### dp
You can see more details [here](https://klue-benchmark.com/tasks/71/data/description).
+ train: 10,000
+ validation: 2,000
#### mrc
You can see more details [here](https://klue-benchmark.com/tasks/72/overview/description).
+ train: 17,554
+ validation: 5,841
#### wos
You can see more details [here](https://klue-benchmark.com/tasks/73/overview/description).
+ train: 8,000
+ validation: 1,000
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@jungwhank](https://github.com/jungwhank), [@bzantium](https://github.com/bzantium) for adding this dataset. |
MarkJeong/aihub_food | MarkJeong | "2023-03-09T17:13:22Z" | 11,158 | 1 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-03-09T02:39:58Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '01011001'
'1': '01012001'
'2': '01012002'
'3': '01012003'
'4': '01012004'
'5': '01012005'
'6': '01012006'
'7': '01013001'
'8': 01014008
'9': 01014009
'10': '01014010'
'11': '01014011'
'12': '01014012'
'13': '01014013'
'14': '01015002'
'15': '01015003'
'16': '01015012'
'17': '01015013'
'18': '01015014'
'19': '01015015'
'20': '01015016'
'21': '01015017'
'22': 01015018
'23': 01015019
'24': '01016001'
'25': '01016002'
'26': '01016003'
'27': '01016004'
'28': '01016005'
'29': '01016006'
'30': '01016007'
'31': 01016008
'32': '02011006'
'33': '02011007'
'34': 02011008
'35': 02011009
'36': '02011010'
'37': '02011011'
'38': '02011012'
'39': '02011013'
'40': '02011014'
'41': '02011015'
'42': '02011016'
'43': '02011017'
'44': 02011018
'45': 02011019
'46': '02011020'
'47': '02011021'
'48': '02011023'
'49': '02011024'
'50': '02011025'
'51': '02011027'
'52': 02011028
'53': 02011029
'54': '02011030'
'55': '02011031'
'56': '02011032'
'57': '02011033'
'58': '02011034'
'59': '02011035'
'60': '02011036'
'61': '02011037'
'62': 02011038
'63': 02011039
'64': '02011040'
'65': '02012001'
'66': '02012002'
'67': '02012003'
'68': '02012004'
'69': '02012005'
'70': '03011001'
'71': '03011002'
'72': '03011003'
'73': '03011004'
'74': '03011005'
'75': '03011006'
'76': '03011007'
'77': 03011008
'78': 03011009
'79': '03011010'
'80': '03011011'
'81': '03012001'
'82': '03012002'
'83': '04011001'
'84': '04011002'
'85': '04011003'
'86': '04011004'
'87': '04011005'
'88': '04011006'
'89': '04011007'
'90': 04011008
'91': '04011010'
'92': '04011011'
'93': '04011012'
'94': '04011013'
'95': '04011014'
'96': '04011015'
'97': '04011016'
'98': '04012001'
'99': '04012002'
'100': '04012003'
'101': '04012004'
'102': '04012005'
'103': '04012006'
'104': '04012007'
'105': 04012008
'106': 04012009
'107': '04012010'
'108': '04012011'
'109': '04012012'
'110': '04012013'
'111': '04013002'
'112': '04013003'
'113': '04013004'
'114': '04013005'
'115': '04013006'
'116': '04013007'
'117': 04013008
'118': 04013009
'119': '04013010'
'120': '04013011'
'121': '04013012'
'122': '04013013'
'123': '04013014'
'124': '04013015'
'125': '04013017'
'126': 04013018
'127': 04013019
'128': '04015003'
'129': '04016001'
'130': '04017001'
'131': '04017002'
'132': 04018001
'133': 04018002
'134': 04018003
'135': 04018004
'136': 04019001
'137': 04019002
'138': 04019003
'139': 04019004
'140': 04019005
'141': 04019006
'142': 04019007
'143': 04019008
'144': '05011001'
'145': '05011002'
'146': '05011004'
'147': 05011008
'148': '05011010'
'149': '05011011'
'150': '05011012'
'151': '05012001'
'152': '05012002'
'153': '05012003'
'154': '05012004'
'155': '05012005'
'156': '05013001'
'157': '06012001'
'158': '06012002'
'159': '06012003'
'160': '06012011'
'161': '07011003'
'162': '07011004'
'163': '07012001'
'164': '07012002'
'165': '07012003'
'166': '07013001'
'167': '07013002'
'168': '07013003'
'169': '07013004'
'170': '07013005'
'171': '07013006'
'172': '07013007'
'173': 07013008
'174': 07013009
'175': '07013010'
'176': '07013011'
'177': 08011004
'178': 08011005
'179': 08011006
'180': 08011007
'181': 08011008
'182': 08012001
'183': 08012002
'184': 08012003
'185': 08012004
'186': 08012005
'187': 08012006
'188': 08012007
'189': 08012008
'190': 08012009
'191': 08012010
'192': 08013001
'193': 08013002
'194': 08013003
'195': 08013004
'196': 08013005
'197': 08013006
'198': 08014001
'199': 08014002
'200': 08014003
'201': 09012001
'202': 09012002
'203': 09013001
'204': 09013002
'205': 09014001
'206': 09014002
'207': 09014003
'208': 09014004
'209': 09015001
'210': 09015002
'211': 09015003
'212': 09016001
'213': '10011001'
'214': '10011002'
'215': '10011003'
'216': '10011004'
'217': '11011001'
'218': '11011002'
'219': '11011003'
'220': '11011004'
'221': '11011005'
'222': '11011006'
'223': '11011007'
'224': '11011008'
'225': '11011009'
'226': '11011010'
'227': '11011011'
'228': '11012001'
'229': '11012002'
'230': '11012003'
'231': '11012004'
'232': '11013001'
'233': '11013002'
'234': '11013003'
'235': '11013004'
'236': '11013005'
'237': '11013006'
'238': '11013007'
'239': '11013009'
'240': '11013010'
'241': '11013011'
'242': '11013012'
'243': '11014001'
'244': '11014002'
'245': '11014003'
'246': '11014004'
'247': '11014005'
'248': '11014006'
'249': '11014007'
'250': '11014008'
'251': '11014009'
'252': '11014010'
'253': '11015001'
'254': '11015002'
'255': '12011001'
'256': '12011002'
'257': '12011003'
'258': '12011004'
'259': '12011005'
'260': '12011006'
'261': '12011007'
'262': '12011008'
'263': '12011009'
'264': '12011010'
'265': '12011011'
'266': '12011012'
'267': '12011013'
'268': '12011014'
'269': '12011015'
'270': '13011001'
'271': '13011002'
'272': '13011003'
'273': '13011011'
'274': '13011012'
'275': '13012001'
'276': '13012002'
'277': '14011001'
'278': '14011002'
'279': '14011004'
'280': '14011005'
'281': '14012001'
'282': '14012002'
'283': '15011001'
'284': '15011002'
'285': '15011003'
'286': '15011004'
'287': '15011005'
'288': '15011006'
'289': '15011007'
'290': '15011008'
'291': '15011009'
'292': '15011010'
'293': '15011011'
'294': '15011012'
'295': '15011013'
'296': '15011014'
'297': '15011015'
'298': '15011016'
'299': '15011017'
'300': '16011001'
'301': '16011002'
'302': '16011003'
'303': '16011004'
'304': '16011005'
'305': '16011006'
splits:
- name: train
num_bytes: 14812723538.728
num_examples: 486839
- name: test
num_bytes: 33069619665.134
num_examples: 21178
- name: validation
num_bytes: 33770989851.48
num_examples: 21180
download_size: 82692432131
dataset_size: 81653333055.342
---
# Dataset Card for "aihub_food"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KShivendu/dbpedia-entities-openai-1M | KShivendu | "2024-02-19T08:24:43Z" | 11,129 | 20 | [
"task_categories:feature-extraction",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"feature-extraction"
] | "2023-06-20T22:29:43Z" | ---
license: mit
dataset_info:
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: openai
sequence: float32
splits:
- name: train
num_bytes: 12383152
num_examples: 1000000
download_size: 12383152
dataset_size: 1000000
language:
- en
task_categories:
- feature-extraction
pretty_name: OpenAI 1M with DBPedia Entities
size_categories:
- 1M<n<10M
---
1M OpenAI embeddings, 1536 dimensions each.

- Created: June 2023
- Text used for embedding: title (string) + text (string)
- Embedding model: text-embedding-ada-002
First used for the pgvector vs VectorDB (Qdrant) benchmark: https://nirantk.com/writing/pgvector-vs-qdrant/
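As a minimal, pure-Python sketch of what consumers typically do with the precomputed `openai` vectors (this is not part of the card itself; the 3-dimensional toy vectors below stand in for the real 1536-dimensional embeddings):

```python
# Sketch: rank rows by cosine similarity against a query embedding.
# The toy 3-d vectors stand in for the dataset's 1536-d `openai` field.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top_k(query, rows, k=2):
    """Indices of the k rows most similar to `query`."""
    ranked = sorted(range(len(rows)), key=lambda i: cosine(query, rows[i]), reverse=True)
    return ranked[:k]

rows = [[1.0, 0.0, 0.0], [0.7, 0.7, 0.0], [0.0, 1.0, 0.0]]
print(top_k([1.0, 0.1, 0.0], rows))  # → [0, 1]
```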
### Future work
We are planning to take this up to 10M (and possibly 100M) vectors. Contact [@KShivendu_](https://twitter.com/KShivendu_) on Twitter or email [email protected] if you want to help :)
### Credits:
This dataset was generated from the first 1M entries of https://huggingface.co/datasets/BeIR/dbpedia-entity |
HuggingFaceTB/smollm-corpus | HuggingFaceTB | "2024-09-06T07:04:57Z" | 11,124 | 315 | [
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-15T13:51:48Z" | ---
license: odc-by
dataset_info:
- config_name: cosmopedia-v2
features:
- name: prompt
dtype: string
- name: text
dtype: string
- name: token_length
dtype: int64
- name: audience
dtype: string
- name: format
dtype: string
- name: seed_data
dtype: string
splits:
- name: train
num_bytes: 212503640747
num_examples: 39134000
download_size: 122361137711
dataset_size: 212503640747
- config_name: fineweb-edu-dedup
features:
- name: text
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: dump
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 957570164451
num_examples: 190168005
download_size: 550069279849
dataset_size: 957570164451
- config_name: python-edu
features:
- name: blob_id
dtype: string
- name: repo_name
dtype: string
- name: path
dtype: string
- name: length_bytes
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 989334135
num_examples: 7678448
download_size: 643903049
dataset_size: 989334135
configs:
- config_name: cosmopedia-v2
data_files:
- split: train
path: cosmopedia-v2/train-*
- config_name: fineweb-edu-dedup
data_files:
- split: train
path: fineweb-edu-dedup/train-*
- config_name: python-edu
data_files:
- split: train
path: python-edu/train-*
language:
- en
---
# SmolLM-Corpus
This dataset is a curated collection of high-quality educational and synthetic data designed for training small language models.
You can find more details about the models trained on this dataset in our [SmolLM blog post](https://huggingface.co/blog/smollm).
# Dataset subsets
## Cosmopedia v2
Cosmopedia v2 is an enhanced version of Cosmopedia, the largest synthetic dataset for pre-training, consisting of over 39 million textbooks, blog posts, and stories generated by [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
Most of the samples are generated by prompting the model to generate content on specific topics using a web page referred to as a "seed sample," as shown in Figure 1. We use web samples to increase diversity and expand the range of prompts.
You can find more details in this [blog post](https://huggingface.co/blog/smollm).
### Dataset Features
* `prompt (string)`: The input prompt used to generate the text.
* `text (string)`: The generated text content.
* `token_length (int64)`: The length of the text in tokens (Mistral-7B tokenizer).
* `audience (string)`: The intended audience for the content.
* `format (string)`: The format of the content (e.g., textbook, story).
* `seed_data (string)`: The seed sample used to generate the text.
### Loading the dataset
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceTB/smollm-corpus", "cosmopedia-v2", split="train", num_proc=16)
print(ds[0])
```
## Python-Edu
The `python-edu` subset consists of Python files that were scored 4 or more by the [educational code model](https://huggingface.co/HuggingFaceTB/python-edu-scorer).
The files were extracted from the [`stack-v2-train`](https://huggingface.co/datasets/bigcode/the-stack-v2-train-full-ids) dataset.
### Dataset Features
* `blob_id (string)`: Software Heritage (SWH) ID of the file on AWS S3.
* `repo_name (string)`: Repository name on GitHub.
* `path (string)`: The file path within the repository.
* `length_bytes (int64)`: Length of the file content in UTF-8 bytes.
* `score (float32)`: The output of the educational scoring model.
* `int_score (uint8)`: The rounded educational score.
### Downloading the data
The file contents are downloaded from Software Heritage's S3 bucket to ensure data compliance.
Please refer to [the-stack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train-full-ids) for the data license.
When running on a 16-core AWS `us-east-1` instance, this script takes ~6 hours to download the files:
```python
import boto3
import gzip
from datasets import load_dataset
from botocore.exceptions import ClientError
num_proc = 16
s3 = boto3.client('s3')
bucket_name = "softwareheritage"
def download_contents(blob_id):
key = f"content/{blob_id}"
try:
obj = s3.get_object(Bucket=bucket_name, Key=key)
with gzip.GzipFile(fileobj=obj['Body']) as fin:
content = fin.read().decode("utf-8", errors="ignore")
return {"text": content, "download_success": True}
except ClientError as e:
if e.response['Error']['Code'] == 'NoSuchKey':
print(f"File not found: {key}")
return {"text": "", "download_success": False}
else:
raise
ds = load_dataset("HuggingFaceTB/smollm-corpus", "python-edu", split="train", num_proc=num_proc)
ds = ds.map(download_contents, input_columns="blob_id", num_proc=num_proc)
# Filter out failed downloads
ds = ds.filter(lambda x: x['download_success'])
# Optionally, print the first example to verify the data
print(ds[0])
```
## FineWeb-Edu (deduplicated)
FineWeb-Edu-Dedup is a deduplicated subset of the [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) dataset, containing 220 billion tokens of educational web pages.
The source dataset was filtered using an educational quality classifier to retain only the highest quality educational content.
For more information refer to the [FineWeb-v1 blog post](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1)
### Dataset Features
* `text (string)`: The web page's text content.
* `id (string)`: Unique ID of the web page.
* `metadata (struct)`: Metadata about the web page, including:
* `dump (string)`: The source CommonCrawl dump.
* `url (string)`: The URL of the web page.
* `date (timestamp[s])`: The date the web page was captured.
* `file_path (string)`: The file path of the commoncrawl snapshot.
* `language (string)`: The language of the web page.
* `language_score (float64)`: The language probability.
* `token_count (int64)`: The token count of the web page (gpt2 tokenizer).
* `score (float64)`: The educational quality score.
* `int_score (int64)`: The rounded educational quality score.
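As an illustration of how these fields can be used, the following sketch keeps only pages above an educational-quality threshold (the threshold of 3 and the helper name are assumptions for illustration, not values prescribed by this card):

```python
# Keep only pages whose rounded educational-quality score meets a threshold.
# The threshold of 3 is illustrative, not prescribed by the dataset card.
def keep_high_quality(example, threshold=3):
    return example["int_score"] >= threshold

# With the full dataset this would be applied as:
#   ds = ds.filter(keep_high_quality)
rows = [{"int_score": 2}, {"int_score": 4}, {"int_score": 5}]
kept = [r for r in rows if keep_high_quality(r)]
print(len(kept))  # 2
```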
### Loading the dataset
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceTB/smollm-corpus", "fineweb-edu-dedup", split="train", num_proc=16)
print(ds[0])
```
## Citation
```
@software{benallal2024smollmcorpus,
author = {Ben Allal, Loubna and Lozhkov, Anton and Penedo, Guilherme and Wolf, Thomas and von Werra, Leandro},
title = {SmolLM-Corpus},
month = July,
year = 2024,
url = {https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus}
}
``` |
pppppppppp2/planeperturbed | pppppppppp2 | "2023-10-13T11:12:52Z" | 11,110 | 1 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-06-08T19:52:28Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 647755473.5
num_examples: 5500
download_size: 622143522
dataset_size: 647755473.5
---
# Dataset Card for "planeperturbed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tiange/Cap3D | tiange | "2025-03-21T22:43:21Z" | 11,106 | 104 | [
"task_categories:text-to-3d",
"task_categories:image-to-3d",
"license:odc-by",
"arxiv:2306.07279",
"arxiv:2404.07984",
"arxiv:2212.08051",
"arxiv:2307.05663",
"arxiv:2110.06199",
"arxiv:1512.03012",
"region:us"
] | [
"text-to-3d",
"image-to-3d"
] | "2023-05-28T18:31:58Z" | ---
license: odc-by
viewer: false
task_categories:
- text-to-3d
- image-to-3d
---
## Dataset Description
- **Paper:** [Scalable 3D Captioning with Pretrained Models](https://arxiv.org/abs/2306.07279)
- **Paper:** [View Selection for 3D Captioning via Diffusion Ranking](https://arxiv.org/abs/2404.07984)
- **Repository**: [Github_Cap3D](https://github.com/crockwell/Cap3D)
- **Repository**: [Github_DiffuRank](https://github.com/tiangeluo/DiffuRank)
- **Project**: [Project](https://cap3d-um.github.io/)
This repository hosts data for [Scalable 3D Captioning with Pretrained Models](https://cap3d-um.github.io/) and [View Selection for 3D Captioning via Diffusion Ranking](http://arxiv.org/abs/2404.07984), including descriptive **captions** for 3D objects in [Objaverse](https://arxiv.org/abs/2212.08051), [Objaverse-XL](https://arxiv.org/pdf/2307.05663.pdf), [ABO](https://arxiv.org/abs/2110.06199), and [ShapeNet](https://arxiv.org/abs/1512.03012). This repo also includes **point clouds** and **rendered images with camera, depth, and MatAlpha information** of Objaverse objects, as well as their Shap-E latent codes. All the captions and data provided by our papers are released under ODC-By 1.0 license.
## Very important license & data remove information
Please ensure compliance with the licenses specified for each object in the Objaverse annotations. Note that certain objects are not approved for commercial use.
If you are the creator of an asset and would like your 3D model’s information removed from the Cap3D-DiffuRank dataset, please contact [Tiange](mailto:[email protected]) for assistance. We sincerely thank all contributors—your efforts are instrumental in advancing the 3D vision community. This dataset repository is a humble addition, built upon the foundation of your contributions and shared work.
## Usage
Please download and unzip files from the [**Page**](https://huggingface.co/datasets/tiange/Cap3D/tree/main) according to your usage. Below is a table describing each file, followed by example Python scripts for data loading.
| Filename | Description |
| -------------------------------------- | ------------------------------------------------------------ |
| **Cap3D_automated_Objaverse_full.csv** | By integrating text descriptions initially generated by [**Cap3D**](https://arxiv.org/abs/2306.07279) and refined by [**DiffuRank**](https://arxiv.org/abs/2404.07984), we produced **1,816,350** 3D-caption pairs for Objaverse objects. <br>- **785,150** for [**Objaverse**](https://arxiv.org/abs/2212.08051); <br>- the remainder for [**Objaverse-XL**](https://arxiv.org/pdf/2307.05663.pdf), primarily from the high-quality subset described in **Section 4.1 (Alignment Finetuning)** of the [Objaverse-XL paper](https://proceedings.neurips.cc/paper_files/paper/2023/file/70364304877b5e767de4e9a2a511be0c-Paper-Datasets_and_Benchmarks.pdf), retrieved via `alignment_annotations = oxl.get_alignment_annotations()`; <br>- identifiers of length **32 characters** are Objaverse 1.0 **UIDs** (`import objaverse; uids = objaverse.load_uids()`), while those with **64 characters** are **SHA256 hashes** from Objaverse-XL. |
| Cap3D_automated_**ABO**.csv | Our captions generated by [Cap3D](https://arxiv.org/abs/2306.07279) and [DiffuRank](https://arxiv.org/abs/2404.07984) for the [ABO dataset](https://arxiv.org/abs/2110.06199), including both general and compositional descriptions. |
| Cap3D_automated_**ShapeNet**.csv | Our captions generated by [Cap3D](https://arxiv.org/abs/2306.07279) and [DiffuRank](https://arxiv.org/abs/2404.07984) for the [ShapeNet dataset](https://arxiv.org/abs/1512.03012). |
| **PointCloud_zips** | Provided by [Cap3D](https://arxiv.org/abs/2306.07279) and [DiffuRank](https://arxiv.org/abs/2404.07984), **1,006,782** PointClouds (16,384 colorful points) extracted from Objaverse objects. Saved as `.ply` file. `compressed_pcs_{00~09}.zip` are for Objaverse objects and `compressed_pcs_{>=10}.zip` for Objaverse-XL objects. |
| PointCloud_zips_**ABO** | Provided by [Cap3D](https://arxiv.org/abs/2306.07279) and [DiffuRank](https://arxiv.org/abs/2404.07984), **7,953** PointClouds (16,384 colorful points) extracted from ABO objects. Saved as `.ply` file. |
| PointCloud_zips_**ShapeNet** | Provided by [Cap3D](https://arxiv.org/abs/2306.07279) and [DiffuRank](https://arxiv.org/abs/2404.07984), **52,472** PointClouds (16,384 colorful points) extracted from ShapeNet objects. Saved as `.ply` file. |
| **RenderedImage_perobj_zips** | Provided by [DiffuRank](https://arxiv.org/abs/2404.07984), rendered images for **1,006,782** Objaverse objects. Unzipping `compressed_imgs_perobj_xx.zip` yields multiple zip files, each consisting of **20** rendered images along with camera details (intrinsic & extrinsic), depth data, and masks ([one example](https://huggingface.co/datasets/tiange/Cap3D/tree/main/RenderedImage_perobj_zips/example_zipfile)). Please specify the unzip path, e.g., `unzip ed51a51909ee46c780db3a85e821feb2.zip -d ed51a51909ee46c780db3a85e821feb2`. `compressed_imgs_perobj_{00~52}.zip` are for Objaverse objects and `compressed_imgs_perobj_{>=53}.zip` for Objaverse-XL objects. **More information is available [here](https://huggingface.co/datasets/tiange/Cap3D/blob/main/RenderedImage_perobj_zips/README.md).** |
| RenderedImage_perobj_zips_**ABO** | Provided by [DiffuRank](https://arxiv.org/abs/2404.07984), Rendered images for **7,953** ABO objects. Details similar to the above. |
| RenderedImage_perobj_zips_**ShapeNet** | Provided by [DiffuRank](https://arxiv.org/abs/2404.07984), Rendered images for **52,472** ShapeNet objects. Similar to the above but with 8 rendered images. |
| misc | Miscellaneous files, including human-authored captions, finetuned models, Objaverse point clouds stored as `.pt` files, Shap-E latent codes, and more. Please refer to this [README](https://huggingface.co/datasets/tiange/Cap3D/blob/main/misc/README.md) |
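As noted above, identifier length distinguishes the two sources in `Cap3D_automated_Objaverse_full.csv`. A small helper sketch (the function name is ours, not part of the released tooling):

```python
# Identifiers of length 32 are Objaverse 1.0 UIDs; identifiers of length 64
# are Objaverse-XL SHA256 hashes (see the table above).
def id_source(identifier: str) -> str:
    if len(identifier) == 32:
        return "objaverse-1.0"
    if len(identifier) == 64:
        return "objaverse-xl"
    raise ValueError(f"unexpected identifier length: {len(identifier)}")

print(id_source("ed51a51909ee46c780db3a85e821feb2"))  # objaverse-1.0
```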
``` python
# load our captions
import pandas as pd
captions = pd.read_csv('Cap3D_automated_Objaverse_full.csv', header=None)
## captions:
## 0 1
## 0 ed51a51909ee46c780db3a85e821feb2 Matte green rifle with a long barrel, stock, a...
## 1 9110b606f6c547b2980fcb3c8c4b6a1c Rustic single-story building with a weathered ...
## 2 80d9caaa1fa04502af666135196456e1 a pair of purple and black swords with white h...
## 3 28d43a218cd8466a8c1f82b29b71e314 3D model of a cluttered outdoor scene with veg...
## 4 75582285fab442a2ba31733f9c8fae66 Floating terrain piece with grassy landscape a...
## ... ... ...
## 1002417 3623e74f34c1c3c523af6b2bb8ffcbe2d2dce897ef61b9... Abstract 3D composition with human figures and...
## 1002418 64e9f7b7a1fc4c4ec56ed8b5917dfd610930043ac5e15f... 3D object with a rough, irregular pink surface...
## 1002419 fcd089d6a237fee21dfd5f0d6d9b74b2fd1150cdc61c7f... Bright pink abstract 3D model of a building wi...
## 1002420 f812dc980050f2d5f4b37df2a8620372f810dd6456a5f2... Monochromatic gray 3D model of a stylized huma...
## 1002421 77c09500b4d8e4b881e1ce6929d56c23658b87173c0996... Modular futuristic spacecraft with red and ora...
## if u want to obtain the caption for specific UID
caption = captions[captions[0] == '80d9caaa1fa04502af666135196456e1'][1].values[0]
# load point clouds (unzip https://huggingface.co/datasets/tiange/Cap3D/tree/main/PointCloud_pt_zips)
import torch
pts = torch.load('Cap3D_pcs_pt/80d9caaa1fa04502af666135196456e1.pt')
## pts.shape == torch.Size([6, 16384])
```
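The point clouds listed above are saved as `.ply` files. As a rough sketch, here is a minimal reader for the ASCII variant of the format; the released files may well be binary, in which case a library such as `open3d` or `plyfile` is the practical choice (this helper is our illustration, not project tooling):

```python
import numpy as np

def read_ascii_ply(path):
    """Minimal reader for an ASCII .ply point cloud (vertices only)."""
    with open(path) as f:
        assert f.readline().strip() == "ply", "not a PLY file"
        n_points, props = 0, []
        while True:
            line = f.readline().strip()
            if line.startswith("element vertex"):
                n_points = int(line.split()[-1])
            elif line.startswith("property"):
                props.append(line.split()[-1])
            elif line == "end_header":
                break
        # One row per point; columns follow the declared property order.
        points = np.loadtxt(f, max_rows=n_points, ndmin=2)
    return props, points

# Usage sketch:
#   props, pts = read_ascii_ply("ed51a51909ee46c780db3a85e821feb2.ply")
```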
## Citation Information
<details>
<summary>Please cite the Objaverse, ABO, and ShapeNet papers accordingly, if you use the related data. </summary>
```
@inproceedings{deitke2023objaverse,
title={Objaverse: A universe of annotated 3d objects},
author={Deitke, Matt and Schwenk, Dustin and Salvador, Jordi and Weihs, Luca and Michel, Oscar and VanderBilt, Eli and Schmidt, Ludwig and Ehsani, Kiana and Kembhavi, Aniruddha and Farhadi, Ali},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13142--13153},
year={2023}
}
@article{deitke2024objaverse,
title={Objaverse-xl: A universe of 10m+ 3d objects},
author={Deitke, Matt and Liu, Ruoshi and Wallingford, Matthew and Ngo, Huong and Michel, Oscar and Kusupati, Aditya and Fan, Alan and Laforte, Christian and Voleti, Vikram and Gadre, Samir Yitzhak and others},
journal={Advances in Neural Information Processing Systems},
volume={36},
year={2024}
}
@inproceedings{collins2022abo,
title={Abo: Dataset and benchmarks for real-world 3d object understanding},
author={Collins, Jasmine and Goel, Shubham and Deng, Kenan and Luthra, Achleshwar and Xu, Leon and Gundogdu, Erhan and Zhang, Xi and Vicente, Tomas F Yago and Dideriksen, Thomas and Arora, Himanshu and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={21126--21136},
year={2022}
}
@article{chang2015shapenet,
title={Shapenet: An information-rich 3d model repository},
author={Chang, Angel X and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and others},
journal={arXiv preprint arXiv:1512.03012},
year={2015}
}
```
</details>
If you find our data or code useful, please consider citing:
```bibtex
@article{luo2023scalable,
title={Scalable 3D Captioning with Pretrained Models},
author={Luo, Tiange and Rockwell, Chris and Lee, Honglak and Johnson, Justin},
journal={arXiv preprint arXiv:2306.07279},
year={2023}
}
@article{luo2024view,
title={View Selection for 3D Captioning via Diffusion Ranking},
author={Luo, Tiange and Johnson, Justin and Lee, Honglak},
journal={arXiv preprint arXiv:2404.07984},
year={2024}
}
```
|
bit0/x_dataset_12 | bit0 | "2025-03-26T00:06:44Z" | 11,106 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2025-01-23T08:21:19Z" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** bit0/x_dataset_12
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5Dvth5w7eXuZNmQUXn7tn5Hr5tgUeYHYqftPHSkJbt16Daqq
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: English. Because the data is collected in a decentralized manner, the dataset may also contain content in other languages.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
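A minimal sketch of such a timestamp-based split (the cutoff date is an illustrative assumption, and the `datetime` field is assumed to be an ISO-8601 string, so lexicographic comparison is chronologically valid):

```python
# Sketch of a timestamp-based split; the cutoff is illustrative.
CUTOFF = "2025-03-01"

def is_train(example):
    # ISO-8601 date strings compare correctly as plain strings.
    return example["datetime"] < CUTOFF

# With the real dataset this would look like:
#   from datasets import load_dataset
#   ds = load_dataset("bit0/x_dataset_12", split="train")
#   train_ds = ds.filter(is_train)
#   eval_ds = ds.filter(lambda x: not is_train(x))

sample = [
    {"text": "tweet a", "datetime": "2025-01-15T10:00:00Z"},
    {"text": "tweet b", "datetime": "2025-03-10T08:30:00Z"},
]
train = [r for r in sample if is_train(r)]
print(len(train))  # 1
```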
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. Use of this dataset is also subject to the X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{bit02025datauniversex_dataset_12,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={bit0},
year={2025},
url={https://huggingface.co/datasets/bit0/x_dataset_12},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 508543658
- **Date Range:** 2025-01-12T00:00:00Z to 2025-03-19T00:00:00Z
- **Last Updated:** 2025-03-26T00:06:43Z
### Data Distribution
- Tweets with hashtags: 0.00%
- Tweets without hashtags: 100.00%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 508543658 | 100.00% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T01:35:08Z | 218850 | 218850 |
| 2025-01-27T02:07:31Z | 226831 | 445681 |
| 2025-01-27T03:07:31Z | 224919 | 670600 |
| 2025-01-27T04:07:29Z | 206544 | 877144 |
| 2025-01-27T05:07:27Z | 192521 | 1069665 |
| 2025-01-27T06:07:28Z | 195281 | 1264946 |
| 2025-01-27T07:07:31Z | 201371 | 1466317 |
| 2025-01-27T08:07:29Z | 218640 | 1684957 |
| 2025-01-27T09:07:33Z | 237412 | 1922369 |
| 2025-01-27T10:07:34Z | 245574 | 2167943 |
| 2025-01-27T11:07:35Z | 263340 | 2431283 |
| 2025-01-27T12:07:37Z | 286394 | 2717677 |
| 2025-01-27T13:07:38Z | 302893 | 3020570 |
| 2025-01-27T14:07:43Z | 309028 | 3329598 |
| 2025-01-27T15:07:41Z | 305393 | 3634991 |
| 2025-01-27T16:07:39Z | 297399 | 3932390 |
| 2025-01-27T17:07:40Z | 280906 | 4213296 |
| 2025-01-27T18:07:35Z | 257898 | 4471194 |
| 2025-01-27T19:07:37Z | 285004 | 4756198 |
| 2025-01-27T20:07:37Z | 273457 | 5029655 |
| 2025-01-27T21:07:34Z | 257777 | 5287432 |
| 2025-01-27T22:07:30Z | 216721 | 5504153 |
| 2025-01-27T23:07:32Z | 224776 | 5728929 |
| 2025-01-28T00:07:35Z | 234338 | 5963267 |
| 2025-01-28T01:07:33Z | 232653 | 6195920 |
| 2025-01-28T02:07:33Z | 234256 | 6430176 |
| 2025-01-28T03:07:35Z | 250492 | 6680668 |
| 2025-01-28T04:07:35Z | 236093 | 6916761 |
| 2025-01-28T05:07:33Z | 207700 | 7124461 |
| 2025-01-28T06:07:35Z | 222655 | 7347116 |
| 2025-01-28T07:07:37Z | 252145 | 7599261 |
| 2025-01-28T08:07:35Z | 251687 | 7850948 |
| 2025-01-28T09:07:38Z | 269138 | 8120086 |
| 2025-01-28T10:07:46Z | 286119 | 8406205 |
| 2025-01-28T11:07:47Z | 320438 | 8726643 |
| 2025-01-28T12:07:57Z | 415958 | 9142601 |
| 2025-01-28T13:07:50Z | 380518 | 9523119 |
| 2025-01-28T14:07:54Z | 366668 | 9889787 |
| 2025-01-28T15:07:49Z | 346973 | 10236760 |
| 2025-01-28T16:07:42Z | 300370 | 10537130 |
| 2025-01-28T17:07:40Z | 280207 | 10817337 |
| 2025-01-28T18:07:40Z | 260183 | 11077520 |
| 2025-01-28T19:07:39Z | 250737 | 11328257 |
| 2025-01-28T20:07:41Z | 241828 | 11570085 |
| 2025-01-28T21:07:38Z | 247788 | 11817873 |
| 2025-01-28T22:07:42Z | 257844 | 12075717 |
| 2025-01-28T23:07:39Z | 255402 | 12331119 |
| 2025-01-29T00:07:39Z | 241459 | 12572578 |
| 2025-01-29T01:07:40Z | 266312 | 12838890 |
| 2025-01-29T02:07:44Z | 288357 | 13127247 |
| 2025-01-29T03:07:44Z | 298915 | 13426162 |
| 2025-01-29T04:07:40Z | 247961 | 13674123 |
| 2025-01-29T05:07:36Z | 218011 | 13892134 |
| 2025-01-29T06:07:39Z | 219915 | 14112049 |
| 2025-01-29T07:07:39Z | 231124 | 14343173 |
| 2025-01-29T08:07:41Z | 256642 | 14599815 |
| 2025-01-29T09:07:44Z | 299274 | 14899089 |
| 2025-01-29T10:07:55Z | 331518 | 15230607 |
| 2025-01-29T11:07:53Z | 363627 | 15594234 |
| 2025-01-29T12:07:57Z | 403168 | 15997402 |
| 2025-01-29T13:07:59Z | 417519 | 16414921 |
| 2025-01-29T14:08:01Z | 406575 | 16821496 |
| 2025-01-29T15:07:59Z | 386030 | 17207526 |
| 2025-01-29T16:07:50Z | 336405 | 17543931 |
| 2025-01-29T17:07:45Z | 308792 | 17852723 |
| 2025-01-29T18:07:47Z | 287284 | 18140007 |
| 2025-01-29T19:07:58Z | 282168 | 18422175 |
| 2025-01-29T20:07:56Z | 299463 | 18721638 |
| 2025-01-29T21:07:49Z | 315694 | 19037332 |
| 2025-01-29T22:07:47Z | 295974 | 19333306 |
| 2025-01-29T23:07:46Z | 279817 | 19613123 |
| 2025-01-30T00:07:46Z | 272179 | 19885302 |
| 2025-01-30T02:12:03Z | 298659 | 20183961 |
| 2025-01-30T03:08:37Z | 320987 | 20504948 |
| 2025-01-30T04:07:47Z | 256708 | 20761656 |
| 2025-01-30T06:15:27Z | 231558 | 20993214 |
| 2025-01-30T07:07:45Z | 242181 | 21235395 |
| 2025-01-30T08:07:49Z | 278307 | 21513702 |
| 2025-01-30T09:07:53Z | 325600 | 21839302 |
| 2025-01-30T10:07:57Z | 350754 | 22190056 |
| 2025-01-30T11:08:00Z | 388434 | 22578490 |
| 2025-01-30T12:08:05Z | 429146 | 23007636 |
| 2025-01-30T13:08:09Z | 444487 | 23452123 |
| 2025-01-30T14:08:13Z | 442123 | 23894246 |
| 2025-01-30T15:08:07Z | 426613 | 24320859 |
| 2025-01-30T16:08:00Z | 367970 | 24688829 |
| 2025-01-30T17:07:54Z | 350907 | 25039736 |
| 2025-01-30T18:07:56Z | 335383 | 25375119 |
| 2025-01-30T19:08:01Z | 329010 | 25704129 |
| 2025-01-30T20:08:00Z | 357588 | 26061717 |
| 2025-01-30T21:07:58Z | 355122 | 26416839 |
| 2025-01-30T22:07:57Z | 336850 | 26753689 |
| 2025-01-30T23:07:56Z | 313904 | 27067593 |
| 2025-01-31T00:07:53Z | 301269 | 27368862 |
| 2025-01-31T01:07:56Z | 312218 | 27681080 |
| 2025-01-31T02:07:57Z | 320280 | 28001360 |
| 2025-01-31T03:07:58Z | 357646 | 28359006 |
| 2025-01-31T04:07:52Z | 284685 | 28643691 |
| 2025-01-31T05:07:53Z | 257225 | 28900916 |
| 2025-01-31T06:07:51Z | 263323 | 29164239 |
| 2025-01-31T07:07:51Z | 274071 | 29438310 |
| 2025-01-31T09:10:35Z | 364546 | 29802856 |
| 2025-01-31T10:08:05Z | 394162 | 30197018 |
| 2025-01-31T11:08:10Z | 441922 | 30638940 |
| 2025-01-31T12:08:16Z | 479358 | 31118298 |
| 2025-01-31T13:08:24Z | 572691 | 31690989 |
| 2025-01-31T14:08:21Z | 527359 | 32218348 |
| 2025-01-31T15:08:17Z | 496741 | 32715089 |
| 2025-01-31T16:08:16Z | 435876 | 33150965 |
| 2025-01-31T17:08:06Z | 395952 | 33546917 |
| 2025-01-31T18:08:04Z | 381873 | 33928790 |
| 2025-01-31T19:08:05Z | 354953 | 34283743 |
| 2025-01-31T20:08:04Z | 354559 | 34638302 |
| 2025-01-31T21:08:08Z | 409148 | 35047450 |
| 2025-01-31T22:08:13Z | 411450 | 35458900 |
| 2025-01-31T23:08:07Z | 385851 | 35844751 |
| 2025-02-01T00:08:04Z | 351469 | 36196220 |
| 2025-02-01T01:08:04Z | 351621 | 36547841 |
| 2025-02-01T02:08:07Z | 363893 | 36911734 |
| 2025-02-01T03:08:11Z | 413780 | 37325514 |
| 2025-02-01T04:08:01Z | 326925 | 37652439 |
| 2025-02-01T05:07:57Z | 296926 | 37949365 |
| 2025-02-01T06:08:05Z | 298546 | 38247911 |
| 2025-02-01T07:07:59Z | 308830 | 38556741 |
| 2025-02-01T08:08:07Z | 363353 | 38920094 |
| 2025-02-01T09:08:15Z | 435801 | 39355895 |
| 2025-02-01T10:08:15Z | 456645 | 39812540 |
| 2025-02-01T11:08:23Z | 497955 | 40310495 |
| 2025-02-01T12:08:26Z | 541057 | 40851552 |
| 2025-02-01T13:08:34Z | 564057 | 41415609 |
| 2025-02-01T14:08:31Z | 566274 | 41981883 |
| 2025-02-01T15:08:27Z | 543607 | 42525490 |
| 2025-02-01T16:08:21Z | 464817 | 42990307 |
| 2025-02-01T17:08:14Z | 424890 | 43415197 |
| 2025-02-01T18:08:12Z | 391021 | 43806218 |
| 2025-02-01T19:08:11Z | 375440 | 44181658 |
| 2025-02-01T20:08:09Z | 360561 | 44542219 |
| 2025-02-01T21:08:07Z | 362713 | 44904932 |
| 2025-02-01T22:08:13Z | 367056 | 45271988 |
| 2025-02-01T23:08:13Z | 399005 | 45670993 |
| 2025-02-02T00:08:15Z | 399651 | 46070644 |
| 2025-02-02T01:08:14Z | 414756 | 46485400 |
| 2025-02-02T02:08:19Z | 434498 | 46919898 |
| 2025-02-02T03:08:24Z | 443775 | 47363673 |
| 2025-02-02T04:08:16Z | 401808 | 47765481 |
| 2025-02-02T05:08:12Z | 375225 | 48140706 |
| 2025-02-02T06:08:13Z | 370862 | 48511568 |
| 2025-02-02T07:08:13Z | 377693 | 48889261 |
| 2025-02-02T08:08:14Z | 403451 | 49292712 |
| 2025-02-02T09:08:20Z | 470071 | 49762783 |
| 2025-02-02T10:08:25Z | 485245 | 50248028 |
| 2025-02-02T11:08:30Z | 526662 | 50774690 |
| 2025-02-02T12:08:33Z | 570050 | 51344740 |
| 2025-02-02T13:08:37Z | 590534 | 51935274 |
| 2025-02-02T14:08:36Z | 597635 | 52532909 |
| 2025-02-02T15:08:35Z | 565447 | 53098356 |
| 2025-02-02T16:08:33Z | 544912 | 53643268 |
| 2025-02-02T17:08:25Z | 479512 | 54122780 |
| 2025-02-02T18:08:24Z | 449875 | 54572655 |
| 2025-02-02T19:08:19Z | 443346 | 55016001 |
| 2025-02-02T20:08:18Z | 411496 | 55427497 |
| 2025-02-02T21:08:15Z | 426350 | 55853847 |
| 2025-02-02T22:08:25Z | 437389 | 56291236 |
| 2025-02-02T23:08:24Z | 486345 | 56777581 |
| 2025-02-03T00:08:20Z | 475541 | 57253122 |
| 2025-02-03T01:08:25Z | 473938 | 57727060 |
| 2025-02-03T02:08:31Z | 556252 | 58283312 |
| 2025-02-03T03:08:35Z | 646000 | 58929312 |
| 2025-02-03T05:08:26Z | 500341 | 59429653 |
| 2025-02-03T06:08:25Z | 505308 | 59934961 |
| 2025-02-03T07:08:24Z | 489386 | 60424347 |
| 2025-02-03T08:08:24Z | 482909 | 60907256 |
| 2025-02-03T09:08:25Z | 533234 | 61440490 |
| 2025-02-03T10:08:33Z | 559694 | 62000184 |
| 2025-02-03T11:08:37Z | 616519 | 62616703 |
| 2025-02-03T12:08:38Z | 669725 | 63286428 |
| 2025-02-03T13:08:57Z | 704452 | 63990880 |
| 2025-02-03T14:08:53Z | 736990 | 64727870 |
| 2025-02-03T15:09:03Z | 758339 | 65486209 |
| 2025-02-03T16:08:51Z | 644437 | 66130646 |
| 2025-02-03T17:08:33Z | 579511 | 66710157 |
| 2025-02-03T18:08:38Z | 532414 | 67242571 |
| 2025-02-03T19:08:38Z | 484265 | 67726836 |
| 2025-02-03T20:08:33Z | 482760 | 68209596 |
| 2025-02-03T21:08:33Z | 491318 | 68700914 |
| 2025-02-03T22:08:41Z | 564560 | 69265474 |
| 2025-02-03T23:08:43Z | 566362 | 69831836 |
| 2025-02-04T00:08:36Z | 508721 | 70340557 |
| 2025-02-04T01:08:39Z | 491392 | 70831949 |
| 2025-02-04T02:08:48Z | 601097 | 71433046 |
| 2025-02-04T03:08:46Z | 584260 | 72017306 |
| 2025-02-04T04:08:31Z | 458509 | 72475815 |
| 2025-02-04T05:08:24Z | 408867 | 72884682 |
| 2025-02-04T06:08:23Z | 419954 | 73304636 |
| 2025-02-04T07:08:24Z | 434826 | 73739462 |
| 2025-02-04T08:08:30Z | 495293 | 74234755 |
| 2025-02-04T09:08:41Z | 600626 | 74835381 |
| 2025-02-04T10:08:44Z | 651884 | 75487265 |
| 2025-02-04T11:08:52Z | 758058 | 76245323 |
| 2025-02-04T12:09:14Z | 798886 | 77044209 |
| 2025-02-04T13:09:11Z | 823995 | 77868204 |
| 2025-02-04T14:09:27Z | 816446 | 78684650 |
| 2025-02-04T15:09:12Z | 776672 | 79461322 |
| 2025-02-04T16:09:00Z | 699108 | 80160430 |
| 2025-02-04T17:08:55Z | 751998 | 80912428 |
| 2025-02-04T18:08:44Z | 633596 | 81546024 |
| 2025-02-04T19:08:52Z | 579609 | 82125633 |
| 2025-02-04T20:08:44Z | 605174 | 82730807 |
| 2025-02-04T21:08:46Z | 618912 | 83349719 |
| 2025-02-04T22:08:46Z | 628897 | 83978616 |
| 2025-02-04T23:08:45Z | 610806 | 84589422 |
| 2025-02-05T00:08:42Z | 587450 | 85176872 |
| 2025-02-05T01:08:45Z | 618951 | 85795823 |
| 2025-02-05T02:08:48Z | 647202 | 86443025 |
| 2025-02-05T03:08:54Z | 718314 | 87161339 |
| 2025-02-05T04:08:43Z | 580951 | 87742290 |
| 2025-02-05T05:08:40Z | 493178 | 88235468 |
| 2025-02-05T06:08:37Z | 495415 | 88730883 |
| 2025-02-05T07:08:38Z | 509538 | 89240421 |
| 2025-02-05T08:08:50Z | 579949 | 89820370 |
| 2025-02-05T09:08:54Z | 670206 | 90490576 |
| 2025-02-05T10:09:14Z | 742394 | 91232970 |
| 2025-02-05T11:09:10Z | 839404 | 92072374 |
| 2025-02-05T12:09:23Z | 915036 | 92987410 |
| 2025-02-05T13:09:39Z | 963172 | 93950582 |
| 2025-02-05T14:09:36Z | 950408 | 94900990 |
| 2025-02-05T15:09:33Z | 913278 | 95814268 |
| 2025-02-05T16:09:21Z | 808441 | 96622709 |
| 2025-02-05T17:09:00Z | 748842 | 97371551 |
| 2025-02-05T18:08:56Z | 683670 | 98055221 |
| 2025-02-05T19:08:56Z | 662736 | 98717957 |
| 2025-02-05T20:09:04Z | 710725 | 99428682 |
| 2025-02-05T21:09:08Z | 784648 | 100213330 |
| 2025-02-05T22:09:14Z | 809270 | 101022600 |
| 2025-02-05T23:09:04Z | 723175 | 101745775 |
| 2025-02-06T00:09:00Z | 678653 | 102424428 |
| 2025-02-06T01:08:59Z | 683097 | 103107525 |
| 2025-02-06T02:09:05Z | 690163 | 103797688 |
| 2025-02-06T03:09:09Z | 791985 | 104589673 |
| 2025-02-06T04:08:57Z | 642023 | 105231696 |
| 2025-02-06T05:08:51Z | 577155 | 105808851 |
| 2025-02-06T06:08:53Z | 578709 | 106387560 |
| 2025-02-06T07:08:55Z | 587000 | 106974560 |
| 2025-02-06T08:09:02Z | 671850 | 107646410 |
| 2025-02-06T09:09:14Z | 801535 | 108447945 |
| 2025-02-06T10:09:20Z | 873663 | 109321608 |
| 2025-02-06T11:10:48Z | 973316 | 110294924 |
| 2025-02-06T12:09:42Z | 1058349 | 111353273 |
| 2025-02-06T13:10:05Z | 1152051 | 112505324 |
| 2025-02-06T14:10:06Z | 1120779 | 113626103 |
| 2025-02-06T15:10:04Z | 1070987 | 114697090 |
| 2025-02-06T16:09:33Z | 945552 | 115642642 |
| 2025-02-06T17:09:25Z | 897721 | 116540363 |
| 2025-02-06T18:09:22Z | 830067 | 117370430 |
| 2025-02-06T19:09:23Z | 787021 | 118157451 |
| 2025-02-06T20:09:26Z | 839810 | 118997261 |
| 2025-02-06T21:09:35Z | 939451 | 119936712 |
| 2025-02-06T22:09:35Z | 901278 | 120837990 |
| 2025-02-06T23:09:26Z | 841908 | 121679898 |
| 2025-02-07T00:09:24Z | 799728 | 122479626 |
| 2025-02-07T01:09:20Z | 813764 | 123293390 |
| 2025-02-07T02:09:27Z | 841291 | 124134681 |
| 2025-02-07T03:09:48Z | 1009671 | 125144352 |
| 2025-02-07T04:09:26Z | 796079 | 125940431 |
| 2025-02-07T05:09:23Z | 719990 | 126660421 |
| 2025-02-07T06:09:20Z | 718662 | 127379083 |
| 2025-02-07T07:09:21Z | 729766 | 128108849 |
| 2025-02-07T08:09:27Z | 815309 | 128924158 |
| 2025-02-07T09:09:38Z | 989900 | 129914058 |
| 2025-02-07T10:09:56Z | 1101573 | 131015631 |
| 2025-02-07T11:10:02Z | 1195608 | 132211239 |
| 2025-02-07T12:10:13Z | 1289038 | 133500277 |
| 2025-02-07T13:10:48Z | 1507083 | 135007360 |
| 2025-02-07T14:10:48Z | 1486003 | 136493363 |
| 2025-02-07T15:10:36Z | 1338560 | 137831923 |
| 2025-02-07T16:10:08Z | 1180172 | 139012095 |
| 2025-02-07T17:09:57Z | 1072748 | 140084843 |
| 2025-02-07T18:09:56Z | 1019875 | 141104718 |
| 2025-02-07T19:09:55Z | 1013296 | 142118014 |
| 2025-02-07T20:09:56Z | 1039787 | 143157801 |
| 2025-02-07T21:10:05Z | 1099742 | 144257543 |
| 2025-02-07T22:10:10Z | 1117334 | 145374877 |
| 2025-02-07T23:10:01Z | 1120534 | 146495411 |
| 2025-02-08T00:09:58Z | 1077674 | 147573085 |
| 2025-02-08T01:10:07Z | 1106404 | 148679489 |
| 2025-02-08T02:10:10Z | 1095800 | 149775289 |
| 2025-02-08T03:10:18Z | 1264929 | 151040218 |
| 2025-02-08T04:10:01Z | 1046777 | 152086995 |
| 2025-02-08T05:09:56Z | 975774 | 153062769 |
| 2025-02-08T06:09:58Z | 951207 | 154013976 |
| 2025-02-08T07:09:56Z | 954618 | 154968594 |
| 2025-02-08T08:10:02Z | 1076654 | 156045248 |
| 2025-02-08T09:10:23Z | 1290055 | 157335303 |
| 2025-02-08T10:10:33Z | 1390494 | 158725797 |
| 2025-02-08T11:10:52Z | 1565805 | 160291602 |
| 2025-02-08T12:10:59Z | 1747975 | 162039577 |
| 2025-02-08T13:11:16Z | 1700062 | 163739639 |
| 2025-02-08T14:11:12Z | 1687625 | 165427264 |
| 2025-02-08T15:11:20Z | 1617049 | 167044313 |
| 2025-02-08T16:10:39Z | 1396606 | 168440919 |
| 2025-02-08T17:10:25Z | 1249741 | 169690660 |
| 2025-02-08T18:10:15Z | 1129528 | 170820188 |
| 2025-02-08T19:10:18Z | 1047986 | 171868174 |
| 2025-02-08T20:10:14Z | 1015860 | 172884034 |
| 2025-02-08T21:10:15Z | 1030266 | 173914300 |
| 2025-02-08T22:10:18Z | 1065283 | 174979583 |
| 2025-02-08T23:10:13Z | 1108784 | 176088367 |
| 2025-02-09T00:10:19Z | 1130510 | 177218877 |
| 2025-02-09T01:10:26Z | 1187721 | 178406598 |
| 2025-02-09T02:10:31Z | 1247724 | 179654322 |
| 2025-02-09T03:10:42Z | 1276674 | 180930996 |
| 2025-02-09T04:10:28Z | 1163136 | 182094132 |
| 2025-02-09T05:10:29Z | 1095446 | 183189578 |
| 2025-02-09T06:10:27Z | 1085375 | 184274953 |
| 2025-02-09T07:10:25Z | 1067968 | 185342921 |
| 2025-02-09T08:10:39Z | 1142678 | 186485599 |
| 2025-02-09T09:10:35Z | 1256453 | 187742052 |
| 2025-02-09T11:11:28Z | 1504611 | 189246663 |
| 2025-02-09T12:11:15Z | 1668430 | 190915093 |
| 2025-02-09T13:11:26Z | 1725762 | 192640855 |
| 2025-02-09T14:11:32Z | 1757106 | 194397961 |
| 2025-02-09T15:11:28Z | 1765062 | 196163023 |
| 2025-02-09T16:11:12Z | 1603214 | 197766237 |
| 2025-02-09T17:11:02Z | 1437092 | 199203329 |
| 2025-02-09T18:11:04Z | 1322348 | 200525677 |
| 2025-02-09T19:11:34Z | 1211786 | 201737463 |
| 2025-02-09T20:10:47Z | 1115284 | 202852747 |
| 2025-02-09T21:10:57Z | 1122677 | 203975424 |
| 2025-02-09T22:10:45Z | 1105983 | 205081407 |
| 2025-02-09T23:10:56Z | 1195837 | 206277244 |
| 2025-02-10T00:10:55Z | 1249476 | 207526720 |
| 2025-02-10T01:11:15Z | 1265064 | 208791784 |
| 2025-02-10T02:11:17Z | 1286985 | 210078769 |
| 2025-02-10T03:11:20Z | 1322859 | 211401628 |
| 2025-02-10T04:11:05Z | 1254522 | 212656150 |
| 2025-02-10T05:11:10Z | 1178711 | 213834861 |
| 2025-02-10T06:11:04Z | 1196822 | 215031683 |
| 2025-02-10T07:11:12Z | 1189227 | 216220910 |
| 2025-02-10T08:11:12Z | 1266601 | 217487511 |
| 2025-02-10T09:11:25Z | 1340224 | 218827735 |
| 2025-02-10T10:11:30Z | 1448770 | 220276505 |
| 2025-02-10T11:11:56Z | 1648736 | 221925241 |
| 2025-02-10T12:12:02Z | 1754839 | 223680080 |
| 2025-02-10T13:12:19Z | 1862242 | 225542322 |
| 2025-02-10T14:12:13Z | 1826646 | 227368968 |
| 2025-02-10T15:12:14Z | 1817817 | 229186785 |
| 2025-02-10T16:11:46Z | 1556088 | 230742873 |
| 2025-02-10T17:11:33Z | 1435936 | 232178809 |
| 2025-02-10T18:11:25Z | 1325672 | 233504481 |
| 2025-02-10T19:11:52Z | 1277078 | 234781559 |
| 2025-02-10T20:11:37Z | 1439564 | 236221123 |
| 2025-02-10T21:12:06Z | 1365508 | 237586631 |
| 2025-02-10T22:11:32Z | 1416644 | 239003275 |
| 2025-02-10T23:11:54Z | 1467828 | 240471103 |
| 2025-02-11T00:11:32Z | 1362716 | 241833819 |
| 2025-02-11T01:11:38Z | 1336049 | 243169868 |
| 2025-02-11T02:11:55Z | 1547764 | 244717632 |
| 2025-02-11T03:12:10Z | 1547959 | 246265591 |
| 2025-02-11T04:11:30Z | 1200857 | 247466448 |
| 2025-02-11T05:11:23Z | 1101825 | 248568273 |
| 2025-02-11T06:11:18Z | 1126122 | 249694395 |
| 2025-02-11T07:11:37Z | 1149702 | 250844097 |
| 2025-02-11T08:11:41Z | 1271665 | 252115762 |
| 2025-02-11T09:11:56Z | 1463085 | 253578847 |
| 2025-02-11T10:12:08Z | 1596539 | 255175386 |
| 2025-02-11T11:12:36Z | 1854637 | 257030023 |
| 2025-02-11T12:12:46Z | 2029808 | 259059831 |
| 2025-02-11T13:13:00Z | 2042750 | 261102581 |
| 2025-02-11T14:12:54Z | 2017608 | 263120189 |
| 2025-02-11T15:12:43Z | 1923288 | 265043477 |
| 2025-02-11T16:12:28Z | 1743637 | 266787114 |
| 2025-02-11T17:12:07Z | 1565511 | 268352625 |
| 2025-02-11T18:11:58Z | 1474712 | 269827337 |
| 2025-02-11T19:13:01Z | 1382117 | 271209454 |
| 2025-02-11T20:11:51Z | 1345413 | 272554867 |
| 2025-02-11T21:11:54Z | 1378746 | 273933613 |
| 2025-02-11T22:11:53Z | 1410203 | 275343816 |
| 2025-02-11T23:12:11Z | 1392308 | 276736124 |
| 2025-02-12T00:28:40Z | 1323063 | 278059187 |
| 2025-02-12T01:11:58Z | 1326496 | 279385683 |
| 2025-02-12T02:12:03Z | 1334384 | 280720067 |
| 2025-02-12T03:12:19Z | 1503096 | 282223163 |
| 2025-02-12T04:11:50Z | 1244233 | 283467396 |
| 2025-02-12T05:11:52Z | 1138428 | 284605824 |
| 2025-02-12T06:11:40Z | 1148642 | 285754466 |
| 2025-02-12T07:12:28Z | 1168314 | 286922780 |
| 2025-02-12T08:12:05Z | 1287163 | 288209943 |
| 2025-02-12T09:12:43Z | 1479028 | 289688971 |
| 2025-02-12T10:12:33Z | 1620411 | 291309382 |
| 2025-02-12T11:12:56Z | 1765288 | 293074670 |
| 2025-02-12T12:13:20Z | 1923118 | 294997788 |
| 2025-02-12T13:13:45Z | 2031364 | 297029152 |
| 2025-02-12T14:12:47Z | 1994282 | 299023434 |
| 2025-02-12T15:13:18Z | 1940129 | 300963563 |
| 2025-02-12T16:12:32Z | 1729631 | 302693194 |
| 2025-02-12T17:13:30Z | 1583185 | 304276379 |
| 2025-02-12T18:17:01Z | 1471613 | 305747992 |
| 2025-02-12T19:22:00Z | 1406612 | 307154604 |
| 2025-02-12T20:11:57Z | 1383907 | 308538511 |
| 2025-02-12T21:16:50Z | 1409997 | 309948508 |
| 2025-02-12T23:13:54Z | 1435906 | 311384414 |
| 2025-02-13T00:12:42Z | 1347405 | 312731819 |
| 2025-02-13T01:13:26Z | 1374328 | 314106147 |
| 2025-02-13T02:28:02Z | 1406240 | 315512387 |
| 2025-02-13T02:30:10Z | 1406240 | 316918627 |
| 2025-02-13T03:17:05Z | 1521105 | 318439732 |
| 2025-02-13T04:12:29Z | 1281972 | 319721704 |
| 2025-02-13T05:13:14Z | 1180565 | 320902269 |
| 2025-02-13T06:22:16Z | 1164777 | 322067046 |
| 2025-02-13T07:11:57Z | 1176686 | 323243732 |
| 2025-02-13T08:12:01Z | 1295623 | 324539355 |
| 2025-02-13T09:19:47Z | 1503123 | 326042478 |
| 2025-02-13T10:13:53Z | 1620805 | 327663283 |
| 2025-02-13T11:20:59Z | 1835292 | 329498575 |
| 2025-02-13T12:24:30Z | 1993063 | 331491638 |
| 2025-02-13T13:13:30Z | 2015219 | 333506857 |
| 2025-02-13T14:13:28Z | 1992892 | 335499749 |
| 2025-02-13T16:13:15Z | 1684412 | 337184161 |
| 2025-02-13T17:12:16Z | 1565969 | 338750130 |
| 2025-02-13T18:12:12Z | 1503850 | 340253980 |
| 2025-02-13T19:12:16Z | 1445429 | 341699409 |
| 2025-02-13T20:12:24Z | 1567145 | 343266554 |
| 2025-02-13T21:12:24Z | 1619611 | 344886165 |
| 2025-02-13T22:12:47Z | 1558853 | 346445018 |
| 2025-02-13T23:12:35Z | 1443554 | 347888572 |
| 2025-02-14T00:12:22Z | 1376817 | 349265389 |
| 2025-02-14T01:12:15Z | 1366757 | 350632146 |
| 2025-02-14T02:12:14Z | 1390455 | 352022601 |
| 2025-02-14T03:12:31Z | 1509252 | 353531853 |
| 2025-02-14T04:12:10Z | 1230554 | 354762407 |
| 2025-02-14T05:12:00Z | 1125572 | 355887979 |
| 2025-02-14T06:11:55Z | 1134428 | 357022407 |
| 2025-02-14T07:12:05Z | 1137433 | 358159840 |
| 2025-02-14T08:12:15Z | 1238021 | 359397861 |
| 2025-02-14T09:12:21Z | 1409165 | 360807026 |
| 2025-02-14T10:12:42Z | 1561142 | 362368168 |
| 2025-02-14T11:12:50Z | 1707762 | 364075930 |
| 2025-02-14T12:13:08Z | 1833835 | 365909765 |
| 2025-02-14T13:13:09Z | 1916429 | 367826194 |
| 2025-02-14T14:13:06Z | 1893665 | 369719859 |
| 2025-02-14T15:13:08Z | 1836382 | 371556241 |
| 2025-02-14T16:12:48Z | 1627091 | 373183332 |
| 2025-02-14T17:12:34Z | 1483530 | 374666862 |
| 2025-02-14T18:12:29Z | 1390903 | 376057765 |
| 2025-02-14T19:12:25Z | 1354998 | 377412763 |
| 2025-02-14T20:12:30Z | 1357721 | 378770484 |
| 2025-02-14T21:12:42Z | 1460807 | 380231291 |
| 2025-02-14T22:12:46Z | 1513022 | 381744313 |
| 2025-02-14T23:12:40Z | 1451102 | 383195415 |
| 2025-02-15T00:12:29Z | 1353197 | 384548612 |
| 2025-02-15T01:12:25Z | 1358425 | 385907037 |
| 2025-02-15T02:12:33Z | 1350900 | 387257937 |
| 2025-02-15T03:12:58Z | 1508490 | 388766427 |
| 2025-02-15T04:12:31Z | 1268677 | 390035104 |
| 2025-02-15T05:12:18Z | 1192181 | 391227285 |
| 2025-02-15T06:12:22Z | 1164833 | 392392118 |
| 2025-02-15T07:12:28Z | 1159457 | 393551575 |
| 2025-02-15T08:12:30Z | 1276302 | 394827877 |
| 2025-02-15T09:12:43Z | 1029495 | 395857372 |
| 2025-02-18T02:10:43Z | 1353168 | 397210540 |
| 2025-02-18T02:12:29Z | 1353168 | 398563708 |
| 2025-02-18T03:07:10Z | 1518653 | 400082361 |
| 2025-02-18T08:06:21Z | 1204713 | 401287074 |
| 2025-02-18T16:07:17Z | 1807731 | 403094805 |
| 2025-02-19T00:06:32Z | 1353917 | 404448722 |
| 2025-02-19T08:06:16Z | 1105583 | 405554305 |
| 2025-02-19T16:06:42Z | 1253744 | 406808049 |
| 2025-02-20T00:06:15Z | 1041928 | 407849977 |
| 2025-02-20T08:06:49Z | 1004773 | 408854750 |
| 2025-02-20T16:07:05Z | 1209201 | 410063951 |
| 2025-02-21T00:06:43Z | 967182 | 411031133 |
| 2025-02-21T08:06:37Z | 943379 | 411974512 |
| 2025-02-21T16:07:45Z | 1173049 | 413147561 |
| 2025-02-22T00:07:30Z | 995419 | 414142980 |
| 2025-02-22T08:07:18Z | 1019833 | 415162813 |
| 2025-02-22T16:07:22Z | 1202116 | 416364929 |
| 2025-02-23T08:07:25Z | 960028 | 417324957 |
| 2025-02-23T16:07:01Z | 1065157 | 418390114 |
| 2025-02-24T00:06:56Z | 976914 | 419367028 |
| 2025-02-24T08:07:02Z | 1071136 | 420438164 |
| 2025-02-24T16:07:34Z | 1468396 | 421906560 |
| 2025-02-25T00:07:38Z | 1547219 | 423453779 |
| 2025-02-25T08:07:11Z | 1159176 | 424612955 |
| 2025-02-25T16:07:33Z | 1436269 | 426049224 |
| 2025-02-26T00:07:20Z | 1253941 | 427303165 |
| 2025-02-26T08:07:17Z | 1217873 | 428521038 |
| 2025-02-26T16:07:36Z | 1415503 | 429936541 |
| 2025-02-27T00:07:19Z | 1198606 | 431135147 |
| 2025-02-27T08:07:15Z | 1185787 | 432320934 |
| 2025-02-27T16:07:45Z | 1410102 | 433731036 |
| 2025-02-28T00:07:24Z | 1244844 | 434975880 |
| 2025-02-28T08:07:20Z | 1172250 | 436148130 |
| 2025-02-28T16:07:53Z | 1500507 | 437648637 |
| 2025-03-01T00:07:31Z | 1270009 | 438918646 |
| 2025-03-01T08:07:31Z | 1311188 | 440229834 |
| 2025-03-01T16:08:15Z | 1595456 | 441825290 |
| 2025-03-02T00:07:34Z | 1279555 | 443104845 |
| 2025-03-02T08:07:34Z | 1254123 | 444358968 |
| 2025-03-02T16:08:12Z | 1584373 | 445943341 |
| 2025-03-03T00:07:42Z | 1321599 | 447264940 |
| 2025-03-03T08:07:30Z | 1204593 | 448469533 |
| 2025-03-03T16:08:02Z | 1494861 | 449964394 |
| 2025-03-04T00:07:32Z | 1157859 | 451122253 |
| 2025-03-04T08:07:15Z | 1094499 | 452216752 |
| 2025-03-04T16:08:15Z | 1401191 | 453617943 |
| 2025-03-05T00:07:33Z | 1181375 | 454799318 |
| 2025-03-05T08:07:23Z | 1151091 | 455950409 |
| 2025-03-05T16:08:04Z | 1450326 | 457400735 |
| 2025-03-06T00:07:46Z | 1255061 | 458655796 |
| 2025-03-06T08:06:54Z | 1191214 | 459847010 |
| 2025-03-06T16:07:30Z | 1488170 | 461335180 |
| 2025-03-07T00:06:55Z | 1234787 | 462569967 |
| 2025-03-07T08:07:04Z | 1244911 | 463814878 |
| 2025-03-07T16:07:24Z | 1497660 | 465312538 |
| 2025-03-08T00:07:07Z | 1324584 | 466637122 |
| 2025-03-08T05:21:48Z | 1199319 | 467836441 |
| 2025-03-08T08:06:57Z | 1247799 | 469084240 |
| 2025-03-08T16:07:26Z | 1547089 | 470631329 |
| 2025-03-09T00:07:16Z | 1308816 | 471940145 |
| 2025-03-09T02:16:14Z | 197431 | 472137576 |
| 2025-03-09T08:05:24Z | 189497 | 472327073 |
| 2025-03-09T16:05:36Z | 253688 | 472580761 |
| 2025-03-10T00:05:34Z | 245736 | 472826497 |
| 2025-03-10T08:05:28Z | 223211 | 473049708 |
| 2025-03-10T16:06:06Z | 414047 | 473463755 |
| 2025-03-11T00:06:40Z | 701384 | 474165139 |
| 2025-03-11T08:06:02Z | 596598 | 474761737 |
| 2025-03-11T16:06:36Z | 975300 | 475737037 |
| 2025-03-12T00:06:41Z | 984107 | 476721144 |
| 2025-03-12T08:06:47Z | 934228 | 477655372 |
| 2025-03-12T16:07:27Z | 1229004 | 478884376 |
| 2025-03-13T00:06:46Z | 1032803 | 479917179 |
| 2025-03-13T08:20:37Z | 957581 | 480874760 |
| 2025-03-13T16:07:09Z | 1282646 | 482157406 |
| 2025-03-14T00:07:03Z | 1124285 | 483281691 |
| 2025-03-14T08:06:51Z | 1011939 | 484293630 |
| 2025-03-14T16:07:11Z | 1270724 | 485564354 |
| 2025-03-15T00:06:54Z | 1069045 | 486633399 |
| 2025-03-15T08:06:48Z | 986198 | 487619597 |
| 2025-03-15T16:06:57Z | 1135196 | 488754793 |
| 2025-03-16T00:06:36Z | 866912 | 489621705 |
| 2025-03-16T08:06:40Z | 787188 | 490408893 |
| 2025-03-16T16:06:50Z | 1000236 | 491409129 |
| 2025-03-17T00:06:26Z | 761833 | 492170962 |
| 2025-03-17T08:06:25Z | 659065 | 492830027 |
| 2025-03-17T16:06:01Z | 460001 | 493290028 |
| 2025-03-18T00:06:43Z | 835419 | 494125447 |
| 2025-03-18T08:06:34Z | 730689 | 494856136 |
| 2025-03-18T16:06:56Z | 957885 | 495814021 |
| 2025-03-19T00:06:58Z | 788880 | 496602901 |
| 2025-03-19T08:06:26Z | 624268 | 497227169 |
| 2025-03-19T16:06:52Z | 782298 | 498009467 |
| 2025-03-20T00:06:25Z | 641541 | 498651008 |
| 2025-03-20T08:06:21Z | 469733 | 499120741 |
| 2025-03-20T16:06:27Z | 603871 | 499724612 |
| 2025-03-21T00:06:22Z | 530167 | 500254779 |
| 2025-03-21T08:06:18Z | 455317 | 500710096 |
| 2025-03-21T16:06:26Z | 603845 | 501313941 |
| 2025-03-22T00:06:15Z | 529181 | 501843122 |
| 2025-03-22T08:06:15Z | 468351 | 502311473 |
| 2025-03-22T16:06:33Z | 600185 | 502911658 |
| 2025-03-23T00:06:18Z | 502834 | 503414492 |
| 2025-03-23T08:06:15Z | 455285 | 503869777 |
| 2025-03-23T16:06:22Z | 573312 | 504443089 |
| 2025-03-24T00:06:14Z | 462112 | 504905201 |
| 2025-03-24T08:06:09Z | 434216 | 505339417 |
| 2025-03-24T16:06:25Z | 570896 | 505910313 |
| 2025-03-25T00:06:20Z | 513332 | 506423645 |
| 2025-03-25T08:06:35Z | 579228 | 507002873 |
| 2025-03-25T16:06:50Z | 800864 | 507803737 |
| 2025-03-26T00:06:43Z | 739921 | 508543658 |
|
LHF/escorpius-mr | LHF | "2023-05-11T22:29:21Z" | 11,102 | 5 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:ar",
"language:bn",
"language:ca",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:gl",
"language:hi",
"language:hr",
"language:it",
"language:ja",
"language:ko",
"language:mt",
"language:nl",
"language:no",
"language:oc",
"language:pa",
"language:pl",
"language:pt",
"language:ro",
"language:sl",
"language:sr",
"language:sv",
"language:tr",
"language:uk",
"language:ur",
"license:cc-by-nc-nd-4.0",
"size_categories:1B<n<10B",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2206.15147",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2022-05-03T18:49:47Z" | ---
license: cc-by-nc-nd-4.0
language:
- af
- ar
- bn
- ca
- cs
- da
- de
- el
- eu
- fa
- fi
- fr
- gl
- hi
- hr
- it
- ja
- ko
- mt
- nl
- no
- oc
- pa
- pl
- pt
- ro
- sl
- sr
- sv
- tr
- uk
- ur
multilinguality:
- multilingual
size_categories:
- 100B<n<1T
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# esCorpius Multilingual Raw
In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, these have important shortcomings for languages other than English: they are either too small or of low quality due to sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from nearly 1 PB of Common Crawl data. It is the most extensive corpus in some of the languages covered with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel, highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we retain both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius-m has been released under the CC BY-NC-ND 4.0 license.
# Usage
```
dataset = load_dataset('LHF/escorpius-mr', split='train', streaming=True)
```
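Because the corpus is large, streaming is the practical way to inspect it. Below is a minimal sketch of taking the first few rows from a stream; the `take` helper and the stand-in generator are illustrative only, not part of the official API — with a real connection you would pass the `IterableDataset` returned by `load_dataset(..., streaming=True)` instead.

```python
from itertools import islice

def take(iterable, n):
    """Return the first n items of any iterable as a list."""
    return list(islice(iterable, n))

# With the streaming dataset the same pattern applies:
#   dataset = load_dataset('LHF/escorpius-mr', split='train', streaming=True)
#   first_rows = take(dataset, 3)
# Here a plain generator stands in for the stream so the snippet is self-contained.
stream = ({"id": i, "text": f"doc {i}"} for i in range(10))
first_rows = take(stream, 3)
print([row["id"] for row in first_rows])  # → [0, 1, 2]
```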
# Intended use
This corpus is the *raw version* of the esCorpius-m corpus and can be used for benchmarking deduplication tools.
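As a point of comparison when benchmarking, here is a minimal exact-match paragraph-deduplication baseline. The `dedup_paragraphs` helper is a hypothetical illustration written for this card — it is not part of esCorpius' actual pipeline, which uses more sophisticated mechanisms.

```python
import hashlib

def dedup_paragraphs(docs):
    """Drop paragraphs whose normalized content was already seen in any document.
    Exact-match baseline only; real pipelines also handle near-duplicates."""
    seen = set()
    deduped = []
    for doc in docs:
        kept = []
        for para in doc.split("\n"):
            norm = para.strip().lower()
            key = hashlib.md5(norm.encode("utf-8")).hexdigest()
            if norm and key not in seen:
                seen.add(key)
                kept.append(para)
        deduped.append("\n".join(kept))
    return deduped

docs = ["Hello world.\nSame line.", "Same line.\nNew line."]
print(dedup_paragraphs(docs))  # → ['Hello world.\nSame line.', 'New line.']
```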
## Other corpora
- esCorpius multilingual corpus (deduplicated): https://huggingface.co/datasets/LHF/escorpius-m
- esCorpius original *Spanish-only* corpus (deduplicated): https://huggingface.co/datasets/LHF/escorpius
## Citation
Link to paper: https://www.isca-speech.org/archive/pdfs/iberspeech_2022/gutierrezfandino22_iberspeech.pdf / https://arxiv.org/abs/2206.15147
Cite this work:
```
@inproceedings{gutierrezfandino22_iberspeech,
author={Asier Gutiérrez-Fandiño and David Pérez-Fernández and Jordi Armengol-Estapé and David Griol and Zoraida Callejas},
title={{esCorpius: A Massive Spanish Crawling Corpus}},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
year=2022,
booktitle={Proc. IberSPEECH 2022},
pages={126--130},
doi={10.21437/IberSPEECH.2022-26}
}
```
## Disclaimer
We did not perform any kind of filtering or censorship on the corpus; we expect users to do so using their own methods. We are not liable for any misuse of the corpus.
|
ashraf-ali/quran-data | ashraf-ali | "2022-12-10T17:35:33Z" | 11,042 | 18 | [
"task_categories:automatic-speech-recognition",
"language_creators:Tarteel.io",
"license:cc0-1.0",
"size_categories:1K<n<10K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"automatic-speech-recognition"
] | "2022-11-28T17:14:02Z" | ---
language_creators:
- Tarteel.io
license:
- cc0-1.0
size_categories:
ar:
- 43652
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: quran-data
pretty_name: Quran Audio
language_bcp47:
- ar
---
# Dataset Card for Quran audio
Content
* 7 Imam full Quran recitations: 7 × 6236 wav files
- a CSV with the text info for an 11k subset of short wav files
* Tarteel.io user dataset: ~25k wav files
- a CSV with the text info for an 18k subset of accepted-quality user recordings |
open-llm-leaderboard-old/details_EleutherAI__polyglot-ko-12.8b | open-llm-leaderboard-old | "2023-10-19T02:18:08Z" | 11,034 | 0 | [
"region:us"
] | null | "2023-08-17T23:47:23Z" | ---
pretty_name: Evaluation run of EleutherAI/polyglot-ko-12.8b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_EleutherAI__polyglot-ko-12.8b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-19T02:17:54.630291](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__polyglot-ko-12.8b/blob/main/results_2023-10-19T02-17-54.630291.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.04268036912751678,\n\
\ \"em_stderr\": 0.0020700565850232436,\n \"f1\": 0.09065960570469792,\n\
\ \"f1_stderr\": 0.002370421899236817,\n \"acc\": 0.2994953245415047,\n\
\ \"acc_stderr\": 0.0074273230901261535\n },\n \"harness|drop|3\":\
\ {\n \"em\": 0.04268036912751678,\n \"em_stderr\": 0.0020700565850232436,\n\
\ \"f1\": 0.09065960570469792,\n \"f1_stderr\": 0.002370421899236817\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.001516300227445034,\n \
\ \"acc_stderr\": 0.0010717793485492619\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5974743488555643,\n \"acc_stderr\": 0.013782866831703044\n\
\ }\n}\n```"
repo_url: https://huggingface.co/EleutherAI/polyglot-ko-12.8b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: [email protected]
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|arc:challenge|25_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_19T02_17_54.630291
path:
- '**/details_harness|drop|3_2023-10-19T02-17-54.630291.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-19T02-17-54.630291.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_19T02_17_54.630291
path:
- '**/details_harness|gsm8k|5_2023-10-19T02-17-54.630291.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-19T02-17-54.630291.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hellaswag|10_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T18:43:02.018732.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T18:43:02.018732.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T18:43:02.018732.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_19T02_17_54.630291
path:
- '**/details_harness|winogrande|5_2023-10-19T02-17-54.630291.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-19T02-17-54.630291.parquet'
- config_name: results
data_files:
- split: 2023_07_19T18_43_02.018732
path:
- results_2023-07-19T18:43:02.018732.parquet
- split: 2023_10_19T02_17_54.630291
path:
- results_2023-10-19T02-17-54.630291.parquet
- split: latest
path:
- results_2023-10-19T02-17-54.630291.parquet
---
# Dataset Card for Evaluation run of EleutherAI/polyglot-ko-12.8b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/EleutherAI/polyglot-ko-12.8b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration; the split is named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_EleutherAI__polyglot-ko-12.8b",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-19T02:17:54.630291](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__polyglot-ko-12.8b/blob/main/results_2023-10-19T02-17-54.630291.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.04268036912751678,
"em_stderr": 0.0020700565850232436,
"f1": 0.09065960570469792,
"f1_stderr": 0.002370421899236817,
"acc": 0.2994953245415047,
"acc_stderr": 0.0074273230901261535
},
"harness|drop|3": {
"em": 0.04268036912751678,
"em_stderr": 0.0020700565850232436,
"f1": 0.09065960570469792,
"f1_stderr": 0.002370421899236817
},
"harness|gsm8k|5": {
"acc": 0.001516300227445034,
"acc_stderr": 0.0010717793485492619
},
"harness|winogrande|5": {
"acc": 0.5974743488555643,
"acc_stderr": 0.013782866831703044
}
}
```
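For illustration, the aggregated metrics above can be consumed directly as a Python dictionary. A minimal sketch (the values are copied from the JSON above):

```python
# Sketch: extracting per-task accuracy from the aggregated results above.
results = {
    "all": {"em": 0.04268036912751678, "f1": 0.09065960570469792,
            "acc": 0.2994953245415047},
    "harness|drop|3": {"em": 0.04268036912751678, "f1": 0.09065960570469792},
    "harness|gsm8k|5": {"acc": 0.001516300227445034},
    "harness|winogrande|5": {"acc": 0.5974743488555643},
}

# Collect accuracy for every task that reports one, skipping the "all" aggregate.
accuracies = {task: m["acc"] for task, m in results.items()
              if task != "all" and "acc" in m}
best_task = max(accuracies, key=accuracies.get)
print(best_task)  # harness|winogrande|5
```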
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]

---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- text-generation
task_ids: []
paperswithcode_id: cnn-daily-mail-1
pretty_name: CNN / Daily Mail
tags:
- conditional-text-generation
---
**Copy of the [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail) dataset fixing the "NotADirectoryError: [Errno 20]".**
# Dataset Card for CNN Dailymail Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [CNN / DailyMail Dataset repository](https://github.com/abisee/cnn-dailymail)
- **Paper:** [Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf), [Get To The Point: Summarization with Pointer-Generator Networks](https://www.aclweb.org/anthology/K16-1028.pdf)
- **Leaderboard:** [Papers with Code leaderboard for CNN / Dailymail Dataset](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail)
- **Point of Contact:** [Abigail See](mailto:[email protected])
### Dataset Summary
The CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering.
### Supported Tasks and Leaderboards
- 'summarization': [Versions 2.0.0 and 3.0.0 of the CNN / DailyMail Dataset](https://www.aclweb.org/anthology/K16-1028.pdf) can be used to train a model for abstractive and extractive summarization ([Version 1.0.0](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf) was developed for machine reading and comprehension and abstractive question answering). The model performance is measured by how high the output summary's [ROUGE](https://huggingface.co/metrics/rouge) score for a given article is when compared to the highlight as written by the original article author. [Zhong et al (2020)](https://www.aclweb.org/anthology/2020.acl-main.552.pdf) report a ROUGE-1 score of 44.41 when testing a model trained for extractive summarization. See the [Papers With Code leaderboard](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail) for more models.
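ROUGE-1 rewards unigram overlap between a generated summary and the reference highlight. A minimal illustrative computation (simplified: whitespace tokenization only, with none of the stemming or normalization that full ROUGE implementations apply):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap with whitespace tokenization."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat lay on the mat")
print(round(score, 3))  # 0.833
```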
### Languages
The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.
## Dataset Structure
### Data Instances
For each instance, there is a string for the article, a string for the highlights, and a string for the id. See the [CNN / Daily Mail dataset viewer](https://huggingface.co/datasets/viewer/?dataset=cnn_dailymail&config=3.0.0) to explore more examples.
```
{'id': '0054d6d30dbcad772e20b22771153a2a9cbeaf62',
 'article': '(CNN) -- An American woman died aboard a cruise ship that docked at Rio de Janeiro on Tuesday, the same ship on which 86 passengers previously fell ill, according to the state-run Brazilian news agency, Agencia Brasil. The American tourist died aboard the MS Veendam, owned by cruise operator Holland America. Federal Police told Agencia Brasil that forensic doctors were investigating her death. The ship\'s doctors told police that the woman was elderly and suffered from diabetes and hypertension, according the agency. The other passengers came down with diarrhea prior to her death during an earlier part of the trip, the ship\'s doctors said. The Veendam left New York 36 days ago for a South America tour.',
 'highlights': 'The elderly woman suffered from diabetes and hypertension, ship\'s doctors say .\nPreviously, 86 passengers had fallen ill on the ship, Agencia Brasil says .'}
```
The average token count for the articles and the highlights are provided below:
| Feature | Mean Token Count |
| ---------- | ---------------- |
| Article | 781 |
| Highlights | 56 |
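The table above was produced with a tokenizer; a rough whitespace-based approximation can be sketched as follows (the sample texts here are hypothetical illustrations, not the dataset statistics):

```python
def mean_token_count(texts):
    """Approximate the mean token count of a list of strings by whitespace
    splitting (a real tokenizer will give slightly different numbers)."""
    return sum(len(t.split()) for t in texts) / len(texts)

# Hypothetical sample; reproducing the published figures requires the full split.
sample_articles = [
    "(CNN) -- An American woman died aboard a cruise ship.",
    "The Veendam left New York 36 days ago for a South America tour.",
]
print(mean_token_count(sample_articles))  # 11.5
```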
### Data Fields
- `id`: a string containing the hexadecimal-formatted SHA1 hash of the URL where the story was retrieved from
- `article`: a string containing the body of the news article
- `highlights`: a string containing the highlight of the article as written by the article author
### Data Splits
The CNN/DailyMail dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for Version 3.0.0 of the dataset.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 287,113 |
| Validation | 13,368 |
| Test | 11,490 |
## Dataset Creation
### Curation Rationale
Version 1.0.0 aimed to support supervised neural methodologies for machine reading and question answering with a large amount of real natural language training data and released about 313k unique articles and nearly 1M Cloze style questions to go with the articles. Versions 2.0.0 and 3.0.0 changed the structure of the dataset to support summarization rather than question answering. Version 3.0.0 provided a non-anonymized version of the data, whereas both the previous versions were preprocessed to replace named entities with unique identifier labels.
### Source Data
#### Initial Data Collection and Normalization
The data consists of news articles and highlight sentences. In the question answering setting of the data, the articles are used as the context and entities are hidden one at a time in the highlight sentences, producing Cloze style questions where the goal of the model is to correctly guess which entity in the context has been hidden in the highlight. In the summarization setting, the highlight sentences are concatenated to form a summary of the article. The CNN articles were written between April 2007 and April 2015. The Daily Mail articles were written between June 2010 and April 2015.
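As a sketch of the Cloze construction described above (greatly simplified: the released pipeline anonymized all entities first, whereas here a single entity string is hidden directly):

```python
def make_cloze(highlight: str, entity: str, placeholder: str = "@placeholder"):
    """Hide one entity in a highlight sentence, producing a Cloze-style
    (question, answer) pair in the spirit of the original QA setting."""
    assert entity in highlight, "entity must occur in the highlight"
    return highlight.replace(entity, placeholder), entity

question, answer = make_cloze(
    "Previously, 86 passengers had fallen ill on the ship, Agencia Brasil says.",
    "Agencia Brasil",
)
print(question)  # Previously, 86 passengers had fallen ill on the ship, @placeholder says.
```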
The code for the original data collection is available at <https://github.com/deepmind/rc-data>. The articles were downloaded using archives of <www.cnn.com> and <www.dailymail.co.uk> on the Wayback Machine. Articles were not included in the Version 1.0.0 collection if they exceeded 2000 tokens. Due to accessibility issues with the Wayback Machine, Kyunghyun Cho has made the datasets available at <https://cs.nyu.edu/~kcho/DMQA/>. An updated version of the code that does not anonymize the data is available at <https://github.com/abisee/cnn-dailymail>.
Hermann et al. provided their own tokenization script. The script provided by See uses the PTBTokenizer. It also lowercases the text and adds periods to lines missing them.
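A rough sketch of that normalization step (assumption: whitespace and punctuation handling only — the actual script runs the PTB tokenizer first, which this omits):

```python
TERMINAL_PUNCT = (".", "!", "?", '"', "'", ")")

def normalize_line(line: str) -> str:
    """Lowercase a line and append ' .' when it lacks terminal punctuation,
    loosely mirroring the preprocessing described above."""
    line = line.strip().lower()
    if line and not line.endswith(TERMINAL_PUNCT):
        line += " ."
    return line

print(normalize_line("The Veendam left New York"))  # the veendam left new york .
```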
#### Who are the source language producers?
The text was written by journalists at CNN and the Daily Mail.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Version 3.0 is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop models that can summarize long paragraphs of text in one or two sentences.
This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.
### Discussion of Biases
[Bordia and Bowman (2019)](https://www.aclweb.org/anthology/N19-3002.pdf) explore measuring gender bias and debiasing techniques in the CNN / Dailymail dataset, the Penn Treebank, and WikiText-2. They find the CNN / Dailymail dataset to have a slightly lower gender bias based on their metric compared to the other datasets, but still show evidence of gender bias when looking at words such as 'fragile'.
Because the articles were written by and for people in the US and the UK, they will likely present specifically US and UK perspectives and feature events that are considered relevant to those populations during the time that the articles were published.
### Other Known Limitations
News articles have been shown to conform to writing conventions in which important information is primarily presented in the first third of the article [(Kryściński et al, 2019)](https://www.aclweb.org/anthology/D19-1051.pdf). [Chen et al (2016)](https://www.aclweb.org/anthology/P16-1223.pdf) conducted a manual study of 100 random instances of the first version of the dataset and found 25% of the samples to be difficult even for humans to answer correctly due to ambiguity and coreference errors.
It should also be noted that machine-generated summarizations, even when extractive, may differ in truth values when compared to the original articles.
## Additional Information
### Dataset Curators
The data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IBM Watson and Çağlar Gülçehre of Université de Montréal modified Hermann et al.'s collection scripts to restore the data to a summary format. They also produced both anonymized and non-anonymized versions.
The code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain and Christopher D. Manning of Stanford University at <https://github.com/abisee/cnn-dailymail>. The work at Stanford University was supported by the DARPA DEFT Program (AFRL contract no. FA8750-13-2-0040).
### Licensing Information
The CNN / Daily Mail dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@inproceedings{see-etal-2017-get,
title = "Get To The Point: Summarization with Pointer-Generator Networks",
author = "See, Abigail and
Liu, Peter J. and
Manning, Christopher D.",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P17-1099",
doi = "10.18653/v1/P17-1099",
pages = "1073--1083",
abstract = "Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.",
}
```
```
@inproceedings{DBLP:conf/nips/HermannKGEKSB15,
author={Karl Moritz Hermann and Tomás Kociský and Edward Grefenstette and Lasse Espeholt and Will Kay and Mustafa Suleyman and Phil Blunsom},
title={Teaching Machines to Read and Comprehend},
year={2015},
cdate={1420070400000},
pages={1693-1701},
url={http://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend},
booktitle={NIPS},
crossref={conf/nips/2015}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@jbragg](https://github.com/jbragg), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
|
jacobbieker/eumetsat-cloudmask-iodc | jacobbieker | "2024-07-26T07:39:56Z" | 11,018 | 0 | [
"license:mit",
"doi:10.57967/hf/1639",
"region:us"
] | null | "2024-01-12T18:51:01Z" | ---
license: mit
---
|
Codec-SUPERB/librispeech_synth | Codec-SUPERB | "2024-01-15T14:57:31Z" | 10,996 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-04T04:21:43Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
splits:
- name: academicodec_hifi_16k_320d
num_bytes: 113116345974.686
num_examples: 292367
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 113116345974.686
num_examples: 292367
- name: academicodec_hifi_24k_320d
num_bytes: 169685346294.686
num_examples: 292367
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 113174576650.686
num_examples: 292367
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 113173372218.686
num_examples: 292367
- name: audiodec_24k_320d
num_bytes: 169835583482.686
num_examples: 292367
- name: original
num_bytes: 63678669918.686
num_examples: 292367
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 113186105690.686
num_examples: 292367
- name: dac_16k
num_bytes: 113185098868.686
num_examples: 292367
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 113186105690.686
num_examples: 292367
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 113186105690.686
num_examples: 292367
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 113186105690.686
num_examples: 292367
- name: dac_24k
num_bytes: 169767074932.686
num_examples: 292367
- name: speech_tokenizer_16k
num_bytes: 113255906934.686
num_examples: 292367
download_size: 1424205343315
dataset_size: 1704732744013.6042
configs:
- config_name: default
data_files:
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: original
path: data/original-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: dac_16k
path: data/dac_16k-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: dac_24k
path: data/dac_24k-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
---
# Dataset Card for "librispeech_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
XufengDuan/results | XufengDuan | "2024-10-31T08:14:36Z" | 10,984 | 0 | [
"license:mit",
"region:us"
] | null | "2024-08-03T10:58:54Z" | ---
license: mit
---
|
LWHYC/PASTA-Gen-30K | LWHYC | "2025-02-18T05:47:25Z" | 10,961 | 3 | [
"license:mit",
"arxiv:2502.06171",
"region:us"
] | null | "2025-01-28T09:35:45Z" | ---
license: mit
---

**Workflow of PASTA Model Development and Training Pipeline**. **a**, Overview of organs and lesion
types involved in PASTA training. **b**, Examples of lesions generated by PASTA-Gen from healthy organs. **c**, Lesion generation process pipeline of PASTA-Gen. **d**, Two-stage training of PASTA using the PASTA-Gen-30K
dataset.
[Model](https://github.com/LWHYC/PASTA), [Paper](https://arxiv.org/abs/2502.06171)
## Overview
PASTA-Gen-30K is a large-scale synthetic dataset of 30,000 CT volumes with precise lesion masks and structured textual reports covering 15 lesion types (10 common malignancies and 5 benign lesions). It is an integral part of the [PASTA](https://github.com/LWHYC/PASTA) project.
It contains 2K samples for each lesion type:
- Lung tumor
- Liver tumor
- Gallbladder cancer
- Pancreas tumor
- Esophageal Cancer
- Gastric cancer
- Colorectal cancer
- Kidney tumor
- Bladder cancer
- Bone metastasis
- Liver cyst
- Gallstone
- Pancreas cyst
- Kidney cyst
- Kidney stone
## Data Organization
Each sample in this dataset contains the following files:
- **`img.nii.gz`**: A synthetic CT scan featuring a target lesion. The image has dimensions of 280 × 280 × 280 voxels with a spacing of 1 × 1 × 1 mm.
- **`label.nii.gz`**: A synthetic label volume indicating the target lesion and the corresponding organ. The labeling convention is as follows:
- Organ: label value `1`
- Lesion: label value `2`
- **`total.nii.gz`**: Organ segmentation results generated using [TotalSegmentator v1](https://github.com/wasserth/TotalSegmentator/tree/v1.5.7). This file includes segmentation outputs for 104 organs. A full list of the segmented classes is available [here](https://github.com/wasserth/TotalSegmentator/tree/v1.5.7).
- **`type.json`**: A structured lesion report containing various attributes and their possible options. An overview of these attributes and options is illustrated in the image below.
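The organ/lesion labeling convention above can be used directly to build binary masks. A minimal sketch, using a tiny synthetic volume in place of a real `label.nii.gz` (which would normally be loaded with `nibabel`, e.g. `nib.load("label.nii.gz").get_fdata()`):

```python
import numpy as np

# Label convention: 0 = background, 1 = organ, 2 = lesion.
# A tiny synthetic volume stands in for a real 280x280x280 label.nii.gz.
label = np.zeros((4, 4, 4), dtype=np.uint8)
label[1:3, 1:3, 1:3] = 1   # organ voxels
label[2, 2, 2] = 2         # one lesion voxel inside the organ

organ_mask = label >= 1    # the organ region includes the lesion
lesion_mask = label == 2

print(int(organ_mask.sum()), int(lesion_mask.sum()))  # 8 1
```

The same two masks can then be paired with `img.nii.gz` for training or visualization.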

## Citation
If you use our dataset, please cite:
```bibtex
@article{lei2025data,
title={A Data-Efficient Pan-Tumor Foundation Model for Oncology CT Interpretation},
author={Lei, Wenhui and Chen, Hanyu and Zhang, Zitian and Luo, Luyang and Xiao, Qiong and Gu, Yannian and Gao, Peng and Jiang, Yankai and Wang, Ci and Wu, Guangtao and others},
journal={arXiv preprint arXiv:2502.06171},
year={2025}
}
```
Please also consider citing TotalSegmentator. Thanks for their great work:
```bibtex
@article{wasserthal2023totalsegmentator,
title={TotalSegmentator: robust segmentation of 104 anatomic structures in CT images},
author={Wasserthal, Jakob and Breit, Hanns-Christian and Meyer, Manfred T and Pradella, Maurice and Hinck, Daniel and Sauter, Alexander W and Heye, Tobias and Boll, Daniel T and Cyriac, Joshy and Yang, Shan and others},
journal={Radiology: Artificial Intelligence},
volume={5},
number={5},
year={2023},
publisher={Radiological Society of North America}
}
``` |
MathLLMs/MathVision | MathLLMs | "2025-03-10T12:48:32Z" | 10,960 | 48 | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_categories:visual-question-answering",
"task_categories:text-generation",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:expert-generated",
"language_creators:found",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2402.14804",
"arxiv:2501.12599",
"region:us",
"mathematics",
"reasoning",
"multi-modal-qa",
"math-qa",
"figure-qa",
"geometry-qa",
"math-word-problem",
"textbook-qa",
"vqa",
"geometry-diagram",
"synthetic-scene",
"chart",
"plot",
"scientific-figure",
"table",
"function-plot",
"abstract-scene",
"puzzle-test",
"document-image",
"science"
] | [
"question-answering",
"multiple-choice",
"visual-question-answering",
"text-generation"
] | "2024-02-22T19:14:42Z" | ---
license: mit
annotations_creators:
- expert-generated
- found
language_creators:
- expert-generated
- found
task_categories:
- question-answering
- multiple-choice
- visual-question-answering
- text-generation
language:
- en
tags:
- mathematics
- reasoning
- multi-modal-qa
- math-qa
- figure-qa
- geometry-qa
- math-word-problem
- textbook-qa
- vqa
- geometry-diagram
- synthetic-scene
- chart
- plot
- scientific-figure
- table
- function-plot
- abstract-scene
- puzzle-test
- document-image
- science
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: testmini
path: data/testmini-*
pretty_name: MATH-V
size_categories:
- 1K<n<10K
---
# Measuring Multimodal Mathematical Reasoning with the MATH-Vision Dataset
[[💻 Github](https://github.com/mathllm/MATH-V/)] [[🌐 Homepage](https://mathllm.github.io/mathvision/)] [[📊 Leaderboard ](https://mathllm.github.io/mathvision/#leaderboard )] [[🔍 Visualization](https://mathllm.github.io/mathvision/#visualization)] [[📖 ArXiv Paper](https://arxiv.org/pdf/2402.14804.pdf)]
## 🚀 Data Usage
<!-- **We have observed that some studies have used our MATH-Vision dataset as a training set.**
⚠️ **As clearly stated in our paper: *"The MATH-V dataset is not supposed, though the risk exists, to be used to train models for cheating. We intend for researchers to use this dataset to better evaluate LMMs’ mathematical reasoning capabilities and consequently facilitate future studies in this area."***
⚠️⚠️⚠️ **In the very rare situation that there is a compelling reason to include MATH-V in your training set, we strongly urge that the ***testmini*** subset be excluded from the training process!**
-->
```python
from datasets import load_dataset
dataset = load_dataset("MathLLMs/MathVision")
print(dataset)
```
## 💥 News
- **[2025.03.10]** 💥 **Kimi k1.6 Preview 🥇 Sets New SOTA on MATH-V with 53.29%!** See the full [leaderboard](https://mathllm.github.io/mathvision/#leaderboard).
- **[2025.02.28]** 💥 **Doubao-1.5-pro Sets New SOTA on MATH-V with 48.62%!** Read more on the [Doubao blog](https://team.doubao.com/zh/special/doubao_1_5_pro).
- **[2025.01.26]** 🚀 [Qwen2.5-VL-72B](http://qwenlm.github.io/blog/qwen2.5-vl/) achieves **38.1%**, establishing itself as the best-performing open-sourced model. 🎉 Congratulations!
- **[2025.01.22]** 💥 **Kimi k1.5 achieves new SOTA** on MATH-Vision with **38.6%**! Learn more at the [Kimi k1.5 report](https://arxiv.org/pdf/2501.12599).
- **[2024-09-27]** **MATH-V** is accepted by NeurIPS DB Track, 2024! 🎉🎉🎉
- **[2024-08-29]** 🔥🔥🔥 Qwen2-VL-72B achieves new open-sourced SOTA on MATH-Vision with 25.9! 🎉 Congratulations! Learn more at the [Qwen2-VL blog](https://qwenlm.github.io/blog/qwen2-vl/).
- **[2024-07-19]** [open-compass/VLMEvalKit](https://github.com/open-compass/VLMEvalKit) now supports **MATH-V**, utilizing LLMs for more accurate answer extraction!🔥🔥
- **[2024-05-19]** OpenAI's **GPT-4o** scores **30.39%** on **MATH-V**, a considerable advance in a short time! 💥
- **[2024-03-01]** **InternVL-Chat-V1-2-Plus** achieves **16.97%**, establishing itself as the new best-performing open-sourced model. 🎉 Congratulations!
- **[2024-02-23]** Our dataset is now accessible at [huggingface](https://huggingface.co/datasets/MathLLMs/MathVision).
- **[2024-02-22]** The top-performing model, **GPT-4V** only scores **23.98%** on **MATH-V**, while human performance is around **70%**.
- **[2024-02-22]** Our paper is now accessible at [ArXiv Paper](https://arxiv.org/abs/2402.14804).
## 👀 Introduction
Recent advancements in Large Multimodal Models (LMMs) have shown promising results in mathematical reasoning within visual contexts, with models approaching human-level performance on existing benchmarks such as MathVista. However, we observe significant limitations in the diversity of questions and breadth of subjects covered by these benchmarks. To address this issue, we present the MATH-Vision (MATH-V) dataset, a meticulously curated collection of 3,040 high-quality mathematical problems with visual contexts sourced from real math competitions. Spanning 16 distinct mathematical disciplines and graded across 5 levels of difficulty, our dataset provides a comprehensive and diverse set of challenges for evaluating the mathematical reasoning abilities of LMMs.
<p align="center">
<img src="https://raw.githubusercontent.com/mathvision-cuhk/MathVision/main/assets/figures/figure1_new.png" width="66%"> The accuracies of four prominent Large Multimodal Models (LMMs), random chance, and human <br>
performance are evaluated on our proposed <b>MATH-Vision (MATH-V)</b> across 16 subjects.
</p>
<br>
Through extensive experimentation, we unveil a notable performance gap between current LMMs and human performance on MATH-V, underscoring the imperative for further advancements in LMMs.
You can refer to the [project homepage](https://mathvision-cuhk.github.io/) for more details.
## 🏆 Leaderboard
The leaderboard is available [here](https://mathvision-cuhk.github.io/#leaderboard).
We are committed to maintaining this dataset and leaderboard in the long run to ensure their quality!
🔔 If you find any mistakes, please post the question_id on the issue page, and we will modify it accordingly.
## 📐 Dataset Examples
Some examples of MATH-V on three subjects: analytic geometry, topology, and graph theory.
<details>
<summary>Analytic geometry</summary><p align="center">
<img src="https://raw.githubusercontent.com/mathvision-cuhk/MathVision/main/assets/examples/exam_analytic_geo.png" width="60%"> <br>
</p></details>
<details>
<summary>Topology</summary><p align="center">
<img src="https://raw.githubusercontent.com/mathvision-cuhk/MathVision/main/assets/examples/exam_topology.png" width="60%"> <br>
</p></details>
<details>
<summary>Graph theory</summary><p align="center">
<img src="https://raw.githubusercontent.com/mathvision-cuhk/MathVision/main/assets/examples/exam_graph.png" width="60%"> <br>
</p></details>
## 📑 Citation
If you find this benchmark useful in your research, please consider citing this BibTex:
```
@inproceedings{
wang2024measuring,
title={Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset},
author={Ke Wang and Junting Pan and Weikang Shi and Zimu Lu and Houxing Ren and Aojun Zhou and Mingjie Zhan and Hongsheng Li},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=QWTCcxMpPA}
}
```
|
Cohere/miracl-en-corpus-22-12 | Cohere | "2023-02-06T11:54:52Z" | 10,930 | 2 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:en",
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-retrieval"
] | "2023-02-02T23:21:21Z" | ---
annotations_creators:
- expert-generated
language:
- en
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (en) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
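The passage segmentation described above can be sketched as follows (an illustrative simplification; the actual corpus was built with WikiExtractor, and this only shows the unit of retrieval):

```python
def segment_article(title: str, text: str) -> list[dict]:
    """Split plain article text into passages on blank lines, keeping the title."""
    passages = [p.strip() for p in text.split("\n\n") if p.strip()]
    return [{"title": title, "text": p} for p in passages]

article = "First discourse unit.\n\nSecond discourse unit."
docs = segment_article("Example Article", article)
print(len(docs))  # 2
```

Each resulting `{"title", "text"}` pair corresponds to one "document" in the corpus.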
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
Then compare the query embedding with the document embeddings, either using a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-en-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
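For reference, these two metrics can be sketched in code (a simplified binary-relevance version; the official MIRACL evaluation uses graded judgments and standard IR tooling):

```python
import math

def hit_at_k(ranked_ids, relevant_ids, k=3):
    """1.0 if any relevant document appears in the top-k results, else 0.0."""
    return 1.0 if any(d in relevant_ids for d in ranked_ids[:k]) else 0.0

def ndcg_at_k(ranked_ids, relevant_ids, k=10):
    """Binary-relevance nDCG: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(ranked_ids[:k]) if d in relevant_ids)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant_ids), k)))
    return dcg / ideal if ideal > 0 else 0.0

print(hit_at_k(["d3", "d7", "d1"], {"d1"}))   # 1.0
print(ndcg_at_k(["d3", "d7", "d1"], {"d1"}))  # 0.5 (relevant doc at rank 3)
```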
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
elsaEU/ELSA1M_track1 | elsaEU | "2023-08-27T08:01:57Z" | 10,925 | 3 | [
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-07-18T16:50:36Z" | ---
elsaEU--ELSA1M_track1:
description: ''
citation: ''
homepage: ''
license: ''
features:
image:
decode: true
id: null
dtype: Image
id:
dtype: string
id: null
_type: Value
original_prompt:
dtype: string
id: null
_type: Value
positive_prompt:
dtype: string
id: null
_type: Value
negative_prompt:
dtype: string
id: null
_type: Value
model:
dtype: string
id: null
_type: Value
nsfw:
dtype: string
id: null
_type: Value
url_real_image:
dtype: string
id: null
_type: Value
filepath:
dtype: string
id: null
_type: Value
aspect_ratio:
feature:
dtype: int64
id: null
_type: Value
length: -1
id: null
_type: Sequence
post_processed: null
supervised_keys: null
task_templates: null
builder_name: imagefolder
config_name: default
version:
version_str: 0.0.0
description: null
major: 0
minor: 0
patch: 0
splits:
train:
name: train
num_bytes: 445926712527.43
num_examples: 992655
dataset_name: ELSA1M_track1
download_checksums: null
download_size: 223034360161
post_processing_size: null
dataset_size: 445926712527.43
size_in_bytes: 668961072688.4299
license: cc-by-4.0
---
# ELSA - Multimedia use case

**ELSA Multimedia is a large collection of Deep Fake images, generated using diffusion models**
### Dataset Summary
This dataset was developed as part of the EU project ELSA, specifically for the Multimedia use case.
Official webpage: https://benchmarks.elsa-ai.eu/
This dataset aims to develop effective solutions for detecting and mitigating the spread of deep fake images in multimedia content. Deep fake images, which are highly realistic and deceptive manipulations, pose significant risks to privacy, security, and trust in digital media. This dataset can be used to train robust and accurate models that can identify and flag instances of deep fake images.
### ELSA versions
| Name | Description | Link |
| ------------- | ------------- | ---------------------|
| ELSA1M_track1 | Dataset of 1M images generated using diffusion model | https://huggingface.co/datasets/elsaEU/ELSA1M_track1 |
| ELSA500k_track2 | Dataset of 500k images generated using diffusion model with diffusion attentive attribution maps [1] | https://huggingface.co/datasets/elsaEU/ELSA500k_track2 |
```python
from datasets import load_dataset
elsa_data = load_dataset("elsaEU/ELSA1M_track1", split="train", streaming=True)
for sample in elsa_data:
image = sample.pop("image")
metadata = sample
```
Using <a href="https://huggingface.co/docs/datasets/stream">streaming=True</a> lets you work with the dataset without downloading it.
## Dataset Structure
Each parquet file contains nearly 1k images and a JSON file with metadata.
The metadata fields for generated images are:
- ID: Laion image ID
- original_prompt: Laion Prompt
- positive_prompt: positive prompt used for image generation
- negative_prompt: negative prompt used for image generation
- model: model used for the image generation
- nsfw: nsfw tag from Laion
- url_real_image: URL of the real image associated with the same prompt
- filepath: filepath of the fake image
- aspect_ratio: aspect ratio of the generated image
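Building on the streaming example above, these fields can be used to filter samples on the fly. A minimal sketch of such a filter (the accepted `nsfw` values and the aspect-ratio bounds are illustrative assumptions, and the metadata is simulated with plain dicts):

```python
def keep_sample(meta: dict) -> bool:
    """Example filter: drop NSFW-tagged samples and extreme aspect ratios.

    The accepted nsfw strings and the 0.5-2.0 ratio bounds are arbitrary
    choices for illustration, not part of the dataset specification.
    """
    if meta.get("nsfw", "").lower() not in ("", "false", "unlikely"):
        return False
    w, h = meta.get("aspect_ratio", (1, 1))
    return 0.5 <= w / h <= 2.0

samples = [
    {"id": "a", "nsfw": "false", "aspect_ratio": (512, 512)},
    {"id": "b", "nsfw": "true", "aspect_ratio": (512, 512)},
    {"id": "c", "nsfw": "false", "aspect_ratio": (2048, 512)},
]
kept = [s["id"] for s in samples if keep_sample(s)]
print(kept)  # ['a']
```

With `streaming=True`, the same predicate can be applied inside the iteration loop so only matching samples are kept.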
### Dataset Curators
- Leonardo Labs ([email protected])
- UNIMORE (https://aimagelab.ing.unimore.it/imagelab/) |
WenhaoWang/VideoUFO | WenhaoWang | "2025-03-06T10:40:21Z" | 10,904 | 12 | [
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:image-to-video",
"task_categories:image-to-image",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2503.01739",
"region:us",
"video-generation",
"text-to-video-dataset"
] | [
"text-to-video",
"text-to-image",
"image-to-video",
"image-to-image"
] | "2025-02-18T04:18:29Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 1M<n<10M
task_categories:
- text-to-video
- text-to-image
- image-to-video
- image-to-image
dataset_info:
features:
- name: ID
dtype: string
- name: Middle_Frame
dtype: image
- name: Topic
dtype: string
- name: Detailed_Caption
dtype: string
- name: Brief_Caption
dtype: string
- name: Start_Time
dtype: string
- name: End_Time
dtype: string
- name: Aesthetic_Quality
dtype: float32
- name: Background_Consistency
dtype: float32
- name: Dynamic_Degree
dtype: float32
- name: Imaging_Quality
dtype: float32
- name: Motion_Smoothness
dtype: float32
- name: Subject_Consistency
dtype: float32
splits:
- name: Full
num_bytes: 46459680631.0
num_examples: 1091712
download_size: 91635996940
dataset_size: 92919361262.0
configs:
- config_name: default
data_files:
- split: Full
path: data/Full-*
tags:
- video-generation
- text-to-video-dataset
---
# Summary
This is the dataset proposed in our paper [**VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video Generation**](https://huggingface.co/papers/2503.01739).
VideoUFO is the first dataset curated in alignment with real-world users’ focused topics for text-to-video generation. Specifically, the dataset comprises over 1.09 million video clips spanning 1,291 topics. Here, we select the top 20 most popular topics for illustration.
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VideoUFO/resolve/main/assets/teasor.png" width="1000">
</p>
# Visual comparison
Visual comparisons between our approach (MVDiT-VideoUFO) and other text-to-video models. The model trained on VideoUFO outperforms the alternatives in generating user-focused topics.
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VideoUFO/resolve/main/assets/compare.png" width="1000">
</p>
# Data point
Each data point in our VideoUFO includes a video clip, an ID, a topic, start and end times, a brief caption, and a detailed caption. Beyond that, we evaluate each clip with six different video quality scores from VBench.
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VideoUFO/resolve/main/assets/datapoint.png" width="1000">
</p>
# Statistics
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VideoUFO/resolve/main/assets/stat_a.png" width="1000">
</p>
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VideoUFO/resolve/main/assets/stat_b.png" width="1000">
</p>
# Download
For users in mainland China, try setting `export HF_ENDPOINT=https://hf-mirror.com` to successfully download the datasets.
## Download the metadata of VideoUFO
```python
from datasets import load_dataset
ds = load_dataset("WenhaoWang/VideoUFO", split='Full', streaming=False)
```
or
```
wget https://huggingface.co/datasets/WenhaoWang/VideoUFO/resolve/main/VideoUFO.csv
```
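Once the metadata is loaded, the per-clip VBench scores listed in the schema above can be used to select a high-quality subset. A small sketch (the thresholds are arbitrary examples, and the rows are simulated with plain dicts in place of the loaded dataset):

```python
def high_quality(row: dict, min_aesthetic=0.55, min_motion=0.95) -> bool:
    """Keep clips whose VBench scores clear the given (example) thresholds."""
    return (row["Aesthetic_Quality"] >= min_aesthetic
            and row["Motion_Smoothness"] >= min_motion)

# In practice `rows` would be the loaded dataset or CSV; tiny example rows here:
rows = [
    {"ID": "clip1", "Aesthetic_Quality": 0.62, "Motion_Smoothness": 0.97},
    {"ID": "clip2", "Aesthetic_Quality": 0.40, "Motion_Smoothness": 0.99},
]
subset = [r["ID"] for r in rows if high_quality(r)]
print(subset)  # ['clip1']
```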
## Download the videos in VideoUFO
Please note that due to bandwidth costs, we compress the released videos. However, the total size is still approximately 800GB.
```python
from huggingface_hub import hf_hub_download
for i in range(1,201):
hf_hub_download(repo_id="WenhaoWang/VideoUFO", filename="VideoUFO_tar/VideoUFO_%d.tar"%i, repo_type="dataset")
```
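After downloading, the shards can be unpacked with the Python standard library; a minimal sketch (the output directory name is an arbitrary choice):

```python
import tarfile
from pathlib import Path

def extract_shard(tar_path: str, out_dir: str) -> int:
    """Extract one downloaded shard and return the number of members inside."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    with tarfile.open(tar_path) as tf:
        tf.extractall(out_dir)
        return len(tf.getnames())

# e.g. for each downloaded shard:
# n = extract_shard("VideoUFO_tar/VideoUFO_1.tar", "VideoUFO_videos")
```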
# Comparison with other datasets
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VideoUFO/resolve/main/assets/comparison_datasets.png" width="1000">
</p>
# License
The videos in our VideoUFO are licensed under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/deed.en).
# Curators
VideoUFO is created by [Wenhao Wang](https://wangwenhao0716.github.io/) and Professor [Yi Yang](https://scholar.google.com/citations?user=RMSuNFwAAAAJ&hl=zh-CN).
# Citation
```
@article{wang2025VideoUFO,
  title={VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video Generation},
  author={Wang, Wenhao and Yang, Yi},
  journal={arXiv preprint arXiv:2503.01739},
  year={2025}
}
```
# Contact
If you have any questions, feel free to contact Wenhao Wang ([email protected]). |
mitermix/chess-selfplay | mitermix | "2023-05-22T06:58:34Z" | 10,877 | 6 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-05-18T08:56:25Z" | ---
license: apache-2.0
---
|
QingyiSi/Alpaca-CoT | QingyiSi | "2023-09-14T08:52:10Z" | 10,875 | 726 | [
"language:en",
"language:zh",
"language:ml",
"license:apache-2.0",
"region:us",
"Instruction",
"Cot"
] | null | "2023-03-25T14:58:30Z" | ---
language:
- en
- zh
- ml
tags:
- Instruction
- Cot
license: apache-2.0
datasets:
- dataset1
- dataset2
---
# Instruction-Finetuning Dataset Collection (Alpaca-CoT)
This repository will continuously collect various instruction tuning datasets. And we standardize different datasets into the same format, which can be directly loaded by the [code](https://github.com/PhoebusSi/alpaca-CoT) of Alpaca model.
We also have conducted empirical study on various instruction-tuning datasets based on the Alpaca model, as shown in [https://github.com/PhoebusSi/alpaca-CoT](https://github.com/PhoebusSi/alpaca-CoT).
If you think this dataset collection is helpful to you, please `like` this dataset and `star` our [github project](https://github.com/PhoebusSi/alpaca-CoT)!
You are in a warm welcome to provide us with any non-collected instruction-tuning datasets (or their sources). We will uniformly format them, train Alpaca model with these datasets and open source the model checkpoints.
# Contribute
Welcome to join us and become a contributor to this project!
If you want to share some datasets, adjust the data in the following format:
```
example.json
[
{"instruction": instruction string,
"input": input string, # (may be empty)
"output": output string}
]
```
Folder should be like this:
```
Alpaca-CoT
|
|----example
| |
| |----example.json
| |
| ----example_context.json
...
```
Create a new pull request in [Community
](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/discussions) and publish your branch when you are ready. We will merge it as soon as we can.
# Data Usage and Resources
## Data Format
All data in this folder is formatted into the same templates, where each sample is as follows:
```
[
{"instruction": instruction string,
"input": input string, # (may be empty)
"output": output string}
]
```
## alpaca
#### alpaca_data.json
> This dataset is published by [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca). It contains 52K English instruction-following samples obtained by [Self-Instruction](https://github.com/yizhongw/self-instruct) techniques.
#### alpaca_data_cleaned.json
> This dataset is obtained [here](https://github.com/tloen/alpaca-lora). It is a revised version of `alpaca_data.json` by stripping of various tokenization artifacts.
## alpacaGPT4
#### alpaca_gpt4_data.json
> This dataset is published by [Instruction-Tuning-with-GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM).
It contains 52K English instruction-following samples generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
#### alpaca_gpt4_data_zh.json
> This dataset is generated by GPT-4 using Chinese prompts translated from Alpaca by ChatGPT.
<!-- ## belle_cn
#### belle_data_cn.json
This dataset is published by [BELLE](https://github.com/LianjiaTech/BELLE). It contains 0.5M Chinese instruction-following samples, which is also generated by [Self-Instruction](https://github.com/yizhongw/self-instruct) techniques.
#### belle_data1M_cn.json
This dataset is published by [BELLE](https://github.com/LianjiaTech/BELLE). It contains 1M Chinese instruction-following samples. The data of `belle_data_cn.json` and `belle_data1M_cn.json` are not duplicated. -->
## Chain-of-Thought
#### CoT_data.json
> This dataset is obtained by formatting the combination of 9 CoT datasets published by [FLAN](https://github.com/google-research/FLAN). It contains 9 CoT tasks involving 74771 samples.
#### CoT_CN_data.json
> This dataset is obtained by tranlating `CoT_data.json` into Chinese, using Google Translate(en2cn).
#### formatted_cot_data folder
> This folder contains the formatted English data for each CoT dataset.
#### formatted_cot_data folder
> This folder contains the formatted Chinese data for each CoT dataset.
## CodeAlpaca
#### code_alpaca.json
> This dataset is published by [codealpaca](https://github.com/sahil280114/codealpaca). It contains code generation task involving 20022 samples.
## finance
#### finance_en.json
> This dataset is collected from [here](https://huggingface.co/datasets/gbharti/finance-alpaca). It contains 68912 financial related instructions in English.
## firefly
#### firefly.json
> his dataset is collected from [here](https://github.com/yangjianxin1/Firefly). It contains 1649398 chinese instructions in 23 nlp tasks.
## GPT4all
#### gpt4all.json
> This dataset is collected from [here](https://github.com/nomic-ai/gpt4all). It contains 806199 en instructions in code, storys and dialogs tasks.
#### gpt4all_without_p3.json
> gpt4all without Bigscience/P3, contains 437605 samples.
## GPTeacher
#### GPTeacher.json
> This dataset is collected from [here](https://github.com/teknium1/GPTeacher). It contains 29013 en instructions generated by GPT-4, General-Instruct - Roleplay-Instruct - Code-Instruct - and Toolformer.
## Guanaco
#### GuanacoDataset.json
> This dataset is collected from [here](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset). It contains 534610 en instructions generated by text-davinci-003 upon 175 tasks from the Alpaca model by providing rewrites of seed tasks in different languages and adding new tasks specifically designed for English grammar analysis, natural language understanding, cross-lingual self-awareness, and explicit content recognition.
#### Guanaco_additional_Dataset.json
> A new additional larger dataset for different languages.
## HC3
#### HC3_ChatGPT.json/HC3_Human.json
> This dataset is collected from [here](https://huggingface.co/datasets/Hello-SimpleAI/HC3). It contains 37175 en/zh instructions generated by ChatGPT and human.
#### HC3_ChatGPT_deduplication.json/HC3_Human_deduplication.json
> HC3 dataset without deduplication instructions.
## instinwild
#### instinwild_en.json & instinwild_cn.json
> The two datasets are obtained [here](https://github.com/XueFuzhao/InstructionWild). It contains 52191 English and 51504 Chinese instructions, which are collected from Twitter, where users tend to share their interesting prompts of mostly generation, open QA, and mind-storm types. (Colossal AI used these datasets to train the ColossalChat model.)
## instruct
#### instruct.json
> The two datasets are obtained [here](https://huggingface.co/datasets/swype/instruct). It contains 888969 English instructions, which are caugmentation performed using the advanced NLP tools provided by AllenAI.
## Natural Instructions
#### natural-instructions-1700tasks.zip
> This dataset is obtained [here](https://github.com/allenai/natural-instructions). It contains 5040134 instructions, which are collected from diverse nlp tasks
## prosocial dialog
#### natural-instructions-1700tasks.zip
> This dataset is obtained [here](https://huggingface.co/datasets/allenai/prosocial-dialog). It contains 165681 English instructions, which are produuced by GPT-3 rewrites questions and humans feedback
## xP3
#### natural-instructions-1700tasks.zip
> This dataset is obtained [here](https://huggingface.co/datasets/bigscience/xP3). It contains 78883588 instructions, which are collected by prompts & datasets across 46 of languages & 16 NLP tasks
## Chinese-instruction-collection
> all datasets of Chinese instruction collection
## combination
#### alcapa_plus_belle_data.json
> This dataset is the combination of English `alpaca_data.json` and Chinese `belle_data_cn.json`.
#### alcapa_plus_cot_data.json
> This dataset is the combination of English `alpaca_data.json` and CoT `CoT_data.json`.
#### alcapa_plus_belle_cot_data.json
> This dataset is the combination of English `alpaca_data.json`, Chinese `belle_data_cn.json` and CoT `CoT_data.json`.
## Citation
Please cite the repo if you use the data collection, code, and experimental findings in this repo.
```
@misc{alpaca-cot,
author = {Qingyi Si, Zheng Lin },
school = {Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China},
title = {Alpaca-CoT: An Instruction Fine-Tuning Platform with Instruction Data Collection and Unified Large Language Models Interface},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/PhoebusSi/alpaca-CoT}},
}
```
Cite the original Stanford Alpaca, BELLE and FLAN papers as well, please.
|
chcorbi/helvipad | chcorbi | "2025-03-24T09:12:06Z" | 10,851 | 6 | [
"task_categories:depth-estimation",
"source_datasets:original",
"license:cc0-1.0",
"size_categories:10K<n<100K",
"modality:image",
"region:us",
"omnidirectional",
"stereo-matching",
"depth-estimation",
"image"
] | [
"depth-estimation"
] | "2024-12-09T19:13:40Z" | ---
license: cc0-1.0
task_categories:
- depth-estimation
tags:
- omnidirectional
- stereo-matching
- depth-estimation
- image
source_datasets:
- original
pretty_name: Helvipad
size_categories:
- 10K<n<100K
paperswithcode_id: helvipad
dataset_info:
config_name: default
features:
- name: images_top
dtype: image
- name: images_bottom
dtype: image
- name: depth_maps
dtype: image
- name: disparity_maps
dtype: image
- name: depth_maps_augmented
dtype: image
- name: disparity_maps_augmented
dtype: image
splits:
- name: train
num_examples: 26412
- name: val
num_examples: 2995
- name: test
num_examples: 10146
configs:
- config_name: default
data_files:
- split: train
path: train/**
- split: val
path: val/**
- split: test
path: test/**
default: true
---
# HELVIPAD: A Real-World Dataset for Omnidirectional Stereo Depth Estimation [](https://vita-epfl.github.io/Helvipad/)
The <span style="font-variant: small-caps;">Helvipad</span> dataset is a real-world stereo dataset designed for omnidirectional depth estimation. It comprises 39,553 paired equirectangular images captured using a top-bottom 360° camera setup and corresponding pixel-wise depth and disparity labels derived from LiDAR point clouds. The dataset spans diverse indoor and outdoor scenes under varying lighting conditions, including night-time environments.
## News
- **[16/02/2025]** Helvipad has been accepted to CVPR 2025! 🎉🎉
- **[CVPR Update – 16/03/2025]** If you already downloaded the dataset, we have applied a small but important update:
- **train/val split**: the previous `train/` folder is now split into `train/` and `val/` subsets.
- **bottom image fix** (`images_bottom/`): a minor horizontal shift correction has been applied to bottom images in `train/`, `val/`, and `test/`.
- **disparity and depth maps adjustment** (`disparity_maps/`, `depth_maps/`, `disparity_maps_augmented/`, `depth_maps_augmented/`): a small vertical shift was corrected in both standard and augmented depth and disparity maps in `train/`, `val/`, and `test/`.
We have re-run all experiments, and the updated dataset produces similar results.
## Dataset Structure
The dataset is organized into training, validation and testing subsets with the following structure:
```
helvipad/
├── train/
│ ├── depth_maps # Depth maps generated from LiDAR data
│ ├── depth_maps_augmented # Augmented depth maps using depth completion
│ ├── disparity_maps # Disparity maps computed from depth maps
│ ├── disparity_maps_augmented # Augmented disparity maps using depth completion
│ ├── images_top # Top-camera RGB images
│ ├── images_bottom # Bottom-camera RGB images
│ ├── LiDAR_pcd # Original LiDAR point cloud data
├── val/
│ ├── depth_maps # Depth maps generated from LiDAR data
│ ├── depth_maps_augmented # Augmented depth maps using depth completion
│ ├── disparity_maps # Disparity maps computed from depth maps
│ ├── disparity_maps_augmented # Augmented disparity maps using depth completion
│ ├── images_top # Top-camera RGB images
│ ├── images_bottom # Bottom-camera RGB images
│ ├── LiDAR_pcd # Original LiDAR point cloud data
├── test/
│ ├── depth_maps # Depth maps generated from LiDAR data
│ ├── depth_maps_augmented # Augmented depth maps using depth completion (only for computing LRCE)
│ ├── disparity_maps # Disparity maps computed from depth maps
│ ├── disparity_maps_augmented # Augmented disparity maps using depth completion (only for computing LRCE)
│ ├── images_top # Top-camera RGB images
│ ├── images_bottom # Bottom-camera RGB images
│ ├── LiDAR_pcd # Original LiDAR point cloud data
```
The dataset repository also includes:
- `helvipad_utils.py`: utility functions for reading depth and disparity maps, converting disparity to depth, and handling disparity values in pixels and degrees;
- `calibration.json`: intrinsic and extrinsic calibration parameters for the stereo cameras and LiDAR sensor.
## Benchmark
We evaluate the performance of multiple state-of-the-art and popular stereo matching methods, both for standard and 360° images. All models are trained on a single NVIDIA A100 GPU with
the largest possible batch size to ensure comparable use of computational resources.
| Method | Stereo Setting | Disp-MAE (°) | Disp-RMSE (°) | Disp-MARE | Depth-MAE (m) | Depth-RMSE (m) | Depth-MARE | Depth-LRCE (m) |
|--------------------|-------------------|---------------|----------------|------------|----------------|----------------|-----------------|---------------------|
| PSMNet | conventional | 0.286 | 0.496 | 0.248 | 2.509 | 5.673 | 0.176 | 1.809 |
| 360SD-Net | omnidirectional | 0.224 | 0.419 | 0.191 | 2.122 | 5.077 | 0.152 | 0.904 |
| IGEV-Stereo | conventional | 0.225 | 0.423 | 0.172 | 1.860 | 4.447 | 0.146 | 1.203 |
| 360-IGEV-Stereo | omnidirectional | **0.188** | **0.404** | **0.146** | **1.720** | **4.297** | **0.130** | **0.388** |
## Project Page
For more information, visualizations, and updates, visit the **[project page](https://vita-epfl.github.io/Helvipad/)**.
## License
This dataset is licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by the [EPFL Center for Imaging](https://imaging.epfl.ch/) through a Collaborative Imaging Grant.
We thank the VITA lab members for their valuable feedback, which helped to enhance the quality of this manuscript.
We also express our gratitude to Dr. Simone Schaub-Meyer and Oliver Hahn for their insightful advice during the project's final stages.
## Citation
If you use the Helvipad dataset in your research, please cite our paper:
```bibtex
@inproceedings{zayene2025helvipad,
author = {Zayene, Mehdi and Endres, Jannik and Havolli, Albias and Corbière, Charles and Cherkaoui, Salim and Ben Ahmed Kontouli, Alexandre and Alahi, Alexandre},
title = {Helvipad: A Real-World Dataset for Omnidirectional Stereo Depth Estimation},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2025}
}
``` |
Neel-Gupta/owt-processed_512 | Neel-Gupta | "2024-12-16T16:10:54Z" | 10,834 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-16T15:22:51Z" | ---
dataset_info:
features:
- name: text
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 281226340096
num_examples: 44656
download_size: 30432385846
dataset_size: 281226340096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PRIME-RL/Eurus-2-RL-Data | PRIME-RL | "2025-02-19T12:14:49Z" | 10,833 | 30 | [
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.01456",
"arxiv:2412.01981",
"region:us"
] | null | "2024-12-31T07:01:21Z" | ---
license: mit
---
# Eurus-2-RL-Data
## Links
- 📜 [Paper](https://arxiv.org/abs/2502.01456)
- 📜 [Blog](https://curvy-check-498.notion.site/Process-Reinforcement-through-Implicit-Rewards-15f4fcb9c42180f1b498cc9b2eaf896f)
- 🤗 [PRIME Collection](https://huggingface.co/PRIME-RL)
## Introduction
Eurus-2-RL-Data is a high-quality RL training dataset of mathematics and coding problems with outcome verifiers (LaTeX answers for math and test cases for coding).
- For math, we source from [NuminaMath-CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT). The problems span from Chinese high school mathematics to International Mathematical Olympiad competition questions.
- For coding, we source from [APPS](https://huggingface.co/datasets/codeparrot/apps), [CodeContests](https://huggingface.co/datasets/deepmind/code_contests), [TACO](https://huggingface.co/datasets/BAAI/TACO), and [Codeforces](https://huggingface.co/datasets/MatrixStudio/Codeforces-Python-Submissions). The problems are mainly programming competition level.
To further increase data quality, we conduct detailed cleaning and filtering.
- For math, we use advanced reasoning models like [Qwen-QwQ](https://huggingface.co/Qwen/QwQ-32B-Preview) to filter out problems that are unsolvable, unmatchable, or with incorrect answers. We also reformat multiple-choice questions to open questions.
- For coding, we mainly filter out duplicated problems.
Detailed data preprocessing can be found [here](https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data#detailed-rl-data-preprocessing). Finally, we retain **455k** math problems and **26k** coding problems.
## Usage
```python
from datasets import load_dataset
ds = load_dataset("PRIME-RL/Eurus-2-RL-Data")
print(ds)
# DatasetDict({
# train: Dataset({
# features: ['data_source', 'prompt', 'ability', 'reward_model', 'extra_info'],
# num_rows: 480537
# })
# validation: Dataset({
# features: ['data_source', 'prompt', 'ability', 'reward_model', 'extra_info'],
# num_rows: 2048
# })
# })
```
## Statistics
| | Train | Validation |
| ------ | ------ | ---------- |
| Math | 455261 | 1024 |
| Coding | 25276 | 1024 |
## Data Example
Math
```json
{
'data_source': 'numina_olympiads',
'prompt': array([
{'content': '\nWhen tackling complex reasoning tasks, you have access to the following actions. Use them as needed to progress through your thought process.\n\n[ASSESS]\n\n[ADVANCE]\n\n[VERIFY]\n\n[SIMPLIFY]\n\n[SYNTHESIZE]\n\n[PIVOT]\n\n[OUTPUT]\n\nYou should strictly follow the format below:\n\n[ACTION NAME]\n\n# Your action step 1\n\n# Your action step 2\n\n# Your action step 3\n\n...\n\nNext action: [NEXT ACTION NAME]\n\n', 'role': 'system'},
{'content': 'Find the matrix of the operator $\\widehat{A}$ in the basis $\\mathbf{e}_{1}^{\\prime}, \\mathbf{e}_{2}^{\\prime}, \\mathbf{e}_{3}^{\\prime}$, where\n\n$$\n\\begin{aligned}\n& \\mathbf{e}_{1}^{\\prime}=\\mathbf{e}_{1}+\\mathbf{e}_{2}+2 \\mathbf{e}_{3}, \\\\\n& \\mathbf{e}_{2}^{\\prime}=2 \\mathbf{e}_{1}-\\mathbf{e}_{2} \\\\\n& \\mathbf{e}_{3}^{\\prime}=-\\mathbf{e}_{1}+\\mathbf{e}_{2}+\\mathbf{e}_{3},\n\\end{aligned}\n$$\n\nif in the basis $\\mathbf{e}_{1}, \\mathbf{e}_{2}, \\mathbf{e}_{3}$ its matrix is given by\n\n$$\nA_{\\mathbf{e}}=\\left(\\begin{array}{rrr}\n2 & 0 & -1 \\\\\n0 & 1 & -2 \\\\\n-1 & 2 & 0\n\\end{array}\\right)\n$$\n\nPresent the answer in LaTex format: \\boxed{Your answer}', 'role': 'user'}],
dtype=object),
'ability': 'math',
'reward_model': {'ground_truth': '\\begin{pmatrix}\n -7 & 6 & -8 \\\\\n 11 & -9 & 12 \\\\\n 15 & -16 & 19\n \\end{pmatrix}', 'style': 'rule'},
'extra_info': {'index': 0, 'split': 'dummy'}
}
```
Coding
```json
{
'data_source': 'taco',
'prompt': array([
{'content': '\nWhen tackling complex reasoning tasks, you have access to the following actions. Use them as needed to progress through your thought process.\n\n[ASSESS]\n\n[ADVANCE]\n\n[VERIFY]\n\n[SIMPLIFY]\n\n[SYNTHESIZE]\n\n[PIVOT]\n\n[OUTPUT]\n\nYou should strictly follow the format below:\n\n[ACTION NAME]\n\n# Your action step 1\n\n# Your action step 2\n\n# Your action step 3\n\n...\n\nNext action: [NEXT ACTION NAME]\n\n', 'role': 'system'},
{'content': 'Xander Cage has a list of cities he can visit on his new top-secret mission. He represents each city as a tuple of $(latitude,longitude,height,points)$. The values of $latitude$, $longitude$, and $height$ are distinct across all cities.\n\nWe define a mission as a sequence of cities, $c_1,c_2,c_3,\\ldots,c_k$, that he visits. We define the total $\\text{points}$ of such a mission to be the sum of the $\\text{points}$ of all the cities in his mission list.\n\nBeing eccentric, he abides by the following rules on any mission:\n\nHe can choose the number of cities he will visit (if any).\nHe can start the mission from any city.\nHe visits cities in order of strictly increasing $height$.\nThe absolute difference in $latitude$ between adjacent visited cities in his mission must be at most $d_l\\textbf{at}$.\nThe absolute difference in $longitude$ between adjacent visited cities in his mission must be at most $d_long$.\n\nGiven $\\boldsymbol{d\\text{_lat}}$, $d\\text{_long}$, and the definitions for $n$ cities, find and print the maximum possible total $\\text{points}$ that Xander can earn on a mission.\n\nInput Format\n\nThe first line contains three space-separated integers describing the respective values of $n$, $\\boldsymbol{d\\text{_lat}}$, and $d\\text{_long}$. 
\n\nEach line $\\boldsymbol{i}$ of the $n$ subsequent lines contains four space-separated integers denoting the respective $latitude$, $longitude$, $height$, and $\\text{points}$ for a city.\n\nConstraints\n\n$1\\leq n\\leq2\\times10^5$ \n$1\\leq d\\_\\textit{lat},d\\textit{long}\\leq2\\times10^5$ \n$1\\leq latitude,longitude,height\\leq2\\times10^5$ \n$-2\\times10^5\\leq\\textit{points}\\leq2\\times10^5$\n\nOutput Format\n\nPrint a single integer denoting the maximum possible $\\text{points}$ that Xander can earn on a mission.\n\nSample Input 0\n3 1 1\n1 1 1 3\n2 2 2 -1\n3 3 3 3\n\nSample Output 0\n5\n\nExplanation 0\n\nXander can start at city $1$, then go to city $2$, and then go to city $3$ for a maximum value of total $points=3+-1+3=5$ \n\nNote that he cannot go directly from city $1$ to city $3$ as that would violate his rules that the absolute difference in $latitude$ between adjacent visited cities be $\\leq d\\text{_lat}$ and the absolute difference in $longitude$ between adjacent visited cities be $\\leq d\\text{_long}$. Because $d\\textit{_lat}=1$ and $d\\textit{_long}=1$, he cannot directly travel between those cities.\n\nWrite Python code to solve the problem. Present the code in \n```python\nYour code\n```\nat the end.', 'role': 'user'}],
dtype=object),
'ability': 'code',
'reward_model': {'ground_truth': '{"inputs": ["3 2 2\\n1 1 1 3\\n2 2 2 -1\\n3 3 3 3\\n", "4 2 2\\n1 1 1 3\\n2 2 2 -1\\n3 3 3 3\\n4 4 4 5\\n", "5 2 2\\n1 1 1 3\\n2 2 2 -1\\n3 3 3 3\\n4 4 4 5\\n5 5 5 1\\n", "2 1 1\\n1 1 1 3\\n2 2 2 5\\n", "3 1 1\\n1 1 1 3\\n1 2 2 5\\n1 3 3 6\\n", "5 200000 200000\\n1 1 1 200000\\n200000 200000 200000 200000\\n400000 400000 400000 200000\\n600000 600000 600000 200000\\n800000 800000 800000 200000\\n"], "outputs": ["6", "11", "12", "8", "14", "1000000"]}', 'style': 'rule'},
'extra_info': {'index': 0, 'split': 'dummy'}
}
```
Detailed descriptions of the different fields can be found [here](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html).
## Detailed RL Data Preprocessing
### Data Filtering and Question-Type Classification
The preprocessing pipeline employs a systematic rule-based approach to filter and classify mathematical problems to create a high-quality dataset with solvable problems, appropriate difficulty levels, and correct solutions.
We exclude problems containing figures or diagrams since they require visual processing capabilities. We also remove proof questions due to difficulties in answer verification. The remaining problems are classified into question-answering, multiple-choice, or fill-in-the-blank questions based on specific patterns. Since fill-in-the-blank questions comprise less than 400 examples compared to the much larger set of multiple-choice questions, we focus solely on multiple-choice questions for further processing.
### Converting to Direct Question-Answer Format
We transform multiple-choice questions into a direct question-answer format through three sequential stages: rule-based filtering, LLM-based filtering, and LLM-based formatting.
We first identify and remove questions that inherently require multiple-choice options - specifically, those where comparing specific statements or properties is essential to the problem-solving process. These questions cannot be meaningfully converted to a direct question-answer format. The initial filtering employs simple rule-based pattern matching, searching for keywords like "following" and "statement" that typically indicate option-dependent problems.
Following the rule-based filtering, we employ Meta-Llama-3.1-8B-Instruct to perform a more nuanced classification of the remaining questions. Our pilot study revealed that while the LLM occasionally misclassifies questions, it tends to err on the conservative side - marking potentially convertible questions as requiring options rather than the reverse. Given our large dataset, we accepted this conservative approach to maintain quality.
For questions classified as convertible, we implement a two-phase reformatting process:
1. Question Reformatting: Removing choice indicators and restructuring the question to elicit direct answers
2. Solution Reformatting: Converting multiple-choice solutions into step-by-step derivations, ensuring all final answers are presented in standard LaTeX boxed format
This systematic approach maintains mathematical rigor while creating a standardized format suitable for downstream applications.
### Problem and Solution Validation
The final stage involves merging all question-answer pairs and performing LLM-based comprehensive validation. We identify two key aspects in validation: solvability and correctness.
We leverage state-of-the-art mathematical reasoning models, including QwQ-32B-Preview and Qwen2.5-Math-72B-Instruct, employing a self-consistency approach to determine problem solvability, and if solvable, verify the correctness of solutions provided in the original dataset.
To enhance validation accuracy, we first analyzed sample problems to identify characteristics of solvable and unsolvable cases and created synthetic unsolvable problems featuring missing conditions or logical contradictions. Based on these samples, we developed specialized prompts to improve the models' ability to distinguish solvability.
Each problem undergoes five independent validation attempts, where the LLM:
1. Provides step-by-step solutions using LaTeX formatting
2. Identifies insolvability due to missing conditions or logical contradictions
3. Generates complete reasoning traces for solvable problems
4. Presents final answers in standardized LaTeX boxed format (`\\boxed{}`)
5. Documents any impediments to solution completion
We evaluate two key consistency measures across multiple validation attempts:
- Status Consistency: Agreement on problem solvability
- Answer Consistency:
- Consistency of solutions across different attempts
- Agreement between generated solutions and ground truth
The final dataset retains only problems that demonstrate:
- Consistent solvability across validation attempts
- Agreement in solutions across multiple attempts
- Alignment with ground truth answers
This rigorous validation process ensures the resulting dataset comprises well-defined, solvable problems with verified, accurate solutions.
## Citation
```latex
@article{cui2025process,
title={Process reinforcement through implicit rewards},
author={Cui, Ganqu and Yuan, Lifan and Wang, Zefan and Wang, Hanbin and Li, Wendi and He, Bingxiang and Fan, Yuchen and Yu, Tianyu and Xu, Qixin and Chen, Weize and others},
journal={arXiv preprint arXiv:2502.01456},
year={2025}
}
```
```latex
@article{yuan2024implicitprm,
title={Free Process Rewards without Process Labels},
author={Lifan Yuan and Wendi Li and Huayu Chen and Ganqu Cui and Ning Ding and Kaiyan Zhang and Bowen Zhou and Zhiyuan Liu and Hao Peng},
journal={arXiv preprint arXiv:2412.01981},
year={2024}
}
``` |
ioclab/laplacian_image_aesthetic_3M | ioclab | "2023-04-21T22:30:16Z" | 10,824 | 2 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-04-21T15:35:24Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 359597047282.0
num_examples: 3000000
download_size: 359170663793
dataset_size: 359597047282.0
---
# Dataset Card for "laplacian_image_aesthetic_3M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yongchao98/SymBench | yongchao98 | "2025-02-12T15:24:08Z" | 10,811 | 3 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2502.04350",
"arxiv:2410.03524",
"region:us",
"symbolic-reasoning",
"code"
] | [
"text-generation"
] | "2025-02-08T14:10:41Z" | ---
license: apache-2.0
task_categories:
- text-generation
tags:
- symbolic-reasoning
- code
language:
- en
---
# CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance
<img src="./Figures/Tag.png" width="650px" alt="s" />
SymBench comprises 37 symbolic tasks related to the following papers. The specific description of each task is in page 16-19 of the paper'CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance'. This dataset comprises the dataset for finetuning CodeSteerLLM with SFT and DPO datasets, the SymBench with 37 tested tasks, the code scripts to synthesize the SymBench samples.
- [CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance](https://arxiv.org/pdf/2502.04350)
- [Steering Large Language Models between Code Execution and Textual Reasoning (ICLR'2025)](https://arxiv.org/pdf/2410.03524)
[Code](https://github.com/yongchao98/CodeSteer-v1.0)   
[Huggingface🤗](https://huggingface.co/yongchao98/CodeSteer-v1)   
[Model Weights](https://drive.google.com/drive/folders/1qb_rec6f8rMYtFKm0eQpad0L0uHCwgpL?usp=share_link)
[Finetune Datasets](https://drive.google.com/drive/folders/1Byn-99gFd5ckRkPMJ8-zagzW7XDfO8ie?usp=share_link)   
[SymBench Datasets](https://github.com/yongchao98/CodeSteer-v1.0/tree/main/dataset_gather)   
[SymBench Synthesis Scripts](https://github.com/yongchao98/CodeSteer-v1.0/tree/main/benchmark)
## Contents
- [Framework](#Framework)
- [Inspirations](#Inspirations)
- [Performance](#Performance)
- [Environment_Setup](#Environment_Setup)
- [LLM_API_Key_Setup](#LLM_API_Key_Setup)
- [Train_and_Test_Models](#Train_and_Test_Models)
- [Assistance](#Assistance)
- [Citation](#Citation)
## Framework
<img src="./Figures/CodeSteer-intro.png" width="800px" alt="s" />
<p align="center" style="font-size: 16px;">
Figure: CodeSteer on guiding LLM code/text generation to integrate symbolic computing. At each interaction with TaskLLM, it reviews current and previous answers, then provides guidance for the next round.
</p>
## Inspirations
<img src="./Figures/LLM-makes-simple-mistakes-gather.png" width="800px" alt="s" />
<p align="center" style="font-size: 16px;">
Figure: The cases that GPT-4o makes simple mistakes by direct textual reasoning but can reliably solve the problem with prompted to use code.
</p>
## Performance
We compare GPT-4o + CodeSteer with OpenAI o1 and DeepSeek R1 on SymBench, with 28 seen tasks and 9 unseen tasks. GPT-4o + CodeSteer surpasses o1 (82.7), R1 (76.8), and o1-preview (74.8), highlighting the importance of integrating symbolic computing into LLMs.
<img src="./Figures/Table-results.png" width="800px" alt="s" />
The cost of tokens and runtimes for each method are as follows. GPT-4o + CodeSteer costs less tokens and runtimes than o1 and R1.
<img src="./Figures/Cost-token-runtime.png" width="800px" alt="s" />
## Environment_Setup
The fine-tuning and inference of CodeSteerLLM are based on [Llama-factory](https://github.com/hiyouga/LLaMA-Factory) with some modules modified by us.
```
git clone https://github.com/yongchao98/CodeSteer-v1.0.git
cd CodeSteer-v1.0
conda create -n CodeSteer python=3.10
conda activate CodeSteer
pip install -r requirements.txt
```
## LLM_API_Key_Setup
If you want to use API-based LLMs as the TaskLLM or CodeSteerLLM, you need to set up the corresponding API keys.
1. First, create a `.env` file in your project root:
```
OPENAI_API_KEY='your_key_here'
CLAUDE_API_KEY='your_key_here'
MIXTRAL_API_KEY='your_key_here'
DEEPSEEK_API_KEY='your_key_here'
```
2. Add this `.env` file to your `.gitignore` to prevent accidentally committing it:
```bash
echo ".env" >> .gitignore
```
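Once the keys from `.env` have been loaded into the environment (e.g. via `python-dotenv` or `source .env`), they can be read at runtime. A minimal sketch, assuming the keys are already set; the helper name is illustrative and not part of the repository:

```python
import os


def get_api_key(name: str) -> str:
    """Return an API key previously loaded from the .env file into the environment."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(f"{name} is not set; check your .env file")
    return key
```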
## Train_and_Test_Models
### Create_test_samples
The synthesized test samples for the 37 SymBench tasks are in the [dataset_gather](https://github.com/yongchao98/CodeSteer-v1.0/tree/main/dataset_gather) directory. You can also synthesize samples with tunable complexity yourself using the scripts in [create_dataset](https://github.com/yongchao98/CodeSteer-v1.0/tree/main/create_dataset).
### Run inference without GPU, test closed-source LLM as CodeSteerLLM
We can directly use an unfinetuned model such as GPT-4o as the CodeSteerLLM; in that case, simply run
```bash
python benchmark_test_baseline.py
```
### Run inference with GPU, test finetuned CodeSteerLLM
We can run inference with Llama-3.1-8B on our own GPUs (the default setting in infer_CodeSteer.sh uses 4×H100 on the Harvard cluster; feel free to modify it for your own cluster). You can also download the [Model Weights](https://drive.google.com/drive/folders/1qb_rec6f8rMYtFKm0eQpad0L0uHCwgpL?usp=share_link) locally and change the path in llama3_8B_CodeSteer.yaml.
```bash
bash infer_CodeSteer.sh
# default config file is ./llama3_8B_CodeSteer.yaml using the model uploaded on Huggingface.
```
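When pointing the config at locally downloaded weights, the field to edit is the model path. A hypothetical excerpt of llama3_8B_CodeSteer.yaml (the field names follow the usual LLaMA-Factory inference config; verify against the actual file shipped in the repository):

```yaml
# Illustrative LLaMA-Factory-style keys; adjust to the actual contents of
# llama3_8B_CodeSteer.yaml in the repository.
model_name_or_path: /path/to/local/CodeSteer-8B   # local copy of the weights
template: llama3
```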
### Finetuning CodeSteerLLM with synthesized data
Both our synthesized datasets for SFT and DPO fine-tuning are in [Finetune Datasets](https://drive.google.com/drive/folders/1Byn-99gFd5ckRkPMJ8-zagzW7XDfO8ie?usp=share_link).
We use LLaMA-Factory and DeepSpeed for the fine-tuning process. First, install LLaMA-Factory:
```bash
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"
cd ..
```
Then run the training with (the default setting in train_llama3-8B-CodeSteer.sh uses 4×H100 on the Harvard cluster; feel free to modify it for your own cluster):
```bash
bash train_llama3-8B-CodeSteer.sh
```
## Assistance
We appreciate all feedback! Feel free to raise an issue for bugs, questions, or suggestions. Contact [Yongchao Chen](https://yongchao98.github.io/YongchaoChen/) and [Chuchu Fan](https://chuchu.mit.edu) with any questions or for discussion.
## Citation
```bibtex
@misc{chen2025codesteersymbolicaugmentedlanguagemodels,
title={CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance},
author={Yongchao Chen and Yilun Hao and Yueying Liu and Yang Zhang and Chuchu Fan},
year={2025},
eprint={2502.04350},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.04350},
}
```
```bibtex
@article{chen2024steering,
title={Steering Large Language Models between Code Execution and Textual Reasoning},
author={Chen, Yongchao and Jhamtani, Harsh and Sharma, Srinagesh and Fan, Chuchu and Wang, Chi},
journal={arXiv preprint arXiv:2410.03524},
year={2024}
}
``` |
gair-prox/DCLM-pro | gair-prox | "2025-02-15T11:41:05Z" | 10,806 | 8 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2409.17115",
"region:us",
"web",
"common crawl"
] | [
"text-generation"
] | "2025-02-14T09:30:19Z" | ---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- web
- common crawl
size_categories:
- 100B<n<1T
---
# 📚 DCLM-pro
<p align="center">
<img src="prox-teaser.png">
</p>
[ArXiv](http://arxiv.org/abs/2409.17115) | [Models](https://huggingface.co/collections/gair-prox/prox-general-models-65f1674f0607712c4d6eec76) | [Code](https://github.com/GAIR-NLP/ProX)
DCLM-pro is refined from [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0-parquet) using the **ProX** refining framework.
It contains more than 500B high-quality tokens, ready for general language model pre-training.
## License
DCLM-pro is based on DCLM, which is made available under a CC BY 4.0 license.
### Citation
```
@article{zhou2024programming,
title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
journal={arXiv preprint arXiv:2409.17115},
year={2024}
}
``` |
nguha/legalbench | nguha | "2024-09-30T04:35:09Z" | 10,777 | 105 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"arxiv:2308.11462",
"arxiv:2110.01799",
"arxiv:2103.06268",
"arxiv:2301.00876",
"arxiv:1911.00841",
"arxiv:2105.07903",
"region:us",
"legal",
"law",
"finance"
] | [
"text-classification",
"question-answering",
"text-generation"
] | "2023-03-16T23:03:42Z" | ---
language:
- en
license: other
size_categories:
- 10K<n<100K
task_categories:
- text-classification
- question-answering
- text-generation
tags:
- legal
- law
- finance
dataset_info:
- config_name: abercrombie
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 307
num_examples: 5
- name: test
num_bytes: 6240
num_examples: 95
download_size: 19558988
dataset_size: 6547
- config_name: canada_tax_court_outcomes
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2975
num_examples: 6
- name: test
num_bytes: 157411
num_examples: 244
download_size: 19558988
dataset_size: 160386
- config_name: citation_prediction_classification
features:
- name: answer
dtype: string
- name: citation
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 660
num_examples: 2
- name: test
num_bytes: 26112
num_examples: 108
download_size: 19558988
dataset_size: 26772
- config_name: citation_prediction_open
features:
- name: answer
dtype: string
- name: circuit
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 555
num_examples: 2
- name: test
num_bytes: 13460
num_examples: 53
download_size: 19558988
dataset_size: 14015
- config_name: consumer_contracts_qa
features:
- name: answer
dtype: string
- name: contract
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 9941
num_examples: 4
- name: test
num_bytes: 1221320
num_examples: 396
download_size: 19558988
dataset_size: 1231261
- config_name: contract_nli_confidentiality_of_agreement
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4070
num_examples: 8
- name: test
num_bytes: 43818
num_examples: 82
download_size: 19558988
dataset_size: 47888
- config_name: contract_nli_explicit_identification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3615
num_examples: 8
- name: test
num_bytes: 62133
num_examples: 109
download_size: 19558988
dataset_size: 65748
- config_name: contract_nli_inclusion_of_verbally_conveyed_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3817
num_examples: 8
- name: test
num_bytes: 81933
num_examples: 139
download_size: 19558988
dataset_size: 85750
- config_name: contract_nli_limited_use
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4855
num_examples: 8
- name: test
num_bytes: 98534
num_examples: 208
download_size: 19558988
dataset_size: 103389
- config_name: contract_nli_no_licensing
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2591
num_examples: 8
- name: test
num_bytes: 78173
num_examples: 162
download_size: 19558988
dataset_size: 80764
- config_name: contract_nli_notice_on_compelled_disclosure
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3907
num_examples: 8
- name: test
num_bytes: 80470
num_examples: 142
download_size: 19558988
dataset_size: 84377
- config_name: contract_nli_permissible_acquirement_of_similar_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2736
num_examples: 8
- name: test
num_bytes: 87469
num_examples: 178
download_size: 19558988
dataset_size: 90205
- config_name: contract_nli_permissible_copy
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3480
num_examples: 8
- name: test
num_bytes: 39015
num_examples: 87
download_size: 19558988
dataset_size: 42495
- config_name: contract_nli_permissible_development_of_similar_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3921
num_examples: 8
- name: test
num_bytes: 62603
num_examples: 136
download_size: 19558988
dataset_size: 66524
- config_name: contract_nli_permissible_post-agreement_possession
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4608
num_examples: 8
- name: test
num_bytes: 65932
num_examples: 111
download_size: 19558988
dataset_size: 70540
- config_name: contract_nli_return_of_confidential_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3499
num_examples: 8
- name: test
num_bytes: 35672
num_examples: 66
download_size: 19558988
dataset_size: 39171
- config_name: contract_nli_sharing_with_employees
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3173
num_examples: 8
- name: test
num_bytes: 104240
num_examples: 170
download_size: 19558988
dataset_size: 107413
- config_name: contract_nli_sharing_with_third-parties
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3249
num_examples: 8
- name: test
num_bytes: 104822
num_examples: 180
download_size: 19558988
dataset_size: 108071
- config_name: contract_nli_survival_of_obligations
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2272
num_examples: 8
- name: test
num_bytes: 75450
num_examples: 157
download_size: 19558988
dataset_size: 77722
- config_name: contract_qa
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2408
num_examples: 8
- name: test
num_bytes: 26370
num_examples: 80
download_size: 19558988
dataset_size: 28778
- config_name: corporate_lobbying
features:
- name: answer
dtype: string
- name: bill_summary
dtype: string
- name: bill_title
dtype: string
- name: company_description
dtype: string
- name: company_name
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 54334
num_examples: 10
- name: test
num_bytes: 2974813
num_examples: 490
download_size: 19558988
dataset_size: 3029147
- config_name: cuad_affiliate_license-licensee
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4067
num_examples: 6
- name: test
num_bytes: 115798
num_examples: 198
download_size: 19558988
dataset_size: 119865
- config_name: cuad_affiliate_license-licensor
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4247
num_examples: 6
- name: test
num_bytes: 64931
num_examples: 88
download_size: 19558988
dataset_size: 69178
- config_name: cuad_anti-assignment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2070
num_examples: 6
- name: test
num_bytes: 513026
num_examples: 1172
download_size: 19558988
dataset_size: 515096
- config_name: cuad_audit_rights
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2555
num_examples: 6
- name: test
num_bytes: 526977
num_examples: 1216
download_size: 19558988
dataset_size: 529532
- config_name: cuad_cap_on_liability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2621
num_examples: 6
- name: test
num_bytes: 587220
num_examples: 1246
download_size: 19558988
dataset_size: 589841
- config_name: cuad_change_of_control
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2231
num_examples: 6
- name: test
num_bytes: 203823
num_examples: 416
download_size: 19558988
dataset_size: 206054
- config_name: cuad_competitive_restriction_exception
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2774
num_examples: 6
- name: test
num_bytes: 115844
num_examples: 220
download_size: 19558988
dataset_size: 118618
- config_name: cuad_covenant_not_to_sue
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2581
num_examples: 6
- name: test
num_bytes: 153799
num_examples: 308
download_size: 19558988
dataset_size: 156380
- config_name: cuad_effective_date
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2080
num_examples: 6
- name: test
num_bytes: 87802
num_examples: 236
download_size: 19558988
dataset_size: 89882
- config_name: cuad_exclusivity
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1897
num_examples: 6
- name: test
num_bytes: 355097
num_examples: 762
download_size: 19558988
dataset_size: 356994
- config_name: cuad_expiration_date
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1638
num_examples: 6
- name: test
num_bytes: 354232
num_examples: 876
download_size: 19558988
dataset_size: 355870
- config_name: cuad_governing_law
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2420
num_examples: 6
- name: test
num_bytes: 337322
num_examples: 876
download_size: 19558988
dataset_size: 339742
- config_name: cuad_insurance
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2537
num_examples: 6
- name: test
num_bytes: 475827
num_examples: 1030
download_size: 19558988
dataset_size: 478364
- config_name: cuad_ip_ownership_assignment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4756
num_examples: 6
- name: test
num_bytes: 294749
num_examples: 576
download_size: 19558988
dataset_size: 299505
- config_name: cuad_irrevocable_or_perpetual_license
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 5328
num_examples: 6
- name: test
num_bytes: 160279
num_examples: 280
download_size: 19558988
dataset_size: 165607
- config_name: cuad_joint_ip_ownership
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 5011
num_examples: 6
- name: test
num_bytes: 90592
num_examples: 192
download_size: 19558988
dataset_size: 95603
- config_name: cuad_license_grant
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3690
num_examples: 6
- name: test
num_bytes: 709331
num_examples: 1396
download_size: 19558988
dataset_size: 713021
- config_name: cuad_liquidated_damages
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3579
num_examples: 6
- name: test
num_bytes: 97839
num_examples: 220
download_size: 19558988
dataset_size: 101418
- config_name: cuad_minimum_commitment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2874
num_examples: 6
- name: test
num_bytes: 354078
num_examples: 772
download_size: 19558988
dataset_size: 356952
- config_name: cuad_most_favored_nation
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2103
num_examples: 6
- name: test
num_bytes: 32800
num_examples: 64
download_size: 19558988
dataset_size: 34903
- config_name: cuad_no-solicit_of_customers
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3310
num_examples: 6
- name: test
num_bytes: 40828
num_examples: 84
download_size: 19558988
dataset_size: 44138
- config_name: cuad_no-solicit_of_employees
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3619
num_examples: 6
- name: test
num_bytes: 72661
num_examples: 142
download_size: 19558988
dataset_size: 76280
- config_name: cuad_non-compete
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3675
num_examples: 6
- name: test
num_bytes: 211272
num_examples: 442
download_size: 19558988
dataset_size: 214947
- config_name: cuad_non-disparagement
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2168
num_examples: 6
- name: test
num_bytes: 49850
num_examples: 100
download_size: 19558988
dataset_size: 52018
- config_name: cuad_non-transferable_license
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3643
num_examples: 6
- name: test
num_bytes: 269505
num_examples: 542
download_size: 19558988
dataset_size: 273148
- config_name: cuad_notice_period_to_terminate_renewal
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4166
num_examples: 6
- name: test
num_bytes: 100014
num_examples: 222
download_size: 19558988
dataset_size: 104180
- config_name: cuad_post-termination_services
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3349
num_examples: 6
- name: test
num_bytes: 419477
num_examples: 808
download_size: 19558988
dataset_size: 422826
- config_name: cuad_price_restrictions
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2945
num_examples: 6
- name: test
num_bytes: 19430
num_examples: 46
download_size: 19558988
dataset_size: 22375
- config_name: cuad_renewal_term
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2163
num_examples: 6
- name: test
num_bytes: 168528
num_examples: 386
download_size: 19558988
dataset_size: 170691
- config_name: cuad_revenue-profit_sharing
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2581
num_examples: 6
- name: test
num_bytes: 363594
num_examples: 774
download_size: 19558988
dataset_size: 366175
- config_name: cuad_rofr-rofo-rofn
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2817
num_examples: 6
- name: test
num_bytes: 338243
num_examples: 690
download_size: 19558988
dataset_size: 341060
- config_name: cuad_source_code_escrow
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2696
num_examples: 6
- name: test
num_bytes: 58125
num_examples: 118
download_size: 19558988
dataset_size: 60821
- config_name: cuad_termination_for_convenience
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1506
num_examples: 6
- name: test
num_bytes: 181164
num_examples: 430
download_size: 19558988
dataset_size: 182670
- config_name: cuad_third_party_beneficiary
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2378
num_examples: 6
- name: test
num_bytes: 24106
num_examples: 68
download_size: 19558988
dataset_size: 26484
- config_name: cuad_uncapped_liability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2570
num_examples: 6
- name: test
num_bytes: 158009
num_examples: 294
download_size: 19558988
dataset_size: 160579
- config_name: cuad_unlimited-all-you-can-eat-license
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2414
num_examples: 6
- name: test
num_bytes: 22347
num_examples: 48
download_size: 19558988
dataset_size: 24761
- config_name: cuad_volume_restriction
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1397
num_examples: 6
- name: test
num_bytes: 129456
num_examples: 322
download_size: 19558988
dataset_size: 130853
- config_name: cuad_warranty_duration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1815
num_examples: 6
- name: test
num_bytes: 142580
num_examples: 320
download_size: 19558988
dataset_size: 144395
- config_name: definition_classification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1826
num_examples: 8
- name: test
num_bytes: 371743
num_examples: 1337
download_size: 19558988
dataset_size: 373569
- config_name: definition_extraction
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2731
num_examples: 8
- name: test
num_bytes: 254689
num_examples: 687
download_size: 19558988
dataset_size: 257420
- config_name: diversity_1
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 803
num_examples: 6
- name: test
num_bytes: 41135
num_examples: 300
download_size: 19558988
dataset_size: 41938
- config_name: diversity_2
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1041
num_examples: 6
- name: test
num_bytes: 53537
num_examples: 300
download_size: 19558988
dataset_size: 54578
- config_name: diversity_3
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 992
num_examples: 6
- name: test
num_bytes: 50744
num_examples: 300
download_size: 19558988
dataset_size: 51736
- config_name: diversity_4
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1070
num_examples: 6
- name: test
num_bytes: 53464
num_examples: 300
download_size: 19558988
dataset_size: 54534
- config_name: diversity_5
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1232
num_examples: 6
- name: test
num_bytes: 62550
num_examples: 300
download_size: 19558988
dataset_size: 63782
- config_name: diversity_6
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2016
num_examples: 6
- name: test
num_bytes: 100411
num_examples: 300
download_size: 19558988
dataset_size: 102427
- config_name: function_of_decision_section
features:
- name: Citation
dtype: string
- name: Paragraph
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 1547
num_examples: 7
- name: test
num_bytes: 210419
num_examples: 367
download_size: 19558988
dataset_size: 211966
- config_name: hearsay
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: slice
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 788
num_examples: 5
- name: test
num_bytes: 17150
num_examples: 94
download_size: 19558988
dataset_size: 17938
- config_name: insurance_policy_interpretation
features:
- name: answer
dtype: string
- name: claim
dtype: string
- name: index
dtype: string
- name: policy
dtype: string
splits:
- name: train
num_bytes: 3119
num_examples: 5
- name: test
num_bytes: 70764
num_examples: 133
download_size: 19558988
dataset_size: 73883
- config_name: international_citizenship_questions
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 832
num_examples: 4
- name: test
num_bytes: 2089107
num_examples: 9306
download_size: 19558988
dataset_size: 2089939
- config_name: jcrew_blocker
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7352
num_examples: 6
- name: test
num_bytes: 59879
num_examples: 54
download_size: 19558988
dataset_size: 67231
- config_name: learned_hands_benefits
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8267
num_examples: 6
- name: test
num_bytes: 87512
num_examples: 66
download_size: 19558988
dataset_size: 95779
- config_name: learned_hands_business
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6075
num_examples: 6
- name: test
num_bytes: 202116
num_examples: 174
download_size: 19558988
dataset_size: 208191
- config_name: learned_hands_consumer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6355
num_examples: 6
- name: test
num_bytes: 795463
num_examples: 614
download_size: 19558988
dataset_size: 801818
- config_name: learned_hands_courts
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10693
num_examples: 6
- name: test
num_bytes: 228204
num_examples: 192
download_size: 19558988
dataset_size: 238897
- config_name: learned_hands_crime
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7322
num_examples: 6
- name: test
num_bytes: 846597
num_examples: 688
download_size: 19558988
dataset_size: 853919
- config_name: learned_hands_divorce
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10651
num_examples: 6
- name: test
num_bytes: 189279
num_examples: 150
download_size: 19558988
dataset_size: 199930
- config_name: learned_hands_domestic_violence
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11170
num_examples: 6
- name: test
num_bytes: 239797
num_examples: 174
download_size: 19558988
dataset_size: 250967
- config_name: learned_hands_education
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6992
num_examples: 6
- name: test
num_bytes: 79184
num_examples: 56
download_size: 19558988
dataset_size: 86176
- config_name: learned_hands_employment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11223
num_examples: 6
- name: test
num_bytes: 909220
num_examples: 710
download_size: 19558988
dataset_size: 920443
- config_name: learned_hands_estates
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5970
num_examples: 6
- name: test
num_bytes: 216836
num_examples: 178
download_size: 19558988
dataset_size: 222806
- config_name: learned_hands_family
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8714
num_examples: 6
- name: test
num_bytes: 3073508
num_examples: 2265
download_size: 19558988
dataset_size: 3082222
- config_name: learned_hands_health
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6155
num_examples: 6
- name: test
num_bytes: 336934
num_examples: 226
download_size: 19558988
dataset_size: 343089
- config_name: learned_hands_housing
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9726
num_examples: 6
- name: test
num_bytes: 6028612
num_examples: 4494
download_size: 19558988
dataset_size: 6038338
- config_name: learned_hands_immigration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3955
num_examples: 6
- name: test
num_bytes: 165352
num_examples: 134
download_size: 19558988
dataset_size: 169307
- config_name: learned_hands_torts
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4484
num_examples: 6
- name: test
num_bytes: 615649
num_examples: 432
download_size: 19558988
dataset_size: 620133
- config_name: learned_hands_traffic
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6250
num_examples: 6
- name: test
num_bytes: 667539
num_examples: 556
download_size: 19558988
dataset_size: 673789
- config_name: legal_reasoning_causality
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4688
num_examples: 4
- name: test
num_bytes: 87007
num_examples: 55
download_size: 19558988
dataset_size: 91695
- config_name: maud_ability_to_consummate_concept_is_subject_to_mae_carveouts
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5322
num_examples: 1
- name: test
num_bytes: 304051
num_examples: 69
download_size: 19558988
dataset_size: 309373
- config_name: maud_accuracy_of_fundamental_target_rws_bringdown_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 271
num_examples: 1
- name: test
num_bytes: 148869
num_examples: 175
download_size: 19558988
dataset_size: 149140
- config_name: maud_accuracy_of_target_capitalization_rw_(outstanding_shares)_bringdown_standard_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1493
num_examples: 1
- name: test
num_bytes: 152224
num_examples: 181
download_size: 19558988
dataset_size: 153717
- config_name: maud_accuracy_of_target_general_rw_bringdown_timing_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1000
num_examples: 1
- name: test
num_bytes: 152717
num_examples: 181
download_size: 19558988
dataset_size: 153717
- config_name: maud_additional_matching_rights_period_for_modifications_(cor)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2170
num_examples: 1
- name: test
num_bytes: 312632
num_examples: 158
download_size: 19558988
dataset_size: 314802
- config_name: maud_application_of_buyer_consent_requirement_(negative_interim_covenant)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 558
num_examples: 1
- name: test
num_bytes: 96990
num_examples: 180
download_size: 19558988
dataset_size: 97548
- config_name: maud_buyer_consent_requirement_(ordinary_course)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2620
num_examples: 1
- name: test
num_bytes: 138668
num_examples: 181
download_size: 19558988
dataset_size: 141288
- config_name: maud_change_in_law__subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6000
num_examples: 1
- name: test
num_bytes: 448666
num_examples: 99
download_size: 19558988
dataset_size: 454666
- config_name: maud_changes_in_gaap_or_other_accounting_principles__subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5998
num_examples: 1
- name: test
num_bytes: 444442
num_examples: 98
download_size: 19558988
dataset_size: 450440
- config_name: maud_cor_permitted_in_response_to_intervening_event
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2631
num_examples: 1
- name: test
num_bytes: 195447
num_examples: 100
download_size: 19558988
dataset_size: 198078
- config_name: maud_cor_permitted_with_board_fiduciary_determination_only
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3970
num_examples: 1
- name: test
num_bytes: 194108
num_examples: 100
download_size: 19558988
dataset_size: 198078
- config_name: maud_cor_standard_(intervening_event)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 727
num_examples: 1
- name: test
num_bytes: 175140
num_examples: 84
download_size: 19558988
dataset_size: 175867
- config_name: maud_cor_standard_(superior_offer)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1173
num_examples: 1
- name: test
num_bytes: 196905
num_examples: 100
download_size: 19558988
dataset_size: 198078
- config_name: maud_definition_contains_knowledge_requirement_-_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1899
num_examples: 1
- name: test
num_bytes: 231405
num_examples: 147
download_size: 19558988
dataset_size: 233304
- config_name: maud_definition_includes_asset_deals
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 614
num_examples: 1
- name: test
num_bytes: 289644
num_examples: 146
download_size: 19558988
dataset_size: 290258
- config_name: maud_definition_includes_stock_deals
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 683
num_examples: 1
- name: test
num_bytes: 292466
num_examples: 148
download_size: 19558988
dataset_size: 293149
- config_name: maud_fiduciary_exception__board_determination_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1594
num_examples: 1
- name: test
num_bytes: 288180
num_examples: 179
download_size: 19558988
dataset_size: 289774
- config_name: maud_fiduciary_exception_board_determination_trigger_(no_shop)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3538
num_examples: 1
- name: test
num_bytes: 286236
num_examples: 179
download_size: 19558988
dataset_size: 289774
- config_name: maud_financial_point_of_view_is_the_sole_consideration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3290
num_examples: 1
- name: test
num_bytes: 217048
num_examples: 112
download_size: 19558988
dataset_size: 220338
- config_name: maud_fls_(mae)_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4669
num_examples: 1
- name: test
num_bytes: 349856
num_examples: 77
download_size: 19558988
dataset_size: 354525
- config_name: maud_general_economic_and_financial_conditions_subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5998
num_examples: 1
- name: test
num_bytes: 445306
num_examples: 98
download_size: 19558988
dataset_size: 451304
- config_name: maud_includes_consistent_with_past_practice
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1127
num_examples: 1
- name: test
num_bytes: 140161
num_examples: 181
download_size: 19558988
dataset_size: 141288
- config_name: maud_initial_matching_rights_period_(cor)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3041
num_examples: 1
- name: test
num_bytes: 311761
num_examples: 158
download_size: 19558988
dataset_size: 314802
- config_name: maud_initial_matching_rights_period_(ftr)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1850
num_examples: 1
- name: test
num_bytes: 279202
num_examples: 132
download_size: 19558988
dataset_size: 281052
- config_name: maud_intervening_event_-_required_to_occur_after_signing_-_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3055
num_examples: 1
- name: test
num_bytes: 230249
num_examples: 147
download_size: 19558988
dataset_size: 233304
- config_name: maud_knowledge_definition
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 240
num_examples: 1
- name: test
num_bytes: 359730
num_examples: 167
download_size: 19558988
dataset_size: 359970
- config_name: maud_liability_standard_for_no-shop_breach_by_target_non-do_representatives
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 154
num_examples: 1
- name: test
num_bytes: 40946
num_examples: 156
download_size: 19558988
dataset_size: 41100
- config_name: maud_ordinary_course_efforts_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1037
num_examples: 1
- name: test
num_bytes: 140251
num_examples: 181
download_size: 19558988
dataset_size: 141288
- config_name: maud_pandemic_or_other_public_health_event__subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3728
num_examples: 1
- name: test
num_bytes: 447053
num_examples: 98
download_size: 19558988
dataset_size: 450781
- config_name: maud_pandemic_or_other_public_health_event_specific_reference_to_pandemic-related_governmental_responses_or_measures
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3728
num_examples: 1
- name: test
num_bytes: 447053
num_examples: 98
download_size: 19558988
dataset_size: 450781
- config_name: maud_relational_language_(mae)_applies_to
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4948
num_examples: 1
- name: test
num_bytes: 409477
num_examples: 90
download_size: 19558988
dataset_size: 414425
- config_name: maud_specific_performance
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 771
num_examples: 1
- name: test
num_bytes: 107392
num_examples: 178
download_size: 19558988
dataset_size: 108163
- config_name: maud_tail_period_length
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 406
num_examples: 1
- name: test
num_bytes: 108632
num_examples: 179
download_size: 19558988
dataset_size: 109038
- config_name: maud_type_of_consideration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 258
num_examples: 1
- name: test
num_bytes: 139270
num_examples: 172
download_size: 19558988
dataset_size: 139528
- config_name: nys_judicial_ethics
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: year
dtype: string
splits:
- name: train
num_bytes: 1697
num_examples: 8
- name: test
num_bytes: 53974
num_examples: 292
download_size: 19558988
dataset_size: 55671
- config_name: opp115_data_retention
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1791
num_examples: 8
- name: test
num_bytes: 18620
num_examples: 88
download_size: 19558988
dataset_size: 20411
- config_name: opp115_data_security
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2123
num_examples: 8
- name: test
num_bytes: 352667
num_examples: 1334
download_size: 19558988
dataset_size: 354790
- config_name: opp115_do_not_track
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2507
num_examples: 8
- name: test
num_bytes: 26363
num_examples: 110
download_size: 19558988
dataset_size: 28870
- config_name: opp115_first_party_collection_use
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2227
num_examples: 8
- name: test
num_bytes: 463566
num_examples: 2086
download_size: 19558988
dataset_size: 465793
- config_name: opp115_international_and_specific_audiences
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1643
num_examples: 8
- name: test
num_bytes: 338196
num_examples: 980
download_size: 19558988
dataset_size: 339839
- config_name: opp115_policy_change
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1201
num_examples: 8
- name: test
num_bytes: 94060
num_examples: 431
download_size: 19558988
dataset_size: 95261
- config_name: opp115_third_party_sharing_collection
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1217
num_examples: 8
- name: test
num_bytes: 383909
num_examples: 1590
download_size: 19558988
dataset_size: 385126
- config_name: opp115_user_access,_edit_and_deletion
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1251
num_examples: 8
- name: test
num_bytes: 108969
num_examples: 462
download_size: 19558988
dataset_size: 110220
- config_name: opp115_user_choice_control
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1695
num_examples: 8
- name: test
num_bytes: 353113
num_examples: 1546
download_size: 19558988
dataset_size: 354808
- config_name: oral_argument_question_purpose
features:
- name: Docket No.
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 2415
num_examples: 7
- name: test
num_bytes: 95262
num_examples: 312
download_size: 19558988
dataset_size: 97677
- config_name: overruling
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 629
num_examples: 6
- name: test
num_bytes: 443484
num_examples: 2394
download_size: 19558988
dataset_size: 444113
- config_name: personal_jurisdiction
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: slice
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1660
num_examples: 4
- name: test
num_bytes: 21089
num_examples: 50
download_size: 19558988
dataset_size: 22749
- config_name: privacy_policy_entailment
features:
- name: answer
dtype: string
- name: description
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6282
num_examples: 8
- name: test
num_bytes: 3174950
num_examples: 4335
download_size: 19558988
dataset_size: 3181232
- config_name: privacy_policy_qa
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2231
num_examples: 8
- name: test
num_bytes: 2817986
num_examples: 10923
download_size: 19558988
dataset_size: 2820217
- config_name: proa
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1057
num_examples: 5
- name: test
num_bytes: 25475
num_examples: 95
download_size: 19558988
dataset_size: 26532
- config_name: rule_qa
features:
- name: answer
dtype: string
- name: doctrine
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 12665
num_examples: 50
download_size: 19558988
dataset_size: 12665
- config_name: sara_entailment
features:
- name: answer
dtype: string
- name: case id
dtype: string
- name: description
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: statute
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2528
num_examples: 4
- name: test
num_bytes: 225560
num_examples: 272
download_size: 19558988
dataset_size: 228088
- config_name: sara_numeric
features:
- name: answer
dtype: string
- name: case id
dtype: string
- name: description
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: statute
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 238363
num_examples: 4
- name: test
num_bytes: 5725392
num_examples: 96
download_size: 19558988
dataset_size: 5963755
- config_name: scalr
features:
- name: answer
dtype: string
- name: choice_0
dtype: string
- name: choice_1
dtype: string
- name: choice_2
dtype: string
- name: choice_3
dtype: string
- name: choice_4
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: test
num_bytes: 1026740
num_examples: 571
download_size: 19558988
dataset_size: 1026740
- config_name: ssla_company_defendants
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5847
num_examples: 3
- name: test
num_bytes: 2313039
num_examples: 1228
download_size: 19558988
dataset_size: 2318886
- config_name: ssla_individual_defendants
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5962
num_examples: 3
- name: test
num_bytes: 2002620
num_examples: 1012
download_size: 19558988
dataset_size: 2008582
- config_name: ssla_plaintiff
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5831
num_examples: 3
- name: test
num_bytes: 1926518
num_examples: 1033
download_size: 19558988
dataset_size: 1932349
- config_name: successor_liability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: issue
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1734
num_examples: 3
- name: test
num_bytes: 26490
num_examples: 47
download_size: 19558988
dataset_size: 28224
- config_name: supply_chain_disclosure_best_practice_accountability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18987
num_examples: 8
- name: test
num_bytes: 1347025
num_examples: 379
download_size: 19558988
dataset_size: 1366012
- config_name: supply_chain_disclosure_best_practice_audits
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 23879
num_examples: 8
- name: test
num_bytes: 1342065
num_examples: 379
download_size: 19558988
dataset_size: 1365944
- config_name: supply_chain_disclosure_best_practice_certification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 22058
num_examples: 8
- name: test
num_bytes: 1338516
num_examples: 378
download_size: 19558988
dataset_size: 1360574
- config_name: supply_chain_disclosure_best_practice_training
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24071
num_examples: 8
- name: test
num_bytes: 1341885
num_examples: 379
download_size: 19558988
dataset_size: 1365956
- config_name: supply_chain_disclosure_best_practice_verification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 27158
num_examples: 8
- name: test
num_bytes: 1338739
num_examples: 379
download_size: 19558988
dataset_size: 1365897
- config_name: supply_chain_disclosure_disclosed_accountability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18902
num_examples: 8
- name: test
num_bytes: 1344444
num_examples: 378
download_size: 19558988
dataset_size: 1363346
- config_name: supply_chain_disclosure_disclosed_audits
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24404
num_examples: 8
- name: test
num_bytes: 1341624
num_examples: 379
download_size: 19558988
dataset_size: 1366028
- config_name: supply_chain_disclosure_disclosed_certification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17987
num_examples: 8
- name: test
num_bytes: 1342646
num_examples: 378
download_size: 19558988
dataset_size: 1360633
- config_name: supply_chain_disclosure_disclosed_training
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 27093
num_examples: 8
- name: test
num_bytes: 1338919
num_examples: 379
download_size: 19558988
dataset_size: 1366012
- config_name: supply_chain_disclosure_disclosed_verification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 25387
num_examples: 8
- name: test
num_bytes: 1340578
num_examples: 379
download_size: 19558988
dataset_size: 1365965
- config_name: telemarketing_sales_rule
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1230
num_examples: 4
- name: test
num_bytes: 17140
num_examples: 47
download_size: 19558988
dataset_size: 18370
- config_name: textualism_tool_dictionaries
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4842
num_examples: 4
- name: test
num_bytes: 102644
num_examples: 107
download_size: 19558988
dataset_size: 107486
- config_name: textualism_tool_plain
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3338
num_examples: 4
- name: test
num_bytes: 167428
num_examples: 165
download_size: 19558988
dataset_size: 170766
- config_name: ucc_v_common_law
features:
- name: answer
dtype: string
- name: contract
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 904
num_examples: 6
- name: test
num_bytes: 12694
num_examples: 94
download_size: 19558988
dataset_size: 13598
- config_name: unfair_tos
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3308
num_examples: 9
- name: test
num_bytes: 787108
num_examples: 3813
download_size: 19558988
dataset_size: 790416
---
# Dataset Card for LegalBench
- **Homepage:** https://hazyresearch.stanford.edu/legalbench/
- **Repository:** https://github.com/HazyResearch/legalbench/
- **Paper:** https://arxiv.org/abs/2308.11462
## Dataset Description
### Dataset Summary
The LegalBench project is an ongoing open science effort to collaboratively curate tasks for evaluating legal reasoning in English large language models (LLMs). The benchmark currently consists of 162 tasks gathered from 40 contributors.
Note: Because LegalBench is intended to test zero-shot and few-shot reasoning, the available "train" splits are small. However, if you are interested in finetuning models or studying model performance in a more traditional train/test regime, you can combine and re-partition the train and test data.
If you have questions about the project or would like to get involved, please see the website for more information.
### Supported Tasks and Leaderboards
LegalBench tasks span multiple types (binary classification, multi-class classification, extraction, generation, entailment), multiple types of text (statutes, judicial opinions, contracts, etc.), and multiple areas of law (evidence, contracts, civil procedure, etc.). For more information on tasks, we recommend visiting the website, where you can search through task descriptions, or the Github repository, which contains more granular task descriptions. We also recommend reading the paper, which provides more background on task significance and construction process.
### Languages
All LegalBench tasks are in English.
## Dataset Structure
### Data Instances
Detailed descriptions of the instances for each task can be found in the GitHub repository. An example of an instance, for the `abercrombie` task, is provided below:
```
{
  "text": "The mark \"Ivory\" for a product made of elephant tusks.",
  "label": "generic",
  "idx": 0
}
```
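An instance like this maps naturally onto a few-shot prompt. The sketch below is illustrative only — `build_prompt` and its `Text:`/`Label:` template are hypothetical helpers, not LegalBench's official task prompts (the GitHub repository distributes task-specific prompt templates).

```python
# Hypothetical helper: format labeled demonstrations plus an unlabeled query
# into a single few-shot prompt. The "Text:"/"Label:" template is illustrative.

def build_prompt(train_examples, test_text):
    """Return a few-shot prompt: each demo as Text/Label lines, then the query."""
    parts = [f"Text: {ex['text']}\nLabel: {ex['label']}\n" for ex in train_examples]
    parts.append(f"Text: {test_text}\nLabel:")
    return "\n".join(parts)

demos = [
    {"text": "The mark 'Salt' for a product consisting of salt.", "label": "generic"},
    {"text": "The mark 'Apple' for a company selling computers.", "label": "arbitrary"},
]
print(build_prompt(demos, "The mark 'Ivory' for a product made of elephant tusks."))
```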
A substantial number of LegalBench tasks are binary classification tasks, which require the LLM to determine if a piece of text has some legal attribute. Because these are framed as Yes/No questions, the label space is "Yes" or "No".
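Because the label space for these tasks is a single word, evaluation typically reduces to normalized exact match. The normalization rules below (whitespace, case, trailing punctuation) are assumptions for illustration, not LegalBench's official scoring code.

```python
# Sketch of exact-match scoring for binary Yes/No tasks. The normalization
# rules here (strip whitespace, drop a trailing period, case-fold) are
# assumptions, not the benchmark's official evaluation logic.

def normalize(answer: str) -> str:
    answer = answer.strip().rstrip(".").lower()
    return {"yes": "Yes", "no": "No"}.get(answer, answer)

def accuracy(predictions, golds):
    correct = sum(normalize(p) == normalize(g) for p, g in zip(predictions, golds))
    return correct / len(golds)

preds = [" Yes.", "no", "Yes"]
golds = ["Yes", "No", "No"]
print(accuracy(preds, golds))  # 2 of the 3 predictions match
```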
### Data Fields
Detailed descriptions of the data fields for each task can be found in the GitHub repository.
### Data Splits
Each task (except for `rule_qa` and `scalr`) has both a training and evaluation split. Following [RAFT](https://huggingface.co/datasets/ought/raft), train splits consist of only a few labeled instances, reflecting the few-shot nature of most LLM evaluation.
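With the `datasets` library, pooling and re-splitting can be done with `concatenate_datasets(...)` followed by `.train_test_split(...)`. A minimal plain-Python sketch of the same idea, using toy rows and a hypothetical `repartition` helper, is:

```python
# Sketch: pool the small few-shot train split with the larger test split,
# then re-split for a traditional train/test regime. `repartition` is a
# hypothetical stand-in for datasets-library utilities.
import random

def repartition(train_rows, test_rows, test_fraction=0.2, seed=0):
    """Shuffle the pooled rows deterministically and cut a new test split."""
    pool = list(train_rows) + list(test_rows)
    rng = random.Random(seed)
    rng.shuffle(pool)
    cut = int(len(pool) * (1 - test_fraction))
    return pool[:cut], pool[cut:]

train = [{"idx": i} for i in range(6)]       # few-shot train split
test = [{"idx": i} for i in range(6, 100)]   # larger test split
new_train, new_test = repartition(train, test)
print(len(new_train), len(new_test))  # 80 20
```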
## Dataset Creation
### Curation Rationale
LegalBench was created to enable researchers to better benchmark the legal reasoning capabilities of LLMs.
### Source Data
#### Initial Data Collection and Normalization
Broadly, LegalBench tasks are drawn from three sources. The first source is existing publicly available datasets and corpora, most of which were originally released for non-LLM evaluation settings. In creating LegalBench tasks from these sources, we often significantly reformatted the data and restructured the prediction objective. For instance, the original [CUAD dataset](https://github.com/TheAtticusProject/cuad) contains annotations on long documents and is intended for evaluating extraction with span-prediction models. We restructured this corpus to generate a binary classification task for each type of contractual clause. While the original corpus emphasized the long-document aspects of contracts, our restructured tasks emphasize whether LLMs can identify the distinguishing features of different types of clauses. The second source is datasets that were previously constructed by legal professionals but never released; this primarily includes datasets hand-coded by legal scholars as part of prior empirical legal projects. The last category consists of tasks developed specifically for LegalBench by the authors of the paper. Overall, tasks are drawn from 36 distinct corpora. Please see the Appendix of the paper for more details.
#### Who are the source language producers?
LegalBench data was created by humans. Demographic information for these individuals is not available.
### Annotations
#### Annotation process
Please see the paper for more information on the annotation process used in the creation of each task.
#### Who are the annotators?
Please see the paper for more information on the identity of annotators for each task.
### Personal and Sensitive Information
Data in this benchmark has either been synthetically generated, or derived from an already public source (e.g., contracts from the EDGAR database).
Several tasks have been derived from the LearnedHands corpus, which consists of public posts on /r/LegalAdvice. Some posts may discuss sensitive issues.
## Considerations for Using the Data
### Social Impact of Dataset
Please see the original paper for a discussion of social impact.
### Discussion of Biases
Please see the original paper for a discussion of biases.
### Other Known Limitations
LegalBench primarily contains tasks corresponding to American law.
## Additional Information
### Dataset Curators
Please see the website for a full list of participants in the LegalBench project.
### Licensing Information
LegalBench tasks are subject to different licenses. Please see the paper for a description of the licenses.
### Citation Information
If you intend to reference LegalBench broadly, please use the citation below. If you are working with a particular task, please use the citation below in addition to the task specific citation (which can be found on the task page on the website or Github).
```
@misc{guha2023legalbench,
title={LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
author={Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
year={2023},
eprint={2308.11462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{koreeda2021contractnli,
title={ContractNLI: A dataset for document-level natural language inference for contracts},
author={Koreeda, Yuta and Manning, Christopher D},
journal={arXiv preprint arXiv:2110.01799},
year={2021}
}
@article{hendrycks2021cuad,
  title={CUAD: An expert-annotated NLP dataset for legal contract review},
author={Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
@article{wang2023maud,
title={MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding},
author={Wang, Steven H and Scardigli, Antoine and Tang, Leonard and Chen, Wei and Levkin, Dimitry and Chen, Anya and Ball, Spencer and Woodside, Thomas and Zhang, Oliver and Hendrycks, Dan},
journal={arXiv preprint arXiv:2301.00876},
year={2023}
}
@inproceedings{wilson2016creation,
title={The creation and analysis of a website privacy policy corpus},
author={Wilson, Shomir and Schaub, Florian and Dara, Aswarth Abhilash and Liu, Frederick and Cherivirala, Sushain and Leon, Pedro Giovanni and Andersen, Mads Schaarup and Zimmeck, Sebastian and Sathyendra, Kanthashree Mysore and Russell, N Cameron and others},
booktitle={Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={1330--1340},
year={2016}
}
@inproceedings{zheng2021does,
  title={When does pretraining help? Assessing self-supervised learning for law and the CaseHOLD dataset of 53,000+ legal holdings},
author={Zheng, Lucia and Guha, Neel and Anderson, Brandon R and Henderson, Peter and Ho, Daniel E},
booktitle={Proceedings of the eighteenth international conference on artificial intelligence and law},
pages={159--168},
year={2021}
}
@article{zimmeck2019maps,
  title={MAPS: Scaling privacy compliance analysis to a million apps},
author={Zimmeck, Sebastian and Story, Peter and Smullen, Daniel and Ravichander, Abhilasha and Wang, Ziqi and Reidenberg, Joel R and Russell, N Cameron and Sadeh, Norman},
journal={Proc. Priv. Enhancing Tech.},
volume={2019},
pages={66},
year={2019}
}
@article{ravichander2019question,
title={Question answering for privacy policies: Combining computational and legal perspectives},
author={Ravichander, Abhilasha and Black, Alan W and Wilson, Shomir and Norton, Thomas and Sadeh, Norman},
journal={arXiv preprint arXiv:1911.00841},
year={2019}
}
@article{holzenberger2021factoring,
title={Factoring statutory reasoning as language understanding challenges},
author={Holzenberger, Nils and Van Durme, Benjamin},
journal={arXiv preprint arXiv:2105.07903},
year={2021}
}
@article{lippi2019claudette,
title={CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service},
author={Lippi, Marco and Pa{\l}ka, Przemys{\l}aw and Contissa, Giuseppe and Lagioia, Francesca and Micklitz, Hans-Wolfgang and Sartor, Giovanni and Torroni, Paolo},
journal={Artificial Intelligence and Law},
volume={27},
pages={117--139},
year={2019},
publisher={Springer}
}
``` |
Open-Orca/OpenOrca | Open-Orca | "2025-02-19T07:32:36Z" | 10,749 | 1,381 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.02707",
"arxiv:2301.13688",
"arxiv:2302.13971",
"region:us"
] | [
"conversational",
"text-classification",
"token-classification",
"table-question-answering",
"question-answering",
"zero-shot-classification",
"summarization",
"feature-extraction",
"text-generation",
"text2text-generation"
] | "2023-06-15T18:16:11Z" | ---
language:
- en
license: mit
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: OpenOrca
size_categories:
- 10M<n<100M
---
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
<p><h1>🐋 The OpenOrca Dataset! 🐋</h1></p>

<a name="dataset-announcement"></a>
We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## Mistral-7B-OpenOrca
Our [latest model](https://huggingface.co/spaces/Open-Orca/Mistral-7B-OpenOrca), the first 7B to score better overall than all previous models below 30B.
98% of Llama2-70b-chat's performance, in a completely open 7B!
## OpenOrca-Platypus2-13B
Our [third model](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-attribution"></a>
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The language of the data is primarily English.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
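As a sketch of how these fields compose, the FLAN submix can be recovered from the `id` prefix — assuming ids take the form `<submix>.<number>` (e.g. `cot.86217`); the sample record below is illustrative, not an actual dataset row:

```python
def submix_of(record_id: str) -> str:
    """Return which FLAN submix ('niv', 't0', 'cot' or 'flan') an id belongs to."""
    prefix = record_id.split(".", 1)[0]
    return prefix if prefix in {"niv", "t0", "cot", "flan"} else "unknown"

# Illustrative record with the four fields described above (not a real row).
sample = {
    "id": "cot.86217",
    "system_prompt": "You are an AI assistant that helps people find information.",
    "question": "A question drawn from the FLAN Collection...",
    "response": "A step-by-step GPT-4 answer...",
}
print(submix_of(sample["id"]))  # cot
```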
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have fewer than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be naively loaded via the Hugging Face `datasets` library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{touvron2023llama1,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
``` |
common-canvas/commoncatalog-cc-by-nc-sa | common-canvas | "2024-05-16T19:45:25Z" | 10,737 | 4 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.16825",
"region:us"
] | [
"text-to-image"
] | "2023-10-19T02:09:41Z" | ---
license: cc-by-nc-sa-4.0
dataset_info:
features:
- name: jpg
dtype: image
- name: blip2_caption
dtype: string
- name: caption
dtype: string
- name: licensename
dtype: string
- name: licenseurl
dtype: string
- name: width
dtype: int32
- name: height
dtype: int32
- name: original_width
dtype: int32
- name: original_height
dtype: int32
- name: photoid
dtype: int64
- name: uid
dtype: string
- name: unickname
dtype: string
- name: datetaken
dtype: timestamp[us]
- name: dateuploaded
dtype: int64
- name: capturedevice
dtype: string
- name: title
dtype: string
- name: usertags
dtype: string
- name: machinetags
dtype: string
- name: longitude
dtype: float64
- name: latitude
dtype: float64
- name: accuracy
dtype: int64
- name: pageurl
dtype: string
- name: downloadurl
dtype: string
- name: serverid
dtype: int64
- name: farmid
dtype: int64
- name: secret
dtype: string
- name: secretoriginal
dtype: string
- name: ext
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: string
- name: exif
dtype: string
- name: sha256
dtype: string
- name: description
dtype: string
task_categories:
- text-to-image
language:
- en
---
# Dataset Card for CommonCatalog CC-BY-NC-SA
This dataset is a large collection of high-resolution Creative Commons images (composed of different licenses, see Table 1 in the paper's Appendix) collected in 2014 from users of Yahoo Flickr.
The dataset contains images of up to 4K resolution, making this one of the highest-resolution captioned image datasets.
## Dataset Details
### Dataset Description
We provide synthetic captions for approximately 100 million high-resolution images collected from Yahoo Flickr Creative Commons (YFCC).
- **Curated by:** Aaron Gokaslan
- **Language(s) (NLP):** en
- **License:** See relevant yaml tag / dataset name.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/mosaicml/diffusion
- **Paper:** https://arxiv.org/abs/2310.16825
- **Demo:** See CommonCanvas Gradios
## Uses
We use CommonCatalog to train a family of latent diffusion models called CommonCanvas.
The goal is to produce a model that is competitive with Stable Diffusion 2, but to do so using an easily accessible dataset of known provenance.
Doing so makes replicating the model significantly easier, and provides a clearer mechanism for applying training-data attribution techniques.
### Direct Use
Training text-to-image models
Training image-to-text models
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
* Commercial use
* Crafting content that is offensive or injurious towards individuals, including negative portrayals of their living conditions, cultural backgrounds, religious beliefs, etc.
* Deliberately creating or spreading content that is discriminatory or reinforces harmful stereotypes.
* Falsely representing individuals without their permission.
* Generating sexual content that may be seen by individuals without their consent.
* Producing or disseminating false or misleading information.
* Creating content that depicts extreme violence or bloodshed.
* Distributing content that modifies copyrighted or licensed material in a way that breaches its usage terms.
## Dataset Structure
The dataset is divided into 10 subsets, each containing parquet files of roughly 4 GB. Each subfolder groups images by resolution range and aspect ratio.
The dataset is also divided along images licensed for commercial use (C) and those that are not (NC).
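Since `width` and `height` are stored per image, one common preprocessing step is to bucket images by aspect ratio before batching. A hypothetical sketch (the bucket boundaries here are made up for illustration):

```python
def aspect_bucket(width: int, height: int) -> str:
    """Assign an image to a coarse aspect-ratio bucket."""
    ratio = width / height
    if ratio < 0.9:
        return "portrait"
    if ratio > 1.1:
        return "landscape"
    return "square"

print(aspect_bucket(1024, 768))  # landscape
print(aspect_bucket(768, 1024))  # portrait
print(aspect_bucket(512, 512))   # square
```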
## Dataset Creation
### Curation Rationale
Creating a standardized, accessible dataset with synthetic captions and releasing it so that others can train on a common dataset for open-source image generation.
### Source Data
Yahoo Flickr Creative Commons 100M Dataset and Synthetically Generated Caption Data.
#### Data Collection and Processing
All synthetic captions were generated with BLIP2. See paper for more details.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Users of Flickr
## Bias, Risks, and Limitations
See the Yahoo Flickr Creative Commons 100M dataset for more information. The data was collected circa 2014 and is known to have a bias towards internet-connected Western countries. Some areas, such as the Global South, lack representation.
## Citation
**BibTeX:**
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
```
## Dataset Card Authors
[Aaron Gokaslan](https://huggingface.co/Skylion007)
## Dataset Card Contact
[Aaron Gokaslan](https://huggingface.co/Skylion007)
|
vicgalle/alpaca-gpt4 | vicgalle | "2024-02-10T10:03:45Z" | 10,701 | 280 | [
"task_categories:text-generation",
"task_categories:question-answering",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2304.03277",
"region:us",
"gpt4",
"alpaca",
"instruction-finetuning",
"synthetic"
] | [
"text-generation",
"conversational",
"question-answering"
] | "2023-04-07T16:22:59Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 88566301
num_examples: 52002
download_size: 48393562
dataset_size: 88566301
task_categories:
- text-generation
- conversational
- question-answering
language:
- en
size_categories:
- 10K<n<100K
license: cc-by-nc-4.0
tags:
- gpt4
- alpaca
- instruction-finetuning
- synthetic
---
# Dataset Card for "alpaca-gpt4"
This dataset contains English Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
The dataset was originally shared in this repository: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM. This is just a wrapper for compatibility with Hugging Face's `datasets` library.
## Dataset Description
- **Homepage:** https://instruction-tuning-with-gpt-4.github.io
- **Repository:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- **Paper:** https://arxiv.org/abs/2304.03277
## Dataset structure
It contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-4`.
- `text`: `str`, all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginning.
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generates the completions with GPT-4 instead. Thus, in general, the responses are of higher quality and length. Here is an example:
#### Example from Alpaca-GPT4:
```bash
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'The odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nThe odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.'}
```
#### Same example from original Alpaca:
```bash
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'Telegram',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nTelegram'}
```
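The `text` field can be reconstructed from the other three fields. A sketch of the template, inferred from the two examples above (shown only for records with a non-empty `input`; inputless records use a slightly different preamble):

```python
ALPACA_PROMPT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def build_text(record: dict) -> str:
    """Rebuild the 'text' field from instruction/input/output."""
    return ALPACA_PROMPT.format(**record)

record = {
    "instruction": "Identify the odd one out.",
    "input": "Twitter, Instagram, Telegram",
    "output": "Telegram",
}
print(build_text(record))
```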
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). |
qiaojin/PubMedQA | qiaojin | "2024-03-06T01:50:16Z" | 10,689 | 190 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1909.06146",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: pubmedqa
pretty_name: PubMedQA
config_names:
- pqa_artificial
- pqa_labeled
- pqa_unlabeled
dataset_info:
- config_name: pqa_artificial
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 443501057
num_examples: 211269
download_size: 233411194
dataset_size: 443501057
- config_name: pqa_labeled
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: reasoning_required_pred
dtype: string
- name: reasoning_free_pred
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 2088898
num_examples: 1000
download_size: 1075513
dataset_size: 2088898
- config_name: pqa_unlabeled
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
splits:
- name: train
num_bytes: 125922964
num_examples: 61249
download_size: 66010017
dataset_size: 125922964
configs:
- config_name: pqa_artificial
data_files:
- split: train
path: pqa_artificial/train-*
- config_name: pqa_labeled
data_files:
- split: train
path: pqa_labeled/train-*
- config_name: pqa_unlabeled
data_files:
- split: train
path: pqa_unlabeled/train-*
---
# Dataset Card for PubMedQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PubMedQA homepage](https://pubmedqa.github.io/ )
- **Repository:** [PubMedQA repository](https://github.com/pubmedqa/pubmedqa)
- **Paper:** [PubMedQA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/abs/1909.06146)
- **Leaderboard:** [PubMedQA: Leaderboard](https://pubmedqa.github.io/)
### Dataset Summary
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts.
### Supported Tasks and Leaderboards
The official leaderboard is available at: https://pubmedqa.github.io/.
500 questions from the `pqa_labeled` subset are used as the test set. They can be found at https://github.com/pubmedqa/pubmedqa.
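Predictions on the labeled subset are scored against the gold yes/no/maybe `final_decision` labels. A minimal accuracy sketch (the labels below are made up; the official evaluation also reports macro-F1):

```python
def accuracy(preds, golds):
    """Fraction of yes/no/maybe decisions that match the gold labels."""
    assert len(preds) == len(golds)
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

golds = ["yes", "no", "maybe", "yes"]
preds = ["yes", "no", "yes", "yes"]
print(accuracy(preds, golds))  # 0.75
```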
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset. |
fixie-ai/peoples_speech | fixie-ai | "2024-08-11T17:26:01Z" | 10,640 | 2 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-05T18:35:01Z" | ---
dataset_info:
- config_name: clean
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 2511523987.692
num_examples: 18622
- name: test
num_bytes: 4259695510.794
num_examples: 34898
- name: train
num_bytes: 401646320552.671
num_examples: 1501271
download_size: 398922548670
dataset_size: 408417540051
- config_name: dirty_sa
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 144432442623.054
num_examples: 548014
- name: validation
num_bytes: 2511524241.692
num_examples: 18622
- name: test
num_bytes: 4259695588.794
num_examples: 34898
download_size: 149491764186
dataset_size: 151203662453.53998
configs:
- config_name: clean
data_files:
- split: validation
path: clean/validation-*
- split: test
path: clean/test-*
- split: train
path: data/train-*
- config_name: dirty_sa
data_files:
- split: train
path: dirty_sa/train-*
- split: validation
path: dirty_sa/validation-*
- split: test
path: dirty_sa/test-*
---
|
xinrongzhang2022/InfiniteBench | xinrongzhang2022 | "2024-10-08T01:59:10Z" | 10,604 | 27 | [
"region:us"
] | null | "2023-11-16T09:29:02Z" | ---
configs:
- config_name: default
data_files:
- split: passkey
path: "passkey.jsonl"
- split: kv_retrieval
path: "kv_retrieval.jsonl"
- split: number_string
path: "number_string.jsonl"
- split: code_run
path: "code_run.jsonl"
- split: code_debug
path: "code_debug.jsonl"
- split: math_find
path: "math_find.jsonl"
- split: math_calc
path: "math_calc.jsonl"
- split: longdialogue_qa_eng
path: "longdialogue_qa_eng.jsonl"
- split: longbook_qa_eng
path: "longbook_qa_eng.jsonl"
- split: longbook_sum_eng
path: "longbook_sum_eng.jsonl"
- split: longbook_choice_eng
path: "longbook_choice_eng.jsonl"
- split: longbook_qa_chn
path: "longbook_qa_chn.jsonl"
---
License: apache-2.0
## Usage
Load with the `datasets` library:
```
from datasets import load_dataset, Features, Value, Sequence
# Define the features schema
ft = Features({
"id": Value("int64"),
"context": Value("string"),
"input": Value("string"),
"answer": Sequence(Value("string")),
"options": Sequence(Value("string"))
})
# Load the dataset with the specified features
dataset = load_dataset("xinrongzhang2022/InfiniteBench", features=ft)
```
## Citation
Please cite us if you use $\infty$Bench.
```bibtex
@inproceedings{zhang-etal-2024-bench,
title = "$\infty${B}ench: Extending Long Context Evaluation Beyond 100{K} Tokens",
author = "Zhang, Xinrong and
Chen, Yingfa and
Hu, Shengding and
Xu, Zihang and
Chen, Junhao and
Hao, Moo and
Han, Xu and
Thai, Zhen and
Wang, Shuo and
Liu, Zhiyuan and
Sun, Maosong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.814",
pages = "15262--15277",
abstract = "Processing and reasoning over long contexts is crucial for many practical applications of Large Language Models (LLMs), such as document comprehension and agent construction. Despite recent strides in making LLMs process contexts with more than 100K tokens, there is currently a lack of a standardized benchmark to evaluate this long-context capability. Existing public benchmarks typically focus on contexts around 10K tokens, limiting the assessment and comparison of LLMs in processing longer contexts. In this paper, we propose , the first LLM benchmark featuring an average data length surpassing 100K tokens. comprises synthetic and realistic tasks spanning diverse domains in English and Chinese. The tasks in are designed to require an understanding of long dependencies in contexts and make simply retrieving a limited number of passages from contexts not sufficient for these tasks. Based on , we evaluate several state-of-the-art LLMs tailored for processing long contexts. The experimental results indicate that existing long-context LLMs still require significant advancements to process 100K+ contexts effectively. Furthermore, we present three intriguing analyses regarding the behavior of LLMs processing long context. Our code and data is released.",
} |
saiyan-world/Goku-MovieGenBench | saiyan-world | "2025-02-11T03:18:05Z" | 10,595 | 201 | [
"task_categories:text-to-video",
"size_categories:1K<n<10K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"arxiv:2502.04896",
"region:us"
] | [
"text-to-video"
] | "2025-02-06T12:47:26Z" | ---
task_categories:
- text-to-video
---
This repository contains the data associated with the paper [Goku: Flow Based Video Generative Foundation Models](https://huggingface.co/papers/2502.04896).
Project page: https://saiyan-world.github.io/goku/ |
fjd/scannet-processed-test | fjd | "2023-03-29T04:13:39Z" | 10,576 | 1 | [
"license:cc-by-nc-4.0",
"modality:image",
"modality:text",
"region:us"
] | null | "2023-03-29T03:27:18Z" | ---
license: cc-by-nc-4.0
---
|
parler-tts/mls_eng | parler-tts | "2024-04-09T14:37:17Z" | 10,560 | 21 | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2012.03411",
"region:us"
] | [
"automatic-speech-recognition",
"text-to-speech",
"text-to-audio"
] | "2024-03-11T20:00:44Z" | ---
pretty_name: English MLS
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: multilingual-librispeech
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- text-to-speech
- text-to-audio
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: transcript
dtype: string
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: book_id
dtype: string
splits:
- name: dev
num_bytes: 249688889.909
num_examples: 3807
- name: test
num_bytes: 245938961
num_examples: 3769
- name: train
num_bytes: 707578913096
num_examples: 10808037
download_size: 705179367357
dataset_size: 708074540946.909
---
# Dataset Card for English MLS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MultiLingual LibriSpeech ASR corpus](http://www.openslr.org/94)
- **Repository:** [Needs More Information]
- **Paper:** [MLS: A Large-Scale Multilingual Dataset for Speech Research](https://arxiv.org/abs/2012.03411)
- **Leaderboard:** [🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=facebook%2Fmultilingual_librispeech&only_verified=0&task=automatic-speech-recognition&config=-unspecified-&split=-unspecified-&metric=wer)
### Dataset Summary
This is a streamable version of the **English version of the Multilingual LibriSpeech (MLS) dataset**.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/94) to make it easier to stream.
MLS dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of
8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish. It includes about 44.5K hours of English and a total of about 6K hours for other languages.
This dataset card includes the 44.5K hours of English. Refer to this [dataset card](https://huggingface.co/datasets/facebook/multilingual_librispeech) for the other languages.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/dataset/multilingual-librispeech and ranks models based on their WER.
- `text-to-speech`, `text-to-audio`: The dataset can also be used to train a model for Text-To-Speech (TTS).
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to load the full English training split:
```python
from datasets import load_dataset
mls = load_dataset("parler-tts/mls_eng", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
mls = load_dataset("parler-tts/mls_eng", split="train", streaming=True)
print(next(iter(mls)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
Local:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
mls = load_dataset("parler-tts/mls_eng", split="train")
batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
dataloader = DataLoader(mls, batch_sampler=batch_sampler)
```
Streaming:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
mls = load_dataset("parler-tts/mls_eng", split="train", streaming=True)
dataloader = DataLoader(mls, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on MultiLingual Librispeech with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Fields
- file: A filename in .flac format.
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
### Citation Information
```
@article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
}
```
### Data Statistics
| Duration (h) | Train | Dev | Test |
|--------------|-----------|-------|-------|
| English | 44,659.74 | 15.75 | 15.55 |
| German | 1,966.51 | 14.28 | 14.29 |
| Dutch | 1,554.24 | 12.76 | 12.76 |
| French | 1,076.58 | 10.07 | 10.07 |
| Spanish | 917.68 | 9.99 | 10 |
| Italian | 247.38 | 5.18 | 5.27 |
| Portuguese | 160.96 | 3.64 | 3.74 |
| Polish | 103.65 | 2.08 | 2.14 |
| # Speakers | Train | | Dev | | Test | |
|------------|-------|------|-----|----|------|----|
| Gender | M | F | M | F | M | F |
| English | 2742 | 2748 | 21 | 21 | 21 | 21 |
| German | 81 | 95 | 15 | 15 | 15 | 15 |
| Dutch | 9 | 31 | 3 | 3 | 3 | 3 |
| French | 62 | 80 | 9 | 9 | 9 | 9 |
| Spanish | 36 | 50 | 10 | 10 | 10 | 10 |
| Italian | 22 | 43 | 5 | 5 | 5 | 5 |
| Portuguese | 26 | 16 | 5 | 5 | 5 | 5 |
| Polish | 6 | 5 | 2 | 2 | 2 | 2 |
| # Hours / Gender | Dev | | Test | |
|------------------|------|------|------|------|
| Gender | M | F | M | F |
| English | 7.76 | 7.99 | 7.62 | 7.93 |
| German | 7.06 | 7.22 | 7 | 7.29 |
| Dutch | 6.44 | 6.32 | 6.72 | 6.04 |
| French | 5.13 | 4.94 | 5.04 | 5.02 |
| Spanish | 4.91 | 5.08 | 4.78 | 5.23 |
| Italian | 2.5 | 2.68 | 2.38 | 2.9 |
| Portuguese | 1.84 | 1.81 | 1.83 | 1.9 |
| Polish | 1.12 | 0.95 | 1.09 | 1.05 |
|
duongttr/vi-dataset-for-pretrain | duongttr | "2023-08-02T09:38:30Z" | 10,547 | 2 | [
"task_categories:text-generation",
"language:vi",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LM"
] | [
"text-generation"
] | "2023-08-02T08:20:06Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 77360702833
num_examples: 23891116
- name: validation
num_bytes: 4064634081
num_examples: 1257428
download_size: 2126869688
dataset_size: 81425336914
task_categories:
- text-generation
language:
- vi
size_categories:
- 10M<n<100M
tags:
- LM
---
# Dataset Card for "vi-dataset-for-pretrain"
This is a combination of multiple Vietnamese datasets for pretraining CLMs such as GPT, GPT2, etc.
The dataset consists of:
- [`vietgpt/covid_19_news_vi`](https://huggingface.co/datasets/vietgpt/covid_19_news_vi)
- [`hieunguyen1053/binhvq-news-corpus`](https://huggingface.co/datasets/hieunguyen1053/binhvq-news-corpus)
- [`oscar (unshuffled_deduplicated_vi)`](https://huggingface.co/datasets/oscar)
- [`vietgpt/wikipedia_vi`](https://huggingface.co/datasets/vietgpt/wikipedia_vi)
# Dataset info
| Splits | No. of examples | Size |
| --- | --- | --- |
| Train | 23,891,116 | 77.36 GB |
| Validation | 1,257,428 | 4.06 GB |
| **Total** | **25,148,544** | **81.43 GB** | |
AI-MO/aimo-validation-aime | AI-MO | "2024-07-10T12:44:42Z" | 10,514 | 41 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-09T11:17:14Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 520431
num_examples: 90
download_size: 261038
dataset_size: 520431
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for AIMO Validation AIME
All 90 problems come from AIME 22, AIME 23, and AIME 24, and have been extracted directly from the AOPS wiki page https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions
This dataset serves as an internal validation set during our participation in the AIMO progress prize competition. We restrict it to problems from after 2021 to avoid potential overlap with the MATH training set.
Here are the different columns in the dataset:
- problem: the original problem statement from the website
- solution: one of the solutions proposed on the forum, with the final answer in \boxed{}
- url: URL of the problem page on the website
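Since each solution carries its final answer in a `\boxed{}` command, a minimal sketch for pulling that answer out of the solution string might look like the following (a regex-based illustration that assumes simple, non-nested `\boxed{}` contents):

```python
import re

def extract_boxed_answer(solution):
    """Return the contents of the last \\boxed{...} in a solution string, or None."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", solution)
    return matches[-1] if matches else None

sample_solution = r"Adding the two cases gives $3 + 4 = 7$, so the answer is $\boxed{007}$."
print(extract_boxed_answer(sample_solution))  # -> 007
```

For solutions with nested braces inside `\boxed{}`, a proper LaTeX-aware parser would be needed instead of this simple pattern.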
|
textmachinelab/quail | textmachinelab | "2024-01-04T16:18:32Z" | 10,513 | 7 | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"multiple-choice"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: quail
pretty_name: Question Answering for Artificial Intelligence (QuAIL)
dataset_info:
config_name: quail
features:
- name: id
dtype: string
- name: context_id
dtype: string
- name: question_id
dtype: string
- name: domain
dtype: string
- name: metadata
struct:
- name: author
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: question_type
dtype: string
- name: answers
sequence: string
- name: correct_answer_id
dtype: int32
splits:
- name: train
num_bytes: 23432601
num_examples: 10246
- name: validation
num_bytes: 4989531
num_examples: 2164
- name: challenge
num_bytes: 1199792
num_examples: 556
download_size: 2286403
dataset_size: 29621924
configs:
- config_name: quail
data_files:
- split: train
path: quail/train-*
- split: validation
path: quail/validation-*
- split: challenge
path: quail/challenge-*
default: true
---
# Dataset Card for "quail"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://text-machine-lab.github.io/blog/2020/quail/](https://text-machine-lab.github.io/blog/2020/quail/)
- **Repository:** https://github.com/text-machine-lab/quail
- **Paper:** [Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks](https://doi.org/10.1609/aaai.v34i05.6398 )
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 6.41 MB
- **Size of the generated dataset:** 29.62 MB
- **Total amount of disk used:** 36.03 MB
### Dataset Summary
QuAIL is a reading comprehension dataset. QuAIL contains 15K multi-choice questions over texts 300-350 tokens long, drawn from 4 domains (news, user stories, fiction, blogs). QuAIL is balanced and annotated for question types.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### quail
- **Size of downloaded dataset files:** 6.41 MB
- **Size of the generated dataset:** 29.62 MB
- **Total amount of disk used:** 36.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": ["the cousin is not friendly", "the cousin could have been pretier", "not enough information", "the cousin was too nice"],
"context": "\"That fall came and I went back to Michigan and the school year went by and summer came and I never really thought about it. I'm...",
"context_id": "f001",
"correct_answer_id": 0,
"domain": "fiction",
"id": "f001_19",
"metadata": {
"author": "Joseph Devon",
"title": "Black Eyed Susan",
"url": "http://manybooks.net/pages/devonjother08black_eyed_susan/0.html"
},
"question": "After the events in the text what does the author think about the cousin?",
"question_id": "19",
"question_type": "Subsequent_state"
}
```
### Data Fields
The data fields are the same among all splits.
#### quail
- `id`: a `string` feature.
- `context_id`: a `string` feature.
- `question_id`: a `string` feature.
- `domain`: a `string` feature.
- `author`: a `string` feature.
- `title`: a `string` feature.
- `url`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `question_type`: a `string` feature.
- `answers`: a `list` of `string` features.
- `correct_answer_id`: a `int32` feature.
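As a quick illustration of how these fields fit together, the correct answer text for a record can be recovered by indexing `answers` with `correct_answer_id` (shown here on a toy record shaped like the example above):

```python
def correct_answer(example):
    """Look up the answer string selected by correct_answer_id."""
    return example["answers"][example["correct_answer_id"]]

record = {
    "answers": ["the cousin is not friendly", "not enough information"],
    "correct_answer_id": 0,
}
print(correct_answer(record))  # -> the cousin is not friendly
```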
### Data Splits
|name |train|challenge|validation|
|-----|----:|--------:|---------:|
|quail|10246| 556| 2164|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{DBLP:conf/aaai/RogersKDR20,
author = {Anna Rogers and
Olga Kovaleva and
Matthew Downey and
Anna Rumshisky},
title = {Getting Closer to {AI} Complete Question Answering: {A} Set of Prerequisite
Real Tasks},
booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
2020, The Thirty-Second Innovative Applications of Artificial Intelligence
Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
February 7-12, 2020},
pages = {8722--8731},
publisher = {{AAAI} Press},
year = {2020},
url = {https://aaai.org/ojs/index.php/AAAI/article/view/6398},
timestamp = {Thu, 04 Jun 2020 13:18:48 +0200},
biburl = {https://dblp.org/rec/conf/aaai/RogersKDR20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@sai-prasanna](https://github.com/sai-prasanna), [@ngdodd](https://github.com/ngdodd) for adding this dataset. |
airtrain-ai/fineweb-edu-fortified | airtrain-ai | "2024-08-08T18:04:44Z" | 10,513 | 55 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.17557",
"arxiv:2109.07445",
"region:us"
] | [
"text-generation"
] | "2024-07-22T14:22:31Z" | ---
language:
- en
license: odc-by
task_categories:
- text-generation
dataset_info:
- config_name: CC-MAIN-2013-20
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 71683996286
num_examples: 10800000
download_size: 55571546426
dataset_size: 71683996286
- config_name: CC-MAIN-2013-48
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 38878994623
num_examples: 5800000
download_size: 30087644388
dataset_size: 38878994623
- config_name: CC-MAIN-2014-10
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 24971658588
num_examples: 3550000
download_size: 19058832929
dataset_size: 24971658588
- config_name: CC-MAIN-2014-15
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 13615746365
num_examples: 1850000
download_size: 10299687552
dataset_size: 13615746365
- config_name: CC-MAIN-2014-23
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21798450754
num_examples: 3100000
download_size: 16663899441
dataset_size: 21798450754
- config_name: CC-MAIN-2014-35
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 10954201796
num_examples: 1500000
download_size: 8309419357
dataset_size: 10954201796
- config_name: CC-MAIN-2014-41
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 11392615401
num_examples: 1600000
download_size: 8694382261
dataset_size: 11392615401
- config_name: CC-MAIN-2014-42
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 8491740156
num_examples: 1150000
download_size: 6430841610
dataset_size: 8491740156
- config_name: CC-MAIN-2014-49
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7754099049
num_examples: 1050000
download_size: 5866979308
dataset_size: 7754099049
- config_name: CC-MAIN-2014-52
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 9953666568
num_examples: 1350000
download_size: 7521103037
dataset_size: 9953666568
- config_name: CC-MAIN-2015-06
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 8988649992
num_examples: 1200000
download_size: 6771650647
dataset_size: 8988649992
- config_name: CC-MAIN-2015-11
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 9212466984
num_examples: 1200000
download_size: 6893305603
dataset_size: 9212466984
- config_name: CC-MAIN-2015-14
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7773258320
num_examples: 1000000
download_size: 5810026390
dataset_size: 7773258320
- config_name: CC-MAIN-2015-18
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 9906342182
num_examples: 1300000
download_size: 7420897339
dataset_size: 9906342182
- config_name: CC-MAIN-2015-22
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 8677092389
num_examples: 1100000
download_size: 6445775687
dataset_size: 8677092389
- config_name: CC-MAIN-2015-27
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 8168934142
num_examples: 1050000
download_size: 6095866065
dataset_size: 8168934142
- config_name: CC-MAIN-2015-32
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7248096143
num_examples: 950000
download_size: 5438870914
dataset_size: 7248096143
- config_name: CC-MAIN-2015-35
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7905807405
num_examples: 1000000
download_size: 5886313414
dataset_size: 7905807405
- config_name: CC-MAIN-2015-40
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 6756795023
num_examples: 850000
download_size: 5020668048
dataset_size: 6756795023
- config_name: CC-MAIN-2015-48
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 9500987324
num_examples: 1200000
download_size: 7050820902
dataset_size: 9500987324
- config_name: CC-MAIN-2016-07
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 10612088943
num_examples: 1300000
download_size: 7816414470
dataset_size: 10612088943
- config_name: CC-MAIN-2016-18
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7478953157
num_examples: 1050000
download_size: 5691425154
dataset_size: 7478953157
- config_name: CC-MAIN-2016-22
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7617762727
num_examples: 1050000
download_size: 5760598348
dataset_size: 7617762727
- config_name: CC-MAIN-2016-26
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 4620338482
num_examples: 650000
download_size: 3516183695
dataset_size: 4620338482
- config_name: CC-MAIN-2016-30
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 10574077837
num_examples: 1250000
download_size: 7732067436
dataset_size: 10574077837
- config_name: CC-MAIN-2016-36
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 8503905267
num_examples: 1000000
download_size: 6208206855
dataset_size: 8503905267
- config_name: CC-MAIN-2016-40
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 15377835627
num_examples: 2350000
download_size: 11940941268
dataset_size: 15377835627
- config_name: CC-MAIN-2016-44
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 29529872165
num_examples: 4800000
download_size: 23162984623
dataset_size: 29529872165
- config_name: CC-MAIN-2016-50
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 20468372716
num_examples: 3050000
download_size: 15709742655
dataset_size: 20468372716
- config_name: CC-MAIN-2017-04
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21037186856
num_examples: 3050000
download_size: 16038345746
dataset_size: 21037186856
- config_name: CC-MAIN-2017-09
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 24443091987
num_examples: 3450000
download_size: 18578003959
dataset_size: 24443091987
- config_name: CC-MAIN-2017-13
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 42541966320
num_examples: 6350000
download_size: 32897843366
dataset_size: 42541966320
- config_name: CC-MAIN-2017-17
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 28067316341
num_examples: 4200000
download_size: 21670006912
dataset_size: 28067316341
- config_name: CC-MAIN-2017-22
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21612347473
num_examples: 3250000
download_size: 16727380174
dataset_size: 21612347473
- config_name: CC-MAIN-2017-26
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 26930164929
num_examples: 4150000
download_size: 21000453887
dataset_size: 26930164929
- config_name: CC-MAIN-2017-30
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 19514567064
num_examples: 3050000
download_size: 15274197942
dataset_size: 19514567064
- config_name: CC-MAIN-2017-34
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21825880789
num_examples: 3450000
download_size: 17131331406
dataset_size: 21825880789
- config_name: CC-MAIN-2017-39
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21861199076
num_examples: 3250000
download_size: 16864955620
dataset_size: 21861199076
- config_name: CC-MAIN-2017-43
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22225780468
num_examples: 3250000
download_size: 17081326644
dataset_size: 22225780468
- config_name: CC-MAIN-2017-47
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 20302441730
num_examples: 2950000
download_size: 15588692671
dataset_size: 20302441730
- config_name: CC-MAIN-2017-51
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 17337207614
num_examples: 2550000
download_size: 13346917136
dataset_size: 17337207614
- config_name: CC-MAIN-2018-05
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22738512950
num_examples: 3450000
download_size: 17607554751
dataset_size: 22738512950
- config_name: CC-MAIN-2018-09
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 23340323268
num_examples: 3600000
download_size: 18151119519
dataset_size: 23340323268
- config_name: CC-MAIN-2018-13
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 19001159420
num_examples: 2900000
download_size: 14753194653
dataset_size: 19001159420
- config_name: CC-MAIN-2018-17
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 17258341719
num_examples: 2600000
download_size: 13340501927
dataset_size: 17258341719
- config_name: CC-MAIN-2018-22
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 17491169826
num_examples: 2600000
download_size: 13470743712
dataset_size: 17491169826
- config_name: CC-MAIN-2018-26
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21455735998
num_examples: 3100000
download_size: 16280241314
dataset_size: 21455735998
- config_name: CC-MAIN-2018-30
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 18192174874
num_examples: 2500000
download_size: 13725747144
dataset_size: 18192174874
- config_name: CC-MAIN-2018-34
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 15796036932
num_examples: 2200000
download_size: 11987788874
dataset_size: 15796036932
- config_name: CC-MAIN-2018-39
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 16307757771
num_examples: 2200000
download_size: 12290791012
dataset_size: 16307757771
- config_name: CC-MAIN-2018-43
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 25677124234
num_examples: 3800000
download_size: 19573087580
dataset_size: 25677124234
- config_name: CC-MAIN-2018-47
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22875798193
num_examples: 3150000
download_size: 17281464409
dataset_size: 22875798193
- config_name: CC-MAIN-2018-51
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22594268378
num_examples: 3300000
download_size: 17343595987
dataset_size: 22594268378
- config_name: CC-MAIN-2019-04
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21133044139
num_examples: 3050000
download_size: 16192299666
dataset_size: 21133044139
- config_name: CC-MAIN-2019-09
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 20593069774
num_examples: 2850000
download_size: 15604520079
dataset_size: 20593069774
- config_name: CC-MAIN-2019-13
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 18350086234
num_examples: 2500000
download_size: 13859628789
dataset_size: 18350086234
- config_name: CC-MAIN-2019-18
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 19748675634
num_examples: 2650000
download_size: 14875559796
dataset_size: 19748675634
- config_name: CC-MAIN-2019-22
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22315609811
num_examples: 3100000
download_size: 16925720280
dataset_size: 22315609811
- config_name: CC-MAIN-2019-26
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 20009950205
num_examples: 2750000
download_size: 15138826344
dataset_size: 20009950205
- config_name: CC-MAIN-2019-30
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 20153093525
num_examples: 2750000
download_size: 15229175301
dataset_size: 20153093525
- config_name: CC-MAIN-2019-35
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 23793900737
num_examples: 3300000
download_size: 18011655759
dataset_size: 23793900737
- config_name: CC-MAIN-2019-39
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21250081982
num_examples: 2950000
download_size: 16107325180
dataset_size: 21250081982
- config_name: CC-MAIN-2019-43
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 23381976513
num_examples: 3150000
download_size: 17578322332
dataset_size: 23381976513
- config_name: CC-MAIN-2019-47
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22916970895
num_examples: 3150000
download_size: 17302792952
dataset_size: 22916970895
- config_name: CC-MAIN-2019-51
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 19001480990
num_examples: 2600000
download_size: 14340161761
dataset_size: 19001480990
- config_name: CC-MAIN-2020-05
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21571233444
num_examples: 2950000
download_size: 16258182796
dataset_size: 21571233444
- config_name: CC-MAIN-2020-10
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21550911640
num_examples: 3000000
download_size: 16304815033
dataset_size: 21550911640
- config_name: CC-MAIN-2020-16
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 23381117349
num_examples: 3300000
download_size: 17744530068
dataset_size: 23381117349
- config_name: CC-MAIN-2020-24
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 25046680820
num_examples: 3550000
download_size: 19043052442
dataset_size: 25046680820
- config_name: CC-MAIN-2020-29
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 28072475139
num_examples: 3900000
download_size: 21219908593
dataset_size: 28072475139
- config_name: CC-MAIN-2020-34
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 23905419397
num_examples: 3300000
download_size: 18053065929
dataset_size: 23905419397
- config_name: CC-MAIN-2020-40
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 31964517781
num_examples: 4650000
download_size: 24445166342
dataset_size: 31964517781
- config_name: CC-MAIN-2020-45
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 28978958859
num_examples: 4150000
download_size: 22052543740
dataset_size: 28978958859
- config_name: CC-MAIN-2020-50
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 25828281117
num_examples: 3650000
download_size: 19596280713
dataset_size: 25828281117
- config_name: CC-MAIN-2021-04
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 32044317476
num_examples: 4450000
download_size: 24218057264
dataset_size: 32044317476
- config_name: CC-MAIN-2021-10
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 30664456445
num_examples: 4200000
download_size: 23053325617
dataset_size: 30664456445
- config_name: CC-MAIN-2021-17
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 33620957572
num_examples: 4450000
download_size: 25055730596
dataset_size: 33620957572
- config_name: CC-MAIN-2021-21
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 26740503282
num_examples: 3600000
download_size: 20011648584
dataset_size: 26740503282
- config_name: CC-MAIN-2021-25
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 29160290793
num_examples: 3950000
download_size: 21855396161
dataset_size: 29160290793
- config_name: CC-MAIN-2021-31
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 29149182919
num_examples: 3900000
download_size: 21785469714
dataset_size: 29149182919
- config_name: CC-MAIN-2021-39
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 33379845273
num_examples: 4550000
download_size: 25057576194
dataset_size: 33379845273
- config_name: CC-MAIN-2021-43
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 34332026077
num_examples: 4700000
download_size: 25789733401
dataset_size: 34332026077
- config_name: CC-MAIN-2021-49
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 31418299354
num_examples: 4350000
download_size: 23666249860
dataset_size: 31418299354
- config_name: CC-MAIN-2022-05
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 32596625853
num_examples: 4450000
download_size: 24458356127
dataset_size: 32596625853
- config_name: CC-MAIN-2022-21
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 47752322889
num_examples: 6550000
download_size: 35853678975
dataset_size: 47752322889
- config_name: CC-MAIN-2022-27
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 40292830914
num_examples: 5550000
download_size: 30279346466
dataset_size: 40292830914
- config_name: CC-MAIN-2022-33
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 34010483286
num_examples: 4750000
download_size: 25633769458
dataset_size: 34010483286
- config_name: CC-MAIN-2022-40
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 39211229907
num_examples: 5350000
download_size: 29318062267
dataset_size: 39211229907
- config_name: CC-MAIN-2022-49
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 40322136408
num_examples: 5450000
download_size: 30095433549
dataset_size: 40322136408
- config_name: CC-MAIN-2023-06
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 39078745132
num_examples: 5250000
download_size: 29058170760
dataset_size: 39078745132
- config_name: CC-MAIN-2023-14
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 76461834465
num_examples: 10050000
download_size: 56751401774
dataset_size: 76461834465
- config_name: CC-MAIN-2023-23
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 76112971386
num_examples: 9950000
download_size: 56347776355
dataset_size: 76112971386
- config_name: CC-MAIN-2023-40
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 63452197995
num_examples: 8100000
download_size: 46078925605
dataset_size: 63452197995
- config_name: CC-MAIN-2023-50
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 63566623396
num_examples: 8200000
download_size: 46245587660
dataset_size: 63566623396
- config_name: CC-MAIN-2024-10
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 43172700112
num_examples: 5750000
download_size: 31501561162
dataset_size: 43172700112
configs:
- config_name: CC-MAIN-2013-20
data_files:
- split: train
path: data/CC-MAIN-2013-20/train-*
- config_name: CC-MAIN-2013-48
data_files:
- split: train
path: data/CC-MAIN-2013-48/train-*
- config_name: CC-MAIN-2014-10
data_files:
- split: train
path: data/CC-MAIN-2014-10/train-*
- config_name: CC-MAIN-2014-15
data_files:
- split: train
path: data/CC-MAIN-2014-15/train-*
- config_name: CC-MAIN-2014-23
data_files:
- split: train
path: data/CC-MAIN-2014-23/train-*
- config_name: CC-MAIN-2014-35
data_files:
- split: train
path: data/CC-MAIN-2014-35/train-*
- config_name: CC-MAIN-2014-41
data_files:
- split: train
path: data/CC-MAIN-2014-41/train-*
- config_name: CC-MAIN-2014-42
data_files:
- split: train
path: data/CC-MAIN-2014-42/train-*
- config_name: CC-MAIN-2014-49
data_files:
- split: train
path: data/CC-MAIN-2014-49/train-*
- config_name: CC-MAIN-2014-52
data_files:
- split: train
path: data/CC-MAIN-2014-52/train-*
- config_name: CC-MAIN-2015-06
data_files:
- split: train
path: data/CC-MAIN-2015-06/train-*
- config_name: CC-MAIN-2015-11
data_files:
- split: train
path: data/CC-MAIN-2015-11/train-*
- config_name: CC-MAIN-2015-14
data_files:
- split: train
path: data/CC-MAIN-2015-14/train-*
- config_name: CC-MAIN-2015-18
data_files:
- split: train
path: data/CC-MAIN-2015-18/train-*
- config_name: CC-MAIN-2015-22
data_files:
- split: train
path: data/CC-MAIN-2015-22/train-*
- config_name: CC-MAIN-2015-27
data_files:
- split: train
path: data/CC-MAIN-2015-27/train-*
- config_name: CC-MAIN-2015-32
data_files:
- split: train
path: data/CC-MAIN-2015-32/train-*
- config_name: CC-MAIN-2015-35
data_files:
- split: train
path: data/CC-MAIN-2015-35/train-*
- config_name: CC-MAIN-2015-40
data_files:
- split: train
path: data/CC-MAIN-2015-40/train-*
- config_name: CC-MAIN-2015-48
data_files:
- split: train
path: data/CC-MAIN-2015-48/train-*
- config_name: CC-MAIN-2016-07
data_files:
- split: train
path: data/CC-MAIN-2016-07/train-*
- config_name: CC-MAIN-2016-18
data_files:
- split: train
path: data/CC-MAIN-2016-18/train-*
- config_name: CC-MAIN-2016-22
data_files:
- split: train
path: data/CC-MAIN-2016-22/train-*
- config_name: CC-MAIN-2016-26
data_files:
- split: train
path: data/CC-MAIN-2016-26/train-*
- config_name: CC-MAIN-2016-30
data_files:
- split: train
path: data/CC-MAIN-2016-30/train-*
- config_name: CC-MAIN-2016-36
data_files:
- split: train
path: data/CC-MAIN-2016-36/train-*
- config_name: CC-MAIN-2016-40
data_files:
- split: train
path: data/CC-MAIN-2016-40/train-*
- config_name: CC-MAIN-2016-44
data_files:
- split: train
path: data/CC-MAIN-2016-44/train-*
- config_name: CC-MAIN-2016-50
data_files:
- split: train
path: data/CC-MAIN-2016-50/train-*
- config_name: CC-MAIN-2017-04
data_files:
- split: train
path: data/CC-MAIN-2017-04/train-*
- config_name: CC-MAIN-2017-09
data_files:
- split: train
path: data/CC-MAIN-2017-09/train-*
- config_name: CC-MAIN-2017-13
data_files:
- split: train
path: data/CC-MAIN-2017-13/train-*
- config_name: CC-MAIN-2017-17
data_files:
- split: train
path: data/CC-MAIN-2017-17/train-*
- config_name: CC-MAIN-2017-22
data_files:
- split: train
path: data/CC-MAIN-2017-22/train-*
- config_name: CC-MAIN-2017-26
data_files:
- split: train
path: data/CC-MAIN-2017-26/train-*
- config_name: CC-MAIN-2017-30
data_files:
- split: train
path: data/CC-MAIN-2017-30/train-*
- config_name: CC-MAIN-2017-34
data_files:
- split: train
path: data/CC-MAIN-2017-34/train-*
- config_name: CC-MAIN-2017-39
data_files:
- split: train
path: data/CC-MAIN-2017-39/train-*
- config_name: CC-MAIN-2017-43
data_files:
- split: train
path: data/CC-MAIN-2017-43/train-*
- config_name: CC-MAIN-2017-47
data_files:
- split: train
path: data/CC-MAIN-2017-47/train-*
- config_name: CC-MAIN-2017-51
data_files:
- split: train
path: data/CC-MAIN-2017-51/train-*
- config_name: CC-MAIN-2018-05
data_files:
- split: train
path: data/CC-MAIN-2018-05/train-*
- config_name: CC-MAIN-2018-09
data_files:
- split: train
path: data/CC-MAIN-2018-09/train-*
- config_name: CC-MAIN-2018-13
data_files:
- split: train
path: data/CC-MAIN-2018-13/train-*
- config_name: CC-MAIN-2018-17
data_files:
- split: train
path: data/CC-MAIN-2018-17/train-*
- config_name: CC-MAIN-2018-22
data_files:
- split: train
path: data/CC-MAIN-2018-22/train-*
- config_name: CC-MAIN-2018-26
data_files:
- split: train
path: data/CC-MAIN-2018-26/train-*
- config_name: CC-MAIN-2018-30
data_files:
- split: train
path: data/CC-MAIN-2018-30/train-*
- config_name: CC-MAIN-2018-34
data_files:
- split: train
path: data/CC-MAIN-2018-34/train-*
- config_name: CC-MAIN-2018-39
data_files:
- split: train
path: data/CC-MAIN-2018-39/train-*
- config_name: CC-MAIN-2018-43
data_files:
- split: train
path: data/CC-MAIN-2018-43/train-*
- config_name: CC-MAIN-2018-47
data_files:
- split: train
path: data/CC-MAIN-2018-47/train-*
- config_name: CC-MAIN-2018-51
data_files:
- split: train
path: data/CC-MAIN-2018-51/train-*
- config_name: CC-MAIN-2019-04
data_files:
- split: train
path: data/CC-MAIN-2019-04/train-*
- config_name: CC-MAIN-2019-09
data_files:
- split: train
path: data/CC-MAIN-2019-09/train-*
- config_name: CC-MAIN-2019-13
data_files:
- split: train
path: data/CC-MAIN-2019-13/train-*
- config_name: CC-MAIN-2019-18
data_files:
- split: train
path: data/CC-MAIN-2019-18/train-*
- config_name: CC-MAIN-2019-22
data_files:
- split: train
path: data/CC-MAIN-2019-22/train-*
- config_name: CC-MAIN-2019-26
data_files:
- split: train
path: data/CC-MAIN-2019-26/train-*
- config_name: CC-MAIN-2019-30
data_files:
- split: train
path: data/CC-MAIN-2019-30/train-*
- config_name: CC-MAIN-2019-35
data_files:
- split: train
path: data/CC-MAIN-2019-35/train-*
- config_name: CC-MAIN-2019-39
data_files:
- split: train
path: data/CC-MAIN-2019-39/train-*
- config_name: CC-MAIN-2019-43
data_files:
- split: train
path: data/CC-MAIN-2019-43/train-*
- config_name: CC-MAIN-2019-47
data_files:
- split: train
path: data/CC-MAIN-2019-47/train-*
- config_name: CC-MAIN-2019-51
data_files:
- split: train
path: data/CC-MAIN-2019-51/train-*
- config_name: CC-MAIN-2020-05
data_files:
- split: train
path: data/CC-MAIN-2020-05/train-*
- config_name: CC-MAIN-2020-10
data_files:
- split: train
path: data/CC-MAIN-2020-10/train-*
- config_name: CC-MAIN-2020-16
data_files:
- split: train
path: data/CC-MAIN-2020-16/train-*
- config_name: CC-MAIN-2020-24
data_files:
- split: train
path: data/CC-MAIN-2020-24/train-*
- config_name: CC-MAIN-2020-29
data_files:
- split: train
path: data/CC-MAIN-2020-29/train-*
- config_name: CC-MAIN-2020-34
data_files:
- split: train
path: data/CC-MAIN-2020-34/train-*
- config_name: CC-MAIN-2020-40
data_files:
- split: train
path: data/CC-MAIN-2020-40/train-*
- config_name: CC-MAIN-2020-45
data_files:
- split: train
path: data/CC-MAIN-2020-45/train-*
- config_name: CC-MAIN-2020-50
data_files:
- split: train
path: data/CC-MAIN-2020-50/train-*
- config_name: CC-MAIN-2021-04
data_files:
- split: train
path: data/CC-MAIN-2021-04/train-*
- config_name: CC-MAIN-2021-10
data_files:
- split: train
path: data/CC-MAIN-2021-10/train-*
- config_name: CC-MAIN-2021-17
data_files:
- split: train
path: data/CC-MAIN-2021-17/train-*
- config_name: CC-MAIN-2021-21
data_files:
- split: train
path: data/CC-MAIN-2021-21/train-*
- config_name: CC-MAIN-2021-25
data_files:
- split: train
path: data/CC-MAIN-2021-25/train-*
- config_name: CC-MAIN-2021-31
data_files:
- split: train
path: data/CC-MAIN-2021-31/train-*
- config_name: CC-MAIN-2021-39
data_files:
- split: train
path: data/CC-MAIN-2021-39/train-*
- config_name: CC-MAIN-2021-43
data_files:
- split: train
path: data/CC-MAIN-2021-43/train-*
- config_name: CC-MAIN-2021-49
data_files:
- split: train
path: data/CC-MAIN-2021-49/train-*
- config_name: CC-MAIN-2022-05
data_files:
- split: train
path: data/CC-MAIN-2022-05/train-*
- config_name: CC-MAIN-2022-21
data_files:
- split: train
path: data/CC-MAIN-2022-21/train-*
- config_name: CC-MAIN-2022-27
data_files:
- split: train
path: data/CC-MAIN-2022-27/train-*
- config_name: CC-MAIN-2022-33
data_files:
- split: train
path: data/CC-MAIN-2022-33/train-*
- config_name: CC-MAIN-2022-40
data_files:
- split: train
path: data/CC-MAIN-2022-40/train-*
- config_name: CC-MAIN-2022-49
data_files:
- split: train
path: data/CC-MAIN-2022-49/train-*
- config_name: CC-MAIN-2023-06
data_files:
- split: train
path: data/CC-MAIN-2023-06/train-*
- config_name: CC-MAIN-2023-14
data_files:
- split: train
path: data/CC-MAIN-2023-14/train-*
- config_name: CC-MAIN-2023-23
data_files:
- split: train
path: data/CC-MAIN-2023-23/train-*
- config_name: CC-MAIN-2023-40
data_files:
- split: train
path: data/CC-MAIN-2023-40/train-*
- config_name: CC-MAIN-2023-50
data_files:
- split: train
path: data/CC-MAIN-2023-50/train-*
- config_name: CC-MAIN-2024-10
data_files:
- split: train
path: data/CC-MAIN-2024-10/train-*
---
# Fineweb-Edu-Fortified
<figure>
<img src="https://cdn-uploads.huggingface.co/production/uploads/646516d2200b583e1e50faf8/79yPdK79m9mA0cCz-3h4v.png" width="500" style="margin-left:auto; margin-right: auto"/>
<figcaption style="text-align: center; margin-left: auto; margin-right: auto; font-style: italic;">
The composition of fineweb-edu-fortified, produced by automatically clustering a 500k row sample in
<a href="https://app.airtrain.ai/dataset/c232b33f-4f4a-49a7-ba55-8167a5f433da/null/1/0"> Airtrain </a>
</figcaption>
</figure>
## What is it?
Fineweb-Edu-Fortified is a dataset derived from
[Fineweb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) by applying exact-match
deduplication across the whole dataset and producing an embedding for each row. The number of times
the text from each row appears is also included as a `count` column. The embeddings were produced
using [TaylorAI/bge-micro](https://huggingface.co/TaylorAI/bge-micro).
Fineweb and Fineweb-Edu were obtained by processing data from 95 crawls of
[Common Crawl](https://commoncrawl.org/), covering a time period from 2013 to 2024.
More information about the original datasets can be found by consulting:
- [Fineweb-edu dataset card](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
- [Fineweb dataset card](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
- [Fineweb release blog post](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1)
- [Fineweb paper](https://arxiv.org/abs/2406.17557)
The contents of a randomly selected 500k rows from this dataset can be interactively
explored in this
[Airtrain](https://app.airtrain.ai/dataset/c232b33f-4f4a-49a7-ba55-8167a5f433da/null/1/0)
dashboard.
## Deduplication
### Deduplication in original Fineweb and Fineweb-Edu
During creation of the original Fineweb dataset, a variety of deduplication strategies were
explored. The evaluation criterion used to assess each strategy was to train ablation models
on randomly selected subsets of the data, using a subset of up to ~350 billion tokens.
Using this mechanism, the Fineweb authors selected a MinHash algorithm, with parameters that
treat documents of approximately 75% or higher similarity as duplicates. This deduplication was
performed *within* each Common Crawl crawl. For example, it would have removed all approximate
duplicates from the 20th crawl of 2013, but would have retained an identical record that showed up
in both the 2013-20 crawl and the 2013-48 crawl. The authors note that applying the
deduplication *across crawls* reduced the evaluation performance of the ablation models used
for assessment. The proposed reason for this performance degradation is that data
duplicated across crawls is more likely to be high quality than data that is not,
so leaving in the duplicates effectively upsamples the higher-quality data.
Following deduplication in Fineweb, Fineweb-Edu was extracted using a model-based quality classifier
targeting educational content. It thus inherited Fineweb's deduplication strategy, in which duplicates across crawls are retained.
### Deduplication in this dataset
#### Motivation
Given the findings that cross-crawl deduplication reduced ablation model performance, one might ask
what the motivation is for producing a dataset that uses it. Our motivation was threefold:
- Reduce the number of rows that needed to be embedded by avoiding embedding of exact-match content
- Enable easier filtering of the dataset for subsets-of-interest
- Provide a version of the dataset for users whose training goals include avoiding training on non-unique
tokens.
For use cases that would benefit from "re-hydrating" or filtering the rows based on how frequently
the text appeared in the original dataset, the new `count` column retains the number of appearances
of the associated text.
#### Procedure
The overall procedure was to remove exact matches that appeared in multiple crawls (also referred to
as "dumps"). This was achieved by performing an md5 hash on the text column and removing rows with
duplicate hashes. To make this tractable at scale, we first grouped all rows by the first two hex
digits of their hashes, then looked for exact hash matches within each of the resulting 256
buckets of data. Note that unlike the intra-crawl deduplication, we only eliminated exact matches
across crawls. For duplicated rows, a strong preference was given to keep the metadata
(ex: dump, url) from the oldest crawl where the text appeared. Following deduplication and
embedding, the data were grouped by the "dump" column, mirroring the organization of the original
Fineweb-Edu dataset.
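A simplified sketch of this bucketed exact-match deduplication follows. The real pipeline ran at far larger scale; the row fields are illustrative, and sorting dump names lexicographically as a proxy for crawl age is an assumption (it happens to work for `CC-MAIN-YYYY-WW`-style names):

```python
import hashlib
from collections import defaultdict


def bucket_key(text: str) -> str:
    """First two hex digits of the md5 hash -> one of 256 buckets."""
    return hashlib.md5(text.encode("utf-8")).hexdigest()[:2]


def deduplicate(rows):
    """Keep one row per exact text match, preferring metadata from the
    oldest crawl, and record the number of appearances in `count`."""
    buckets = defaultdict(dict)  # bucket -> {full md5 hash: kept row}
    for row in sorted(rows, key=lambda r: r["dump"]):  # oldest dump first
        full_hash = hashlib.md5(row["text"].encode("utf-8")).hexdigest()
        kept = buckets[bucket_key(row["text"])]
        if full_hash in kept:
            kept[full_hash]["count"] += 1  # exact duplicate: bump count
        else:
            kept[full_hash] = dict(row, count=1)
    return [r for bucket in buckets.values() for r in bucket.values()]
```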
### Deduplication stats
Deduplication removed approximately 74.7% of rows from the original dataset
(from 1.279 billion in Fineweb-Edu to 0.324 billion rows in Fineweb-Edu-Fortified).
This indicates that a substantial amount of data in Fineweb-Edu is present across multiple crawls.
The total token count in the deduplicated dataset is approximately 375 billion, compared to the
1,320 billion tokens in Fineweb-Edu.
<figure>
<img src="https://cdn-uploads.huggingface.co/production/uploads/646516d2200b583e1e50faf8/mUFyO1fUWJEXbYwiteR9e.png" width="750" style="margin-left:auto; margin-right: auto"/>
<figcaption style="text-align: center; margin-left: auto; margin-right: auto; font-style: italic;">
A histogram of the `count` column. The histogram was generated from a 500k-row sample after
performing global per-row duplicate counting.
</figcaption>
</figure>
## Embeddings
To support use cases with Fineweb-Edu such as classification, clustering, semantic search, etc.,
we have produced an embedding vector for each row in the dataset. The embedding model
[TaylorAI/bge-micro](https://huggingface.co/TaylorAI/bge-micro)
was selected for its tradeoff of strong performance on [MTEB](https://huggingface.co/spaces/mteb/leaderboard)
benchmarks relative to its size (17 million parameters). The model's embedding space
has 384 dimensions. Its context window is 512 tokens (roughly several paragraphs of text);
each row is embedded using the first 512 tokens of its text field. Producing the embeddings took approximately
412 GPU-hours on Nvidia T4 GPUs.
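A minimal sketch of the truncation step described above. The real pipeline would use the bge-micro tokenizer; whitespace splitting here is a simplified stand-in:

```python
MAX_TOKENS = 512  # bge-micro's context window


def truncate_for_embedding(text: str, max_tokens: int = MAX_TOKENS) -> str:
    """Keep only the first `max_tokens` tokens of a row's text field.

    Whitespace tokenization is an approximation; the actual model uses
    its own subword tokenizer.
    """
    tokens = text.split()
    return " ".join(tokens[:max_tokens])
```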
## Using via `datasets`
```python
from datasets import load_dataset
fw = load_dataset("airtrain-ai/fineweb-edu-fortified", name="CC-MAIN-2024-10", split="train", streaming=True)
```
## Considerations for Using the Data
This "Considerations" section is copied from the parent dataset:
[FineWeb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
### Social Impact of Dataset
With the release of this dataset we aim to make model training more accessible to the machine learning community at large.
While multiple open-weights models with strong performance have been publicly released in the past, more often than not these releases are not accompanied by the corresponding training dataset. This is unfortunate as the dataset specificities and characteristics have been demonstrated to have a very large impact and role in the performances of the models. As the creation of a high quality training dataset is a fundamental requirement to training an LLM capable of excelling at downstream tasks, with 🍷 FineWeb we (a) not only make the dataset creation process more transparent, by sharing our entire processing setup including the codebase used, we also (b) help alleviate the costs of dataset curation, both in time and in compute, for model creators by publicly releasing our dataset with the community.
### Discussion of Biases
Efforts were made to minimize the amount of NSFW and toxic content present in the dataset by employing filtering on the URL level. However, there are still a significant number of documents present in the final dataset that could be considered toxic or contain harmful content. As 🍷 FineWeb was sourced from the web as a whole, any harmful biases typically present in it may be reproduced on our dataset.
We deliberately avoided using machine learning filtering methods that define text quality based on the similarity to a “gold” source such as wikipedia or toxicity classifiers as these methods have been known to [disproportionately remove content in specific dialects](https://aclanthology.org/D16-1120/) and [overclassify as toxic text related to specific social identities](https://arxiv.org/pdf/2109.07445.pdf), respectively.
### Other Known Limitations
As a consequence of some of the filtering steps applied, it is likely that code content is not prevalent in our dataset. If you are training a model that should also perform code tasks, we recommend you use 🍷 FineWeb with a code dataset, such as [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2). You should also probably consider complementing 🍷 FineWeb with specialized curated sources (such as Wikipedia, for example) as they will likely have better formatting than the wikipedia content included in 🍷 FineWeb (we did not tailor the processing to individual websites).
## Additional Information
### Acknowledgements
Airtrain would like to thank the Fineweb/Fineweb-Edu team at Hugging Face for producing the original datasets,
as well as for their support during work on Fineweb-Edu-Fortified.
We'd also like to thank [@underspirit](https://huggingface.co/underspirit) for
[pointing out](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu/discussions/7)
the amount of reduction in dataset size that could be achieved via deduplication.
We owe gratitude to [TaylorAI](https://huggingface.co/TaylorAI) for the `bge-micro` embedding model.
Finally, thank you to the Hugging Face community for fostering a thriving ecosystem of models, datasets, and tools
to support open-source AI.
### Licensing Information
The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0** [license](https://opendatacommons.org/licenses/by/1-0/). The use of this dataset is also subject to [CommonCrawl's Terms of Use](https://commoncrawl.org/terms-of-use).
|
EleutherAI/drop | EleutherAI | "2025-01-10T23:56:02Z" | 10,472 | 1 | [
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-08-30T10:15:08Z" | ---
license: cc-by-4.0
--- |
artefactory/Argimi-Ardian-Finance-10k-text-image | artefactory | "2025-01-06T09:47:20Z" | 10,442 | 6 | [
"task_categories:text-retrieval",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"region:us",
"finance"
] | [
"text-retrieval",
"text-generation"
] | "2024-11-29T13:26:42Z" | ---
license: cc-by-4.0
task_categories:
- text-retrieval
- text-generation
language:
- en
tags:
- finance
size_categories:
- 10K<n<100K
---
# The ArGiMI Ardian datasets : text and images

The ArGiMi project is committed to open-source principles and data sharing.
Thanks to our generous partners, we are releasing several valuable datasets to the public.
## Dataset description
This dataset comprises 11,000 financial annual reports, written in English, meticulously
extracted from their original PDF format to provide a valuable resource for researchers and developers in financial
analysis and natural language processing (NLP). These reports were published from the late 90s to 2023.
This dataset provides images of each document pages. A lighter, **text-only version**, is also available at
[`artefactory/Argimi-Ardian-Finance-10k-text`](https://huggingface.co/datasets/artefactory/Argimi-Ardian-Finance-10k-text).
You can load the dataset with:
```python
from datasets import load_dataset
ds = load_dataset("artefactory/Argimi-Ardian-Finance-10k-text-image", split="train")
# Or you can stream the dataset to save memory space :
ds = load_dataset("artefactory/Argimi-Ardian-Finance-10k-text-image", split="train", streaming=True)
```
## Dataset composition:
* Each PDF was divided into **individual pages** to facilitate granular analysis.
* For each page, the following data points were extracted:
* **Raw Text:** The complete textual content of the page, capturing all textual information present.
* **Screenshot:** A high-resolution image of the page, preserving the visual layout and formatting.
* **Cells:** Each cell within tables was identified and represented as a `Cell` object within the `docling` framework. Each `Cell` object encapsulates:
* `id`: A unique identifier assigned to each cell, ensuring unambiguous referencing.
* `text`: The textual content contained within the cell.
* `bbox`: The precise bounding box coordinates of the cell, defining its location and dimensions on the page.
* When OCR is employed, cells are further represented as `OcrCell` objects, which include an additional `confidence` attribute. This attribute quantifies the confidence level of the OCR process in accurately recognizing the cell's textual content.
* **Segments:** Beyond individual cells, `docling` segments each page into distinct content units, each represented as a `Segment` object. These segments provide a structured representation of the document's layout and content, encompassing elements such as tables, headers, paragraphs, and other structural components. Each `Segment` object contains:
* `text`: The textual content of the segment.
* `bbox`: The bounding box coordinates, specifying the segment's position and size on the page.
* `label`: A categorical label indicating the type of content the segment represents (e.g., "table," "header," "paragraph").
* To guarantee unique identification, each document is assigned a distinct identifier derived from the hash of its content.
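The card does not specify which hash function is used for these identifiers; the sketch below assumes SHA-256 purely for illustration of a content-derived, collision-resistant ID:

```python
import hashlib


def document_id(pdf_bytes: bytes) -> str:
    """Derive a stable identifier from a document's raw bytes.

    SHA-256 is an assumption here; the dataset authors only state that
    the identifier is derived from a hash of the content.
    """
    return hashlib.sha256(pdf_bytes).hexdigest()
```

The key property is that identical content always maps to the same identifier, while any change to the bytes yields a different one.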
## Parsing description:
The dataset creation involved a systematic process using the `docling` library ([Documentation](https://ds4sd.github.io/docling/)).
* PDFs were processed using the `DocumentConverter` class, employing the `PyPdfiumDocumentBackend` for handling the PDF format.
* To ensure high-quality extraction, the following `PdfPipelineOptions` were configured:
```python
pipeline_options = PdfPipelineOptions(ocr_options=EasyOcrOptions(use_gpu=True))
pipeline_options.images_scale = 2.0 # Scale image resolution by a factor of 2
pipeline_options.generate_page_images = True # Generate page images
pipeline_options.do_ocr = True # Perform OCR
pipeline_options.do_table_structure = True # Extract table structure
pipeline_options.table_structure_options.do_cell_matching = True # Perform cell matching in tables
pipeline_options.table_structure_options.mode = TableFormerMode.ACCURATE # Use accurate mode for table structure extraction
```
* These options collectively enable:
* GPU-accelerated Optical Character Recognition (OCR) via `EasyOcr`.
* Upscaling of image resolution by a factor of 2, enhancing the clarity of visual elements.
* Generation of page images, providing a visual representation of each page within the document.
* Comprehensive table structure extraction, including cell matching, to accurately capture tabular data within the reports.
* The "accurate" mode for table structure extraction, prioritizing precision in identifying and delineating tables.
## Disclaimer:
This dataset, made available for experimental purposes as part of the ArGiMi research project, is provided "as is"
for informational purposes only. The original publicly available data was provided by Ardian.
Artefact has processed this dataset and now publicly releases it through Ardian, with Ardian's agreement.
None of ArGiMi, Artefact, or Ardian makes any representations or warranties of any kind (express or implied) regarding the completeness,
accuracy, reliability, suitability, or availability of the dataset or its contents.
Any reliance you place on such information is strictly at your own risk.
In no event shall ArGiMi, Artefact, or Ardian be liable for any loss or damage, including without limitation,
indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of,
or in connection with, the use of this dataset. This disclaimer includes, but is not limited to,
claims relating to intellectual property infringement, negligence, breach of contract, and defamation.
## Acknowledgement:
The ArGiMi consortium gratefully acknowledges Ardian for their invaluable contribution in gathering the documents that
comprise this dataset. Their effort and collaboration were essential in enabling the creation and release of this dataset for public use.
The ArGiMi project is a collaborative project with Giskard, Mistral, INA and BnF, and is sponsored by the
France 2030 program of the French Government.
## Citation:
If you find our datasets useful for your research, please consider citing us in your work:
```latex
@misc{argimi2024Datasets,
title={The ArGiMi datasets},
author={Hicham Randrianarivo and Charles Moslonka and Arthur Garnier and Emmanuel Malherbe},
year={2024},
}
``` |
ilsp/mmlu_greek | ilsp | "2024-05-20T12:36:54Z" | 10,419 | 4 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-04-01T14:53:41Z" | ---
dataset_info:
- config_name: abstract_algebra
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 58157
num_examples: 100
- name: validation
num_bytes: 6010
num_examples: 11
- name: dev
num_bytes: 2497
num_examples: 5
download_size: 0
dataset_size: 66664
- config_name: all
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 20041347
num_examples: 14042
- name: validation
num_bytes: 2196992
num_examples: 1531
- name: dev
num_bytes: 360807
num_examples: 285
download_size: 10333898
dataset_size: 22599146
- config_name: anatomy
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 97333
num_examples: 135
- name: validation
num_bytes: 9131
num_examples: 14
- name: dev
num_bytes: 2731
num_examples: 5
download_size: 67694
dataset_size: 109195
- config_name: astronomy
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 141580
num_examples: 152
- name: validation
num_bytes: 15462
num_examples: 16
- name: dev
num_bytes: 6380
num_examples: 5
download_size: 95251
dataset_size: 163422
- config_name: business_ethics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 101936
num_examples: 100
- name: validation
num_bytes: 9096
num_examples: 11
- name: dev
num_bytes: 6368
num_examples: 5
download_size: 77394
dataset_size: 117400
- config_name: clinical_knowledge
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 193539
num_examples: 265
- name: validation
num_bytes: 20500
num_examples: 29
- name: dev
num_bytes: 3720
num_examples: 5
download_size: 126056
dataset_size: 217759
- config_name: college_biology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 152394
num_examples: 144
- name: validation
num_bytes: 14995
num_examples: 16
- name: dev
num_bytes: 4638
num_examples: 5
download_size: 105576
dataset_size: 172027
- config_name: college_chemistry
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 72251
num_examples: 100
- name: validation
num_bytes: 6677
num_examples: 8
- name: dev
num_bytes: 3862
num_examples: 5
download_size: 61210
dataset_size: 82790
- config_name: college_computer_science
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 135321
num_examples: 100
- name: validation
num_bytes: 15037
num_examples: 11
- name: dev
num_bytes: 8606
num_examples: 5
download_size: 101342
dataset_size: 158964
- config_name: college_mathematics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 74448
num_examples: 100
- name: validation
num_bytes: 8274
num_examples: 11
- name: dev
num_bytes: 4276
num_examples: 5
download_size: 63556
dataset_size: 86998
- config_name: college_medicine
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 251805
num_examples: 173
- name: validation
num_bytes: 24431
num_examples: 22
- name: dev
num_bytes: 5031
num_examples: 5
download_size: 144635
dataset_size: 281267
- config_name: college_physics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 90708
num_examples: 102
- name: validation
num_bytes: 10367
num_examples: 11
- name: dev
num_bytes: 4139
num_examples: 5
download_size: 68341
dataset_size: 105214
- config_name: computer_security
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 86922
num_examples: 100
- name: validation
num_bytes: 14003
num_examples: 11
- name: dev
num_bytes: 3445
num_examples: 5
download_size: 75244
dataset_size: 104370
- config_name: conceptual_physics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 127706
num_examples: 235
- name: validation
num_bytes: 14286
num_examples: 26
- name: dev
num_bytes: 2978
num_examples: 5
download_size: 82813
dataset_size: 144970
- config_name: econometrics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 136916
num_examples: 114
- name: validation
num_bytes: 14730
num_examples: 12
- name: dev
num_bytes: 4794
num_examples: 5
download_size: 86025
dataset_size: 156440
- config_name: electrical_engineering
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 80296
num_examples: 145
- name: validation
num_bytes: 9138
num_examples: 16
- name: dev
num_bytes: 2824
num_examples: 5
download_size: 62008
dataset_size: 92258
- config_name: elementary_mathematics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 211831
num_examples: 378
- name: validation
num_bytes: 27305
num_examples: 41
- name: dev
num_bytes: 4252
num_examples: 5
download_size: 131272
dataset_size: 243388
- config_name: formal_logic
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 146101
num_examples: 126
- name: validation
num_bytes: 18160
num_examples: 14
- name: dev
num_bytes: 4917
num_examples: 5
download_size: 77094
dataset_size: 169178
- config_name: global_facts
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 55953
num_examples: 100
- name: validation
num_bytes: 5672
num_examples: 10
- name: dev
num_bytes: 3547
num_examples: 5
download_size: 0
dataset_size: 65172
- config_name: high_school_biology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 338155
num_examples: 310
- name: validation
num_bytes: 33555
num_examples: 32
- name: dev
num_bytes: 4992
num_examples: 5
download_size: 200936
dataset_size: 376702
- config_name: high_school_chemistry
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 170771
num_examples: 203
- name: validation
num_bytes: 20157
num_examples: 22
- name: dev
num_bytes: 3387
num_examples: 5
download_size: 108321
dataset_size: 194315
- config_name: high_school_computer_science
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 139128
num_examples: 100
- name: validation
num_bytes: 10800
num_examples: 9
- name: dev
num_bytes: 9269
num_examples: 5
download_size: 99359
dataset_size: 159197
- config_name: high_school_european_history
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 799080
num_examples: 165
- name: validation
num_bytes: 88740
num_examples: 18
- name: dev
num_bytes: 34585
num_examples: 5
download_size: 503439
dataset_size: 922405
- config_name: high_school_geography
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 132655
num_examples: 198
- name: validation
num_bytes: 13612
num_examples: 22
- name: dev
num_bytes: 4597
num_examples: 5
download_size: 90939
dataset_size: 150864
- config_name: high_school_government_and_politics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 215224
num_examples: 193
- name: validation
num_bytes: 22888
num_examples: 21
- name: dev
num_bytes: 5640
num_examples: 5
download_size: 132695
dataset_size: 243752
- config_name: high_school_macroeconomics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 374553
num_examples: 390
- name: validation
num_bytes: 41817
num_examples: 43
- name: dev
num_bytes: 4310
num_examples: 5
download_size: 177813
dataset_size: 420680
- config_name: high_school_mathematics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 161023
num_examples: 270
- name: validation
num_bytes: 17224
num_examples: 29
- name: dev
num_bytes: 3682
num_examples: 5
download_size: 105683
dataset_size: 181929
- config_name: high_school_microeconomics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 241816
num_examples: 238
- name: validation
num_bytes: 24317
num_examples: 26
- name: dev
num_bytes: 4029
num_examples: 5
download_size: 125789
dataset_size: 270162
- config_name: high_school_physics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 175856
num_examples: 151
- name: validation
num_bytes: 19899
num_examples: 17
- name: dev
num_bytes: 4348
num_examples: 5
download_size: 109639
dataset_size: 200103
- config_name: high_school_psychology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 494955
num_examples: 545
- name: validation
num_bytes: 53743
num_examples: 60
- name: dev
num_bytes: 5900
num_examples: 5
download_size: 285730
dataset_size: 554598
- config_name: high_school_statistics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 333736
num_examples: 216
- name: validation
num_bytes: 30252
num_examples: 23
- name: dev
num_bytes: 7320
num_examples: 5
download_size: 191017
dataset_size: 371308
- config_name: high_school_us_history
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 883614
num_examples: 204
- name: validation
num_bytes: 93694
num_examples: 22
- name: dev
num_bytes: 26282
num_examples: 5
download_size: 533320
dataset_size: 1003590
- config_name: high_school_world_history
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 1126143
num_examples: 237
- name: validation
num_bytes: 135245
num_examples: 26
- name: dev
num_bytes: 14589
num_examples: 5
download_size: 662773
dataset_size: 1275977
- config_name: human_aging
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 145275
num_examples: 223
- name: validation
num_bytes: 15038
num_examples: 23
- name: dev
num_bytes: 3062
num_examples: 5
download_size: 99856
dataset_size: 163375
- config_name: human_sexuality
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 100379
num_examples: 131
- name: validation
num_bytes: 7585
num_examples: 12
- name: dev
num_bytes: 3504
num_examples: 5
download_size: 74540
dataset_size: 111468
- config_name: international_law
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 162013
num_examples: 121
- name: validation
num_bytes: 18937
num_examples: 13
- name: dev
num_bytes: 7290
num_examples: 5
download_size: 0
dataset_size: 188240
- config_name: jurisprudence
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 102393
num_examples: 108
- name: validation
num_bytes: 11049
num_examples: 11
- name: dev
num_bytes: 3754
num_examples: 5
download_size: 21545
dataset_size: 117196
- config_name: logical_fallacies
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 153973
num_examples: 163
- name: validation
num_bytes: 15857
num_examples: 18
- name: dev
num_bytes: 4919
num_examples: 5
download_size: 82298
dataset_size: 174749
- config_name: machine_learning
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 102745
num_examples: 112
- name: validation
num_bytes: 9797
num_examples: 11
- name: dev
num_bytes: 7448
num_examples: 5
download_size: 70870
dataset_size: 119990
- config_name: management
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 63772
num_examples: 103
- name: validation
num_bytes: 5671
num_examples: 11
- name: dev
num_bytes: 2677
num_examples: 5
download_size: 52323
dataset_size: 72120
- config_name: marketing
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 191635
num_examples: 234
- name: validation
num_bytes: 22377
num_examples: 25
- name: dev
num_bytes: 4734
num_examples: 5
download_size: 122877
dataset_size: 218746
- config_name: medical_genetics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 64177
num_examples: 100
- name: validation
num_bytes: 9298
num_examples: 11
- name: dev
num_bytes: 3405
num_examples: 5
download_size: 58337
dataset_size: 76880
- config_name: miscellaneous
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 443155
num_examples: 783
- name: validation
num_bytes: 42990
num_examples: 86
- name: dev
num_bytes: 1877
num_examples: 5
download_size: 283087
dataset_size: 488022
- config_name: moral_disputes
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 332269
num_examples: 346
- name: validation
num_bytes: 38501
num_examples: 38
- name: dev
num_bytes: 5222
num_examples: 5
download_size: 193075
dataset_size: 375992
- config_name: moral_scenarios
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 1061634
num_examples: 895
- name: validation
num_bytes: 120664
num_examples: 100
- name: dev
num_bytes: 5816
num_examples: 5
download_size: 283716
dataset_size: 1188114
- config_name: nutrition
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 281680
num_examples: 306
- name: validation
num_bytes: 25350
num_examples: 33
- name: dev
num_bytes: 6423
num_examples: 5
download_size: 168790
dataset_size: 313453
- config_name: philosophy
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 240333
num_examples: 311
- name: validation
num_bytes: 27480
num_examples: 34
- name: dev
num_bytes: 2986
num_examples: 5
download_size: 153970
dataset_size: 270799
- config_name: prehistory
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 267644
num_examples: 324
- name: validation
num_bytes: 30414
num_examples: 35
- name: dev
num_bytes: 5577
num_examples: 5
download_size: 172053
dataset_size: 303635
- config_name: professional_accounting
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 377751
num_examples: 282
- name: validation
num_bytes: 42879
num_examples: 31
- name: dev
num_bytes: 6331
num_examples: 5
download_size: 228950
dataset_size: 426961
- config_name: professional_law
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 5612166
num_examples: 1534
- name: validation
num_bytes: 604980
num_examples: 170
- name: dev
num_bytes: 19825
num_examples: 5
download_size: 3065337
dataset_size: 6236971
- config_name: professional_medicine
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 639421
num_examples: 272
- name: validation
num_bytes: 70186
num_examples: 31
- name: dev
num_bytes: 11017
num_examples: 5
download_size: 391893
dataset_size: 720624
- config_name: professional_psychology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 687869
num_examples: 612
- name: validation
num_bytes: 87912
num_examples: 69
- name: dev
num_bytes: 6693
num_examples: 5
download_size: 405705
dataset_size: 782474
- config_name: public_relations
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 89435
num_examples: 110
- name: validation
num_bytes: 14174
num_examples: 12
- name: dev
num_bytes: 4718
num_examples: 5
download_size: 0
dataset_size: 108327
- config_name: security_studies
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 632255
num_examples: 245
- name: validation
num_bytes: 69100
num_examples: 27
- name: dev
num_bytes: 16171
num_examples: 5
download_size: 0
dataset_size: 717526
- config_name: sociology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 204018
num_examples: 201
- name: validation
num_bytes: 22531
num_examples: 22
- name: dev
num_bytes: 5054
num_examples: 5
download_size: 9676
dataset_size: 231603
- config_name: us_foreign_policy
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 89965
num_examples: 100
- name: validation
num_bytes: 10270
num_examples: 11
- name: dev
num_bytes: 5111
num_examples: 5
download_size: 68974
dataset_size: 105346
- config_name: virology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 116211
num_examples: 166
- name: validation
num_bytes: 16273
num_examples: 18
- name: dev
num_bytes: 3185
num_examples: 5
download_size: 96586
dataset_size: 135669
- config_name: world_religions
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 77273
num_examples: 171
- name: validation
num_bytes: 8462
num_examples: 19
- name: dev
num_bytes: 2073
num_examples: 5
download_size: 61169
dataset_size: 87808
configs:
- config_name: abstract_algebra
data_files:
- split: test
path: abstract_algebra/test-*
- split: validation
path: abstract_algebra/validation-*
- split: dev
path: abstract_algebra/dev-*
- config_name: all
data_files:
- split: test
path: all/test-*
- split: validation
path: all/validation-*
- split: dev
path: all/dev-*
- config_name: anatomy
data_files:
- split: test
path: anatomy/test-*
- split: validation
path: anatomy/validation-*
- split: dev
path: anatomy/dev-*
- config_name: astronomy
data_files:
- split: test
path: astronomy/test-*
- split: validation
path: astronomy/validation-*
- split: dev
path: astronomy/dev-*
- config_name: business_ethics
data_files:
- split: test
path: business_ethics/test-*
- split: validation
path: business_ethics/validation-*
- split: dev
path: business_ethics/dev-*
- config_name: clinical_knowledge
data_files:
- split: test
path: clinical_knowledge/test-*
- split: validation
path: clinical_knowledge/validation-*
- split: dev
path: clinical_knowledge/dev-*
- config_name: college_biology
data_files:
- split: test
path: college_biology/test-*
- split: validation
path: college_biology/validation-*
- split: dev
path: college_biology/dev-*
- config_name: college_chemistry
data_files:
- split: test
path: college_chemistry/test-*
- split: validation
path: college_chemistry/validation-*
- split: dev
path: college_chemistry/dev-*
- config_name: college_computer_science
data_files:
- split: test
path: college_computer_science/test-*
- split: validation
path: college_computer_science/validation-*
- split: dev
path: college_computer_science/dev-*
- config_name: college_mathematics
data_files:
- split: test
path: college_mathematics/test-*
- split: validation
path: college_mathematics/validation-*
- split: dev
path: college_mathematics/dev-*
- config_name: college_medicine
data_files:
- split: test
path: college_medicine/test-*
- split: validation
path: college_medicine/validation-*
- split: dev
path: college_medicine/dev-*
- config_name: college_physics
data_files:
- split: test
path: college_physics/test-*
- split: validation
path: college_physics/validation-*
- split: dev
path: college_physics/dev-*
- config_name: computer_security
data_files:
- split: test
path: computer_security/test-*
- split: validation
path: computer_security/validation-*
- split: dev
path: computer_security/dev-*
- config_name: conceptual_physics
data_files:
- split: test
path: conceptual_physics/test-*
- split: validation
path: conceptual_physics/validation-*
- split: dev
path: conceptual_physics/dev-*
- config_name: econometrics
data_files:
- split: test
path: econometrics/test-*
- split: validation
path: econometrics/validation-*
- split: dev
path: econometrics/dev-*
- config_name: electrical_engineering
data_files:
- split: test
path: electrical_engineering/test-*
- split: validation
path: electrical_engineering/validation-*
- split: dev
path: electrical_engineering/dev-*
- config_name: elementary_mathematics
data_files:
- split: test
path: elementary_mathematics/test-*
- split: validation
path: elementary_mathematics/validation-*
- split: dev
path: elementary_mathematics/dev-*
- config_name: formal_logic
data_files:
- split: test
path: formal_logic/test-*
- split: validation
path: formal_logic/validation-*
- split: dev
path: formal_logic/dev-*
- config_name: global_facts
data_files:
- split: test
path: global_facts/test-*
- split: validation
path: global_facts/validation-*
- split: dev
path: global_facts/dev-*
- config_name: high_school_biology
data_files:
- split: test
path: high_school_biology/test-*
- split: validation
path: high_school_biology/validation-*
- split: dev
path: high_school_biology/dev-*
- config_name: high_school_chemistry
data_files:
- split: test
path: high_school_chemistry/test-*
- split: validation
path: high_school_chemistry/validation-*
- split: dev
path: high_school_chemistry/dev-*
- config_name: high_school_computer_science
data_files:
- split: test
path: high_school_computer_science/test-*
- split: validation
path: high_school_computer_science/validation-*
- split: dev
path: high_school_computer_science/dev-*
- config_name: high_school_european_history
data_files:
- split: test
path: high_school_european_history/test-*
- split: validation
path: high_school_european_history/validation-*
- split: dev
path: high_school_european_history/dev-*
- config_name: high_school_geography
data_files:
- split: test
path: high_school_geography/test-*
- split: validation
path: high_school_geography/validation-*
- split: dev
path: high_school_geography/dev-*
- config_name: high_school_government_and_politics
data_files:
- split: test
path: high_school_government_and_politics/test-*
- split: validation
path: high_school_government_and_politics/validation-*
- split: dev
path: high_school_government_and_politics/dev-*
- config_name: high_school_macroeconomics
data_files:
- split: test
path: high_school_macroeconomics/test-*
- split: validation
path: high_school_macroeconomics/validation-*
- split: dev
path: high_school_macroeconomics/dev-*
- config_name: high_school_mathematics
data_files:
- split: test
path: high_school_mathematics/test-*
- split: validation
path: high_school_mathematics/validation-*
- split: dev
path: high_school_mathematics/dev-*
- config_name: high_school_microeconomics
data_files:
- split: test
path: high_school_microeconomics/test-*
- split: validation
path: high_school_microeconomics/validation-*
- split: dev
path: high_school_microeconomics/dev-*
- config_name: high_school_physics
data_files:
- split: test
path: high_school_physics/test-*
- split: validation
path: high_school_physics/validation-*
- split: dev
path: high_school_physics/dev-*
- config_name: high_school_psychology
data_files:
- split: test
path: high_school_psychology/test-*
- split: validation
path: high_school_psychology/validation-*
- split: dev
path: high_school_psychology/dev-*
- config_name: high_school_statistics
data_files:
- split: test
path: high_school_statistics/test-*
- split: validation
path: high_school_statistics/validation-*
- split: dev
path: high_school_statistics/dev-*
- config_name: high_school_us_history
data_files:
- split: test
path: high_school_us_history/test-*
- split: validation
path: high_school_us_history/validation-*
- split: dev
path: high_school_us_history/dev-*
- config_name: high_school_world_history
data_files:
- split: test
path: high_school_world_history/test-*
- split: validation
path: high_school_world_history/validation-*
- split: dev
path: high_school_world_history/dev-*
- config_name: human_aging
data_files:
- split: test
path: human_aging/test-*
- split: validation
path: human_aging/validation-*
- split: dev
path: human_aging/dev-*
- config_name: human_sexuality
data_files:
- split: test
path: human_sexuality/test-*
- split: validation
path: human_sexuality/validation-*
- split: dev
path: human_sexuality/dev-*
- config_name: international_law
data_files:
- split: test
path: international_law/test-*
- split: validation
path: international_law/validation-*
- split: dev
path: international_law/dev-*
- config_name: jurisprudence
data_files:
- split: test
path: jurisprudence/test-*
- split: validation
path: jurisprudence/validation-*
- split: dev
path: jurisprudence/dev-*
- config_name: logical_fallacies
data_files:
- split: test
path: logical_fallacies/test-*
- split: validation
path: logical_fallacies/validation-*
- split: dev
path: logical_fallacies/dev-*
- config_name: machine_learning
data_files:
- split: test
path: machine_learning/test-*
- split: validation
path: machine_learning/validation-*
- split: dev
path: machine_learning/dev-*
- config_name: management
data_files:
- split: test
path: management/test-*
- split: validation
path: management/validation-*
- split: dev
path: management/dev-*
- config_name: marketing
data_files:
- split: test
path: marketing/test-*
- split: validation
path: marketing/validation-*
- split: dev
path: marketing/dev-*
- config_name: medical_genetics
data_files:
- split: test
path: medical_genetics/test-*
- split: validation
path: medical_genetics/validation-*
- split: dev
path: medical_genetics/dev-*
- config_name: miscellaneous
data_files:
- split: test
path: miscellaneous/test-*
- split: validation
path: miscellaneous/validation-*
- split: dev
path: miscellaneous/dev-*
- config_name: moral_disputes
data_files:
- split: test
path: moral_disputes/test-*
- split: validation
path: moral_disputes/validation-*
- split: dev
path: moral_disputes/dev-*
- config_name: moral_scenarios
data_files:
- split: test
path: moral_scenarios/test-*
- split: validation
path: moral_scenarios/validation-*
- split: dev
path: moral_scenarios/dev-*
- config_name: nutrition
data_files:
- split: test
path: nutrition/test-*
- split: validation
path: nutrition/validation-*
- split: dev
path: nutrition/dev-*
- config_name: philosophy
data_files:
- split: test
path: philosophy/test-*
- split: validation
path: philosophy/validation-*
- split: dev
path: philosophy/dev-*
- config_name: prehistory
data_files:
- split: test
path: prehistory/test-*
- split: validation
path: prehistory/validation-*
- split: dev
path: prehistory/dev-*
- config_name: professional_accounting
data_files:
- split: test
path: professional_accounting/test-*
- split: validation
path: professional_accounting/validation-*
- split: dev
path: professional_accounting/dev-*
- config_name: professional_law
data_files:
- split: test
path: professional_law/test-*
- split: validation
path: professional_law/validation-*
- split: dev
path: professional_law/dev-*
- config_name: professional_medicine
data_files:
- split: test
path: professional_medicine/test-*
- split: validation
path: professional_medicine/validation-*
- split: dev
path: professional_medicine/dev-*
- config_name: professional_psychology
data_files:
- split: test
path: professional_psychology/test-*
- split: validation
path: professional_psychology/validation-*
- split: dev
path: professional_psychology/dev-*
- config_name: public_relations
data_files:
- split: test
path: public_relations/test-*
- split: validation
path: public_relations/validation-*
- split: dev
path: public_relations/dev-*
- config_name: security_studies
data_files:
- split: test
path: security_studies/test-*
- split: validation
path: security_studies/validation-*
- split: dev
path: security_studies/dev-*
- config_name: sociology
data_files:
- split: test
path: sociology/test-*
- split: validation
path: sociology/validation-*
- split: dev
path: sociology/dev-*
- config_name: us_foreign_policy
data_files:
- split: test
path: us_foreign_policy/test-*
- split: validation
path: us_foreign_policy/validation-*
- split: dev
path: us_foreign_policy/dev-*
- config_name: virology
data_files:
- split: test
path: virology/test-*
- split: validation
path: virology/validation-*
- split: dev
path: virology/dev-*
- config_name: world_religions
data_files:
- split: test
path: world_religions/test-*
- split: validation
path: world_religions/validation-*
- split: dev
path: world_religions/dev-*
---
# Dataset Card for MMLU Greek
The MMLU Greek dataset is a set of 15858 examples from the MMLU dataset [available from here and here], machine-translated into Greek. The original dataset consists of multiple-choice questions from 57 tasks including elementary mathematics, US history, computer science, law, etc.
## Dataset Details
### Dataset Description
- **Curated by:** ILSP/Athena RC
- **Language(s) (NLP):** el
- **License:** cc-by-nc-sa-4.0
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This dataset is the result of machine translation.
## Dataset Card Contact
https://www.athenarc.gr/en/ilsp
|
bigcode/bigcodebench-hard | bigcode | "2025-02-23T16:42:46Z" | 10,416 | 2 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-14T14:50:33Z" | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: complete_prompt
dtype: string
- name: instruct_prompt
dtype: string
- name: canonical_solution
dtype: string
- name: code_prompt
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
- name: doc_struct
dtype: string
- name: libs
dtype: string
- name: q_idx
dtype: int64
- name: question
dtype: string
- name: score
dtype: float64
- name: _id
dtype: string
splits:
- name: v0.1.0_hf
num_bytes: 1271624
num_examples: 148
- name: v0.1.1
num_bytes: 1271607
num_examples: 148
- name: v0.1.2
num_bytes: 1271812
num_examples: 148
- name: v0.1.3
num_bytes: 1271812
num_examples: 148
- name: v0.1.4
num_bytes: 1272012
num_examples: 148
download_size: 2758366
dataset_size: 6358867
configs:
- config_name: default
data_files:
- split: v0.1.0_hf
path: data/v0.1.0_hf-*
- split: v0.1.1
path: data/v0.1.1-*
- split: v0.1.2
path: data/v0.1.2-*
- split: v0.1.3
path: data/v0.1.3-*
- split: v0.1.4
path: data/v0.1.4-*
---
|
AVS-Net/knee_fast_mri | AVS-Net | "2023-08-25T11:30:20Z" | 10,377 | 1 | [
"license:afl-3.0",
"size_categories:100M<n<1B",
"region:us",
"medical"
] | null | "2023-08-12T01:09:50Z" | ---
license: afl-3.0
tags:
- medical
size_categories:
- 100M<n<1B
---
# Dataset for AVS-Net Pre-training
The dataset utilized in the pre-training of the AVS-Net: Attention-based Variable Splitting Network for P-MRI Acceleration model, developed by Y Zhang, J Li, Z Wang, J Duan, and J Li, incorporates data from five distinct protocol sequences. These are:
- (coronal_pd) Coronal Spin Density-weighted without Fat Suppression
- (coronal_pd_fs) Coronal Spin Density-weighted with Fat Suppression
- (sagittal_pd) Sagittal Spin Density-weighted
- (sagittal_t2) Sagittal T2-weighted with Fat Suppression
- (axial_t2) Axial T2-weighted with Fat Suppression
The dataset is organized case by case, with 20 cases per protocol. Each case comprises two types of files: rawdata*.mat and espirit*.mat. The dataset's structure can be outlined as follows:
## Dataset architecture:
- name: /rds/projects/d/duanj-ai-in-medical-imaging/knee_fast_mri
- Protocol: [coronal_pd, coronal_pd_fs, sagittal_pd, sagittal_t2, axial_t2]
Approximately 40 slices per case, each slice containing 15 channels, with a height and width (HW) of (640, 368)
```
knee_nyu
- axial_t2 coronal_pd(X) coronal_pd_fs sagittal_pd sagittal_t2
| | | | |
- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [11, 12, 13, 14, 15, 16, 17, 18, 19, 20] masks
| |
- [train] [val]
| |
- espirit*.mat(1-40), rawdata*.mat(1-40) *_masks.mat
```
In this structure, each protocol has approximately 40 slices, each consisting of 15 channels. The dimensions of the data are 640x368 (height x width). For each protocol, the slices are further divided into two groups: the training set ([train]) and the validation set ([val]). The training set includes the espirit*.mat and rawdata*.mat files for each slice, while the validation set contains *_masks.mat files.
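The layout above can be sketched as a small path helper. This is an illustration only: the root directory name, the `train`/`val` folder names, and the exact `rawdata{i}.mat`/`espirit{i}.mat` naming of the `*` wildcard are assumptions inferred from the tree, not guaranteed by the dataset.

```python
from pathlib import Path

def case_files(root, protocol, case, split="train", n_slices=40):
    """Enumerate the expected (rawdata, espirit) file pairs for one case.

    Assumes the directory layout root/protocol/case/split/ and the
    file-naming pattern rawdata<i>.mat / espirit<i>.mat shown in the tree.
    """
    base = Path(root) / protocol / str(case) / split
    return [(base / f"rawdata{i}.mat", base / f"espirit{i}.mat")
            for i in range(1, n_slices + 1)]

pairs = case_files("knee_nyu", "coronal_pd_fs", case=3)
print(pairs[0][0])  # knee_nyu/coronal_pd_fs/3/train/rawdata1.mat (on POSIX)
```

A helper like this makes it easy to verify a download is complete before training, by checking each expected path with `Path.exists()`.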
## Dataset Usage
> For a standalone knee dataset download, use `git lfs` (<https://git-lfs.com/>) to download from the `huggingface` datasets (<https://huggingface.co/datasets/AVS-Net/knee_fast_mri>):
```bash
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone -j8 [email protected]:datasets/AVS-Net/knee_fast_mri
```
## Known Issues and Resolutions
- 1. Network Connection Issue
For enhanced network connection quality, it is recommended to employ the `ssh` protocol instead of `https`.
```bash
# Rather than utilizing `https://huggingface.co/datasets/AVS-Net/knee_fast_mri`
# Clone the repository using `[email protected]:datasets/AVS-Net/knee_fast_mri`
# As an example:
git clone -j8 [email protected]:datasets/AVS-Net/knee_fast_mri
```
- 2. Interruptions During Download
Certain error messages may appear during the download process due to interruptions. These errors can include:
```
error: ... : cannot add to the index - missing --add option?
batch response: Post ... : read: connection reset by peer
error: failed to fetch some objects from 'https://hf.co/datasets/AVS-Net/knee_fast_mri.git/info/lfs'
```
Following the instructions below allows for the handling of these interruptions.
```bash
# Navigate (`cd`) to the directory containing the `lfs` folder
# Instead of using `git pull`,
# Use `git lfs pull` to resume the download progress for `lfs` projects
git lfs pull
```
Please note that this process will resume the download from where it was interrupted, thereby ensuring the integrity of your downloaded data.
|
mesolitica/mixtral-magicoder | mesolitica | "2024-09-30T15:33:24Z" | 10,359 | 2 | [
"language:en",
"language:ms",
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"conversational"
] | "2024-01-11T00:04:30Z" | ---
license: mit
task_categories:
- conversational
language:
- en
- ms
---
# Mixtral Magicoder: Source Code Is All You Need on various programming languages
We sampled programming languages from https://huggingface.co/datasets/bigcode/the-stack-dedup and pushed to https://huggingface.co/datasets/malaysia-ai/starcoderdata-sample
After that, we used the [Magicoder: Source Code Is All You Need](https://github.com/ise-uiuc/magicoder) template, targeting at least 10k rows for each programming language.
1. C++, 10747 rows
2. C#, 10193 rows
3. CUDA, 13843 rows
4. Dockerfile, 13286 rows
5. Go, 10143 rows
6. Java, 11221 rows
7. JavaScript, 11758 rows
8. Kotlin, 12790 rows
9. PHP, 10176 rows
10. Python, other than `pandas` and `sklearn` and `matplotlib` and `plotly`, 10925 rows
11. Python, must have `pandas` or `sklearn` or `matplotlib` or `plotly`, focused on data analytics, 53959 rows
12. Ruby, 10201 rows
13. Rust, 10271 rows
14. Scala, 10017 rows
15. Shell, 10848 rows
16. SQL, 27668 rows
17. Swift, 10187 rows
18. TypeScript, 14248 rows
Source code at https://github.com/mesolitica/malaysian-dataset/tree/master/chatbot/mixtral-magicoder
## precaution
1. There is no validation for the output generated.
2. Always filter short answers.
## Filtered version
1. Dropped short answers.
2. Dropped answers containing `code snippet`.
Uploaded at [postfilter.jsonl](postfilter.jsonl).
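The two filtering steps above can be sketched as a simple row predicate. Note this is a hypothetical reconstruction: the field name `answer`, the length threshold, and the exact matching rule are assumptions, not the script actually used to produce `postfilter.jsonl`.

```python
def keep(row, min_chars=200):
    """Post-filter sketch: drop short answers and answers that mention 'code snippet'.

    The `answer` key and the 200-character threshold are illustrative assumptions.
    """
    ans = row.get("answer", "")
    return len(ans) >= min_chars and "code snippet" not in ans.lower()

rows = [
    {"answer": "short"},                                        # dropped: too short
    {"answer": "Here is a code snippet " + "x" * 300},          # dropped: phrase match
    {"answer": "A full, self-contained solution. " + "y" * 300},  # kept
]
filtered = [r for r in rows if keep(r)]
print(len(filtered))  # 1
```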
## Infrastructure specification
1. 5x of 4x A100s, NC96ads A100 v4, spot instance, total run is ~48 hours, 48 * 1.954 (US East, https://instances.vantage.sh/azure/vm/nc96ads-v4) * 5 ~= 469 USD.
2. HuggingFace Text Inference Engine. |
gsarti/flores_101 | gsarti | "2022-10-27T08:37:36Z" | 10,352 | 26 | [
"task_categories:text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"source_datasets:extended|flores",
"language:af",
"language:am",
"language:ar",
"language:hy",
"language:as",
"language:ast",
"language:az",
"language:be",
"language:bn",
"language:bs",
"language:bg",
"language:my",
"language:ca",
"language:ceb",
"language:zho",
"language:hr",
"language:cs",
"language:da",
"language:nl",
"language:en",
"language:et",
"language:tl",
"language:fi",
"language:fr",
"language:ff",
"language:gl",
"language:lg",
"language:ka",
"language:de",
"language:el",
"language:gu",
"language:ha",
"language:he",
"language:hi",
"language:hu",
"language:is",
"language:ig",
"language:id",
"language:ga",
"language:it",
"language:ja",
"language:jv",
"language:kea",
"language:kam",
"language:kn",
"language:kk",
"language:km",
"language:ko",
"language:ky",
"language:lo",
"language:lv",
"language:ln",
"language:lt",
"language:luo",
"language:lb",
"language:mk",
"language:ms",
"language:ml",
"language:mt",
"language:mi",
"language:mr",
"language:mn",
"language:ne",
"language:ns",
"language:no",
"language:ny",
"language:oc",
"language:or",
"language:om",
"language:ps",
"language:fa",
"language:pl",
"language:pt",
"language:pa",
"language:ro",
"language:ru",
"language:sr",
"language:sn",
"language:sd",
"language:sk",
"language:sl",
"language:so",
"language:ku",
"language:es",
"language:sw",
"language:sv",
"language:tg",
"language:ta",
"language:te",
"language:th",
"language:tr",
"language:uk",
"language:umb",
"language:ur",
"language:uz",
"language:vi",
"language:cy",
"language:wo",
"language:xh",
"language:yo",
"language:zu",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"modality:tabular",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2106.03193",
"region:us",
"conditional-text-generation"
] | [
"text-generation",
"translation"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- expert-generated
language:
- af
- am
- ar
- hy
- as
- ast
- az
- be
- bn
- bs
- bg
- my
- ca
- ceb
- zho
- hr
- cs
- da
- nl
- en
- et
- tl
- fi
- fr
- ff
- gl
- lg
- ka
- de
- el
- gu
- ha
- he
- hi
- hu
- is
- ig
- id
- ga
- it
- ja
- jv
- kea
- kam
- kn
- kk
- km
- ko
- ky
- lo
- lv
- ln
- lt
- luo
- lb
- mk
- ms
- ml
- mt
- mi
- mr
- mn
- ne
- ns
- 'no'
- ny
- oc
- or
- om
- ps
- fa
- pl
- pt
- pa
- ro
- ru
- sr
- sn
- sd
- sk
- sl
- so
- ku
- es
- sw
- sv
- tg
- ta
- te
- th
- tr
- uk
- umb
- ur
- uz
- vi
- cy
- wo
- xh
- yo
- zu
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
- translation
size_categories:
- unknown
source_datasets:
- extended|flores
task_categories:
- text-generation
- translation
task_ids: []
paperswithcode_id: flores
pretty_name: flores101
tags:
- conditional-text-generation
---
# Dataset Card for Flores 101
## Table of Contents
- [Dataset Card for Flores 101](#dataset-card-for-flores-101)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Home:** [WMT](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html)
- **Repository:** [Github](https://github.com/facebookresearch/flores)
- **Blogpost:** [FAIR](https://ai.facebook.com/blog/the-flores-101-data-set-helping-build-better-translation-systems-around-the-world)
- **Paper:** [Arxiv](https://arxiv.org/abs/2106.03193)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Leaderboard** [Dynabench](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL))
### Dataset Summary
FLORES is a benchmark dataset for machine translation between English and low-resource languages.
Abstract from the original paper:
> One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated in 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
**Disclaimer**: *The Flores-101 dataset is hosted by Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).*
### Supported Tasks and Leaderboards
#### Multilingual Machine Translation
Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html).
### Languages
The dataset contains parallel sentences for 101 languages, as mentioned in the original [Github](https://github.com/facebookresearch/flores/blob/master/README.md) page for the project. Languages are identified with the ISO 639-3 code (e.g. `eng`, `fra`, `rus`) as in the original dataset.
**New:** Use the configuration `all` to access the full set of parallel sentences for all the available languages in a single command.
## Dataset Structure
### Data Instances
A sample from the `dev` split for the Russian language (`rus` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
```python
{
'id': 1,
'sentence': 'В понедельник ученые из Медицинской школы Стэнфордского университета объявили об изобретении нового диагностического инструмента, который может сортировать клетки по их типу; это маленький чип, который можно напечатать, используя стандартный струйный принтер примерно за 1 цент США.',
'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet',
'domain': 'wikinews',
'topic': 'health',
'has_image': 0,
'has_hyperlink': 0
}
```
The text is provided as in the original dataset, without further preprocessing or tokenization.
### Data Fields
- `id`: Row number for the data entry, starting at 1.
- `sentence`: The full sentence in the specific language.
- `URL`: The URL for the English article from which the sentence was extracted.
- `domain`: The domain of the sentence.
- `topic`: The topic of the sentence.
- `has_image`: Whether the original article contains an image.
- `has_hyperlink`: Whether the sentence contains a hyperlink.
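Because all configurations are multilingually aligned, parallel pairs for any language combination can be built with a simple join on `id`. The sketch below uses toy rows for illustration; in practice the rows would come from `load_dataset("gsarti/flores_101", <config>)`.

```python
def align(rows_a, rows_b):
    """Pair sentences from two language configs by their shared `id` field."""
    by_id = {r["id"]: r["sentence"] for r in rows_b}
    return [(r["sentence"], by_id[r["id"]])
            for r in rows_a if r["id"] in by_id]

# Toy rows mimicking the schema shown above (real rows come from the dataset).
eng = [{"id": 1, "sentence": "On Monday, scientists announced a new diagnostic tool."}]
rus = [{"id": 1, "sentence": "В понедельник ученые объявили об изобретении нового диагностического инструмента."}]

pairs = align(eng, rus)
print(pairs[0][0])  # On Monday, scientists announced a new diagnostic tool.
```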
### Data Splits
| config| `dev`| `devtest`|
|-----------------:|-----:|---------:|
|all configurations| 997| 1012|
### Dataset Creation
Please refer to the original article [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The original authors of FLORES-101 are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [[email protected]](mailto:[email protected]).
### Licensing Information
Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@inproceedings{flores101,
title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela},
journal={arXiv preprint arXiv:2106.03193},
year={2021}
}
``` |
PromptEval/PromptEval_MMLU_full | PromptEval | "2024-06-07T05:40:35Z" | 10,333 | 3 | [
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2405.17202",
"region:us"
] | [
"question-answering"
] | "2024-06-04T02:04:07Z" | ---
language:
- en
license: mit
task_categories:
- question-answering
pretty_name: MMLU_PromptEval_full
dataset_info:
- config_name: format_0
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967594
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40965182
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729214
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728930
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40820070
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827213
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828810
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54217882
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50624184
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 157447067
dataset_size: 635714527
- config_name: format_104
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711864
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41711812
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 42245461
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 42133203
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 42133166
num_examples: 14042
- name: google_flan_ul2
num_bytes: 42133151
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 42231264
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571413
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41571963
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55994487
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49139088
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 42231421
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 42245466
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 42231422
num_examples: 14042
download_size: 157480740
dataset_size: 650997049
- config_name: format_110
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40279584
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40279558
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40279548
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223388
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998898
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998748
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40201992
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40223212
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40221924
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55066171
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45424454
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223399
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223416
num_examples: 14042
download_size: 155330846
dataset_size: 622866442
- config_name: format_111
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40953598
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40953548
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40953434
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223388
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998783
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998744
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998745
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40210433
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40897140
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40894517
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55127411
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47099180
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223409
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223369
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223403
num_examples: 14042
download_size: 156101239
dataset_size: 627979102
- config_name: format_112
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40279584
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40279542
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40279442
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223363
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39999032
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998746
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40192596
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40223215
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40221355
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55132374
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46449371
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223381
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223416
num_examples: 14042
download_size: 155526690
dataset_size: 623947567
- config_name: format_113
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40279584
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40279532
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40279564
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40897385
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40673105
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40672763
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40672761
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40872076
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40223209
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40221324
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55388115
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47220821
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40897425
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40897379
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40897419
num_examples: 14042
download_size: 156390863
dataset_size: 630372462
- config_name: format_120
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560415
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560398
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560300
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40897385
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40673160
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40672761
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40672762
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40874904
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504135
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503418
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55380840
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46797900
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40897425
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40897383
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40897422
num_examples: 14042
download_size: 156216254
dataset_size: 631350608
- config_name: format_122
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335706
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335338
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279541
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40054957
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054913
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054915
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40268648
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279354
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278615
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55135251
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 40505457
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40280168
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279574
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 153994844
dataset_size: 618757763
- config_name: format_123
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560350
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40556619
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504219
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279629
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279585
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279587
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40484015
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504029
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502461
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55220346
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44761658
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504207
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155285821
dataset_size: 626005630
- config_name: format_124
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560369
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560404
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504219
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279630
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279585
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279587
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40483970
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504085
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503258
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55215732
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44726090
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504207
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155345465
dataset_size: 625970072
- config_name: format_128
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40785085
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40785030
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40784770
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728884
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504276
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504257
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504259
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40688280
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40728660
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40727455
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54720939
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 42252429
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728949
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728911
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728918
num_examples: 14042
download_size: 155001760
dataset_size: 625901102
- config_name: format_132
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40559935
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40558382
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504228
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279635
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279586
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40493203
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503859
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40500771
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55052749
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44164542
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504180
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155238440
dataset_size: 625249569
- config_name: format_133
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560309
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560176
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728919
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504279
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504256
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504284
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40716064
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503997
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502733
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55231757
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46323040
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728918
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728892
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728928
num_examples: 14042
download_size: 155738281
dataset_size: 629386965
- config_name: format_138
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40785085
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40784996
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40784820
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223388
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998795
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998745
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40206142
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40728481
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40726774
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55064973
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44904634
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223409
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223352
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223416
num_examples: 14042
download_size: 155377726
dataset_size: 624875754
- config_name: format_140
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560373
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560227
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504218
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279779
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279588
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40499008
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504156
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502413
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54815818
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41795939
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504235
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 154778755
dataset_size: 622654264
- config_name: format_141
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335635
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335487
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504212
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279788
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279588
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40483553
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279453
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40277138
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54647069
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41297784
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504239
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504243
num_examples: 14042
download_size: 154461518
dataset_size: 620847771
- config_name: format_144
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40785084
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40785015
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40784999
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728919
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504277
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504273
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504279
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40717537
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40728674
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40727846
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55224114
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45996610
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728918
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728904
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728928
num_examples: 14042
download_size: 155793162
dataset_size: 630178377
- config_name: format_147
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335678
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335677
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223379
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998947
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998748
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40204329
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279406
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278098
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55035624
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45279928
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223399
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223416
num_examples: 14042
download_size: 155417725
dataset_size: 622974531
- config_name: format_148
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40279584
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40279497
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40279503
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504228
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279626
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279610
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40497655
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40223239
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40221976
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55099634
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45737135
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504198
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504246
num_examples: 14042
download_size: 155450931
dataset_size: 625473961
- config_name: format_149
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560344
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560304
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728891
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504360
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504258
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504256
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40717072
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504051
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503067
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54834472
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41379735
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728899
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728911
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728918
num_examples: 14042
download_size: 154676676
dataset_size: 624047962
- config_name: format_154
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560309
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40558799
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504216
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279773
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279588
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40492814
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503961
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40501498
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55232920
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44742140
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504235
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155486617
dataset_size: 626008762
- config_name: format_155
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560364
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560347
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504216
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279783
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279588
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40492751
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504001
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502618
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55210353
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44731872
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504235
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155421319
dataset_size: 625978648
- config_name: format_158
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335687
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335707
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728891
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504337
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504257
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504284
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40708783
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279337
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278135
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55287435
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45598527
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728918
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728904
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728928
num_examples: 14042
download_size: 155618408
dataset_size: 627587882
- config_name: format_16
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967593
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966365
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40097037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998773
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998746
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40096278
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827249
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40830025
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52729917
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49578812
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40097038
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40097037
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40097038
num_examples: 14042
download_size: 156150163
dataset_size: 628078470
- config_name: format_161
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40111080
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40111026
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40110644
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728887
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504418
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504263
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504256
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40705547
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40054739
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40053758
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54828017
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41605522
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728912
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728911
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728918
num_examples: 14042
download_size: 154606109
dataset_size: 622008898
- config_name: format_162
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560418
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560399
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560409
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279556
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40054964
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054912
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054914
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40268507
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504127
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40501945
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54972493
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 40470996
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40280064
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279561
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 154102815
dataset_size: 619682839
- config_name: format_163
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335741
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335734
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504200
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279635
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279584
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279586
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40457977
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279439
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40277704
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54838336
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41711454
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504276
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504226
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 154434821
dataset_size: 621427900
- config_name: format_166
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560412
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560368
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560405
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40728892
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40504284
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40504274
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40504289
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40685090
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504049
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503191
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55264667
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46358311
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40728928
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40728879
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40728928
num_examples: 14042
download_size: 155841492
dataset_size: 629424967
- config_name: format_169
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335741
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335724
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335748
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279556
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40054965
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054912
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054914
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40263801
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279405
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40277972
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55045662
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46792988
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279584
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279528
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 155797636
dataset_size: 624950074
- config_name: format_170
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560369
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560398
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279556
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40054965
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054912
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054914
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40263756
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503989
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503292
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55057031
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46797857
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279584
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279528
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 155862471
dataset_size: 626090149
- config_name: format_171
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560371
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560342
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504238
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279598
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279603
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279592
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40488262
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504022
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40503263
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55385449
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47296473
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504245
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504235
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504246
num_examples: 14042
download_size: 156052645
dataset_size: 628714352
- config_name: format_181
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40111080
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40111001
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40110559
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279550
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40055185
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054913
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054919
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40273475
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40054673
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40053461
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55103221
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 41509369
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279568
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279567
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279584
num_examples: 14042
download_size: 154178164
dataset_size: 618610125
- config_name: format_182
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335724
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40334745
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279533
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40055183
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054913
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054919
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40263839
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279455
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278146
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55051777
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46615573
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279584
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279540
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 155748431
dataset_size: 624778257
- config_name: format_183
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335739
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335445
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279533
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40055180
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054913
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054919
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40263893
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279402
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278633
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55070331
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46604294
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279584
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279540
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279574
num_examples: 14042
download_size: 155852388
dataset_size: 624786732
- config_name: format_19
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40223416
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40223376
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40222650
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40771052
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40673111
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40672763
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40765930
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40097011
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40097145
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53156206
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 51270764
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40771061
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40771053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40771054
num_examples: 14042
download_size: 156761207
dataset_size: 630485336
- config_name: format_190
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40785085
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40784967
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40784555
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41178233
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40954007
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40953605
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40953600
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41162221
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40728514
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40727351
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55109317
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 42375126
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41178256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41178237
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41178260
num_examples: 14042
download_size: 155377523
dataset_size: 630031334
- config_name: format_197
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41459100
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41459039
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41458996
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504219
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279791
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279587
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40496016
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41402553
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41401531
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54846834
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 40714502
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504237
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504246
num_examples: 14042
download_size: 154868007
dataset_size: 626094481
- config_name: format_20
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40223416
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40223397
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40223321
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40097037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998904
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998746
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40092467
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40097025
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40097395
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52838355
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50109373
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40097045
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40097038
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40097038
num_examples: 14042
download_size: 155991760
dataset_size: 624289301
- config_name: format_200
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41234429
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41234318
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41234380
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504219
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279790
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279587
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40487316
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41177769
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41176507
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55272934
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 43567817
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504246
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504237
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 155384819
dataset_size: 628241389
- config_name: format_204
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335741
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335678
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335718
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504210
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279665
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279603
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40500497
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279371
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40278188
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55101979
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44086901
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504245
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504224
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504253
num_examples: 14042
download_size: 155347670
dataset_size: 624109857
- config_name: format_207
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40785096
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40785026
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40785068
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40504221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40279714
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40279605
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40279584
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40501997
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40728579
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40727946
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54799337
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 40770309
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40504256
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40504239
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40504256
num_examples: 14042
download_size: 154682060
dataset_size: 622739233
- config_name: format_214
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560338
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560415
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279547
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40055044
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054933
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054912
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40275417
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504083
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40501348
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55005719
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 43167600
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279574
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279543
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279584
num_examples: 14042
download_size: 154813848
dataset_size: 622418470
- config_name: format_215
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335648
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335713
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40223386
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998879
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998765
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40219474
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279483
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40276724
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55071274
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 43498892
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40223409
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40223375
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40223413
num_examples: 14042
download_size: 154883189
dataset_size: 621242931
- config_name: format_222
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560342
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560397
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40279547
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40055059
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40054912
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40054912
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40267272
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503990
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502688
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54979129
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 44808884
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40279577
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40279535
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40279584
num_examples: 14042
download_size: 155181948
dataset_size: 624026252
- config_name: format_226
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335741
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335610
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335625
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40054875
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39830468
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39830256
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39830240
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40048967
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279501
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40277282
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55366016
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45574600
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40054905
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40054872
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40054909
num_examples: 14042
download_size: 155107838
dataset_size: 622263867
- config_name: format_227
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560413
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560299
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560415
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728970
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728929
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728954
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40820530
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40504048
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40500823
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54858804
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47635565
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827223
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156196838
dataset_size: 631496637
- config_name: format_229
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40335752
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40335700
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40335721
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728976
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728931
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728951
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40820046
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40279424
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40277490
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54882233
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47430267
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827230
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827225
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 155945133
dataset_size: 630192388
- config_name: format_230
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40560424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40560347
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40560416
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827226
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729064
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728932
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728950
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40819455
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40503983
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40502990
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54846909
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47028153
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827230
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156108867
dataset_size: 630878522
- config_name: format_241
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967580
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967576
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728931
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728948
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728929
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40819341
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828260
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54809574
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47497186
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827220
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156531566
dataset_size: 633180077
- config_name: format_243
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967484
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967540
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827223
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729042
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728944
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40806295
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827255
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828696
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54807421
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47380233
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827224
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827222
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156334066
dataset_size: 633048362
- config_name: format_244
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967477
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967432
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827223
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729045
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728944
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40806333
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827210
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827697
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54815649
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47338086
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827224
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827222
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156299879
dataset_size: 633013325
- config_name: format_248
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967458
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967528
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827220
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728935
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728959
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728951
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40821376
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827179
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827979
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54821317
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46330645
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827220
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827224
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156174973
dataset_size: 632026846
- config_name: format_249
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967578
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967576
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729015
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728941
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728953
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40824052
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827198
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828135
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54869324
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 45946187
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827223
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827227
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 155860268
dataset_size: 631693493
- config_name: format_250
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967576
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966956
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557403
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459144
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459114
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41550475
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827232
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827773
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52621559
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48747792
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156673294
dataset_size: 638084009
- config_name: format_252
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967515
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967621
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557403
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459251
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459118
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459136
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41551698
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827838
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52553278
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49069083
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156671369
dataset_size: 638339014
- config_name: format_258
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585489
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585526
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585074
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557405
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459196
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459116
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459112
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41555587
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557123
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557956
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52607709
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48936305
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557404
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 157152504
dataset_size: 641577813
- config_name: format_260
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585488
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585495
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585195
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557404
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459366
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459116
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459114
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41549332
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557331
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557670
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52473012
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49288734
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557404
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 157111725
dataset_size: 641789472
- config_name: format_261
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585490
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585505
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41584866
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557405
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459280
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459117
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459113
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41547670
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557251
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41558126
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52491200
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48118468
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557404
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156635106
dataset_size: 640635706
- config_name: format_266
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585490
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585503
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41584332
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557406
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459235
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459114
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459136
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41547264
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41556916
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557941
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52440260
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49416673
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 157219169
dataset_size: 641881486
- config_name: format_267
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585490
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585507
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585218
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557403
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459142
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459114
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41548789
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557242
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41558151
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52632899
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48520000
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156907097
dataset_size: 641180306
- config_name: format_268
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585490
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585508
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41584666
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557405
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459197
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459116
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41554526
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557188
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557831
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52619753
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48786218
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557404
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557405
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 157014184
dataset_size: 641438248
- config_name: format_272
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585488
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585569
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585044
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557403
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459258
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459113
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459141
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41546367
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557197
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557528
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52527273
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49446458
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557403
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557410
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 157186175
dataset_size: 641998058
- config_name: format_276
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585490
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585506
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585287
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41557403
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459149
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459114
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41541041
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557331
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41557638
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52398677
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 46474320
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41557406
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557405
num_examples: 14042
download_size: 156317502
dataset_size: 638892308
- config_name: format_278
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41585488
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41585495
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41585479
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437043
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39269029
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268564
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268594
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39428727
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41557350
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41558432
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53749048
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48343404
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479181
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437055
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479181
num_examples: 14042
download_size: 156089087
dataset_size: 625032070
- config_name: format_280
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521254
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521270
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437043
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39269119
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268591
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268560
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39426148
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436958
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39437040
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53786048
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48655825
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479181
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479190
num_examples: 14042
download_size: 155218585
dataset_size: 614944596
- config_name: format_282
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521287
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521066
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39268982
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268577
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268560
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39429861
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436979
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39436996
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53733612
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49208119
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479176
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479188
num_examples: 14042
download_size: 154983797
dataset_size: 615447809
- config_name: format_286
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521294
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521220
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40111054
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39943292
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39942590
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39942576
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40102116
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436940
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39436973
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54012443
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48348338
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40153197
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40111071
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40153197
num_examples: 14042
download_size: 155541428
dataset_size: 620257617
- config_name: format_290
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40195322
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40195299
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40195210
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437004
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39268610
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268561
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268560
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39426913
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40110927
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40110403
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53702988
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49522004
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479181
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437013
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479181
num_examples: 14042
download_size: 155939176
dataset_size: 619097176
- config_name: format_294
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521214
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521228
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39269048
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268564
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268594
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39427545
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436994
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39436974
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53748822
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48227693
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479181
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437055
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479181
num_examples: 14042
download_size: 155372440
dataset_size: 614480446
- config_name: format_296
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521297
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521237
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437038
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39269143
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268591
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268560
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39424660
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436991
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39437020
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53752978
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48491101
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479181
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479190
num_examples: 14042
download_size: 155469369
dataset_size: 614745356
- config_name: format_298
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521316
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521303
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521306
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39436967
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39269018
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268495
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268489
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39421641
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436971
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39437067
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53746927
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48229488
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479102
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39436984
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479110
num_examples: 14042
download_size: 155210102
dataset_size: 614474184
- config_name: format_300
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521306
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521289
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521312
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39268955
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268580
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268594
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39428411
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39436959
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39437067
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53722102
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49081947
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479176
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479188
num_examples: 14042
download_size: 155173472
dataset_size: 615308976
- config_name: format_301
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 39521310
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 39521202
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 39521290
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 39437037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39268957
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39268580
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39268594
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 39428394
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 39437003
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 39436914
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53712395
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49074158
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 39479176
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 39437053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 39479188
num_examples: 14042
download_size: 155242652
dataset_size: 615291251
- config_name: format_31
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40223403
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40223365
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40223357
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40097046
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998938
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998748
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40096005
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40097033
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40097328
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52772637
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50530416
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40097045
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40097037
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40097038
num_examples: 14042
download_size: 156112528
dataset_size: 624648140
- config_name: format_32
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40223408
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40223396
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40222124
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40097039
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998820
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998756
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998746
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40093553
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40097028
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40097260
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52703808
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50189099
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40097045
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40097037
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40097038
num_examples: 14042
download_size: 155960937
dataset_size: 624234157
- config_name: format_35
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40223416
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40223394
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40222156
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40771052
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40673167
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40672765
num_examples: 14042
- name: google_flan_ul2
num_bytes: 39998744
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40765510
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40097039
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40097537
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53155607
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 51057720
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40771061
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40771053
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40771054
num_examples: 14042
download_size: 156823425
dataset_size: 630271275
- config_name: format_37
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40897424
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40897404
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40897399
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40097037
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 39998882
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 39998765
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40672760
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40095486
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40770891
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40771096
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52706665
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47225312
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40097038
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40097037
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40097038
num_examples: 14042
download_size: 155578440
dataset_size: 625320234
- config_name: format_41
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967616
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40964021
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827220
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729219
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728930
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40813632
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827203
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827908
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52835600
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50305314
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 157144831
dataset_size: 634004889
- config_name: format_42
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967608
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40965724
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41501233
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41403296
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41402947
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41481867
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827132
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827633
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53294500
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 51616859
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41501237
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41501237
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41501238
num_examples: 14042
download_size: 157902456
dataset_size: 640489073
- config_name: format_45
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967615
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967466
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827220
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728973
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728930
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40824047
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827159
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828122
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52854425
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49134195
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156683795
dataset_size: 632866378
- config_name: format_46
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967626
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40964843
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729118
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728932
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40824043
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827200
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827877
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52843273
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50742545
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 157184736
dataset_size: 634460910
- config_name: format_47
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967602
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40964244
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728976
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728928
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40821049
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827217
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828044
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52830096
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50034844
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827228
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827222
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156768791
dataset_size: 633736455
- config_name: format_48
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967626
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40965883
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827237
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728999
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728940
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40814951
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827127
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827501
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52797321
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49124578
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156423316
dataset_size: 632788388
- config_name: format_50
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967608
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40965053
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729197
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728929
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728942
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40823139
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827142
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828113
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52832630
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 50782086
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827222
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 157292666
dataset_size: 634489366
- config_name: format_51
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967626
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967554
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41501236
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41403334
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41402945
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728931
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41488202
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827120
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827676
num_examples: 14042
- name: google_gemma_7b
num_bytes: 53297124
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 51888375
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41501237
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41501237
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41501238
num_examples: 14042
download_size: 157881411
dataset_size: 640771477
- config_name: format_55
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967617
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966403
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728974
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728929
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40816280
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827181
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827724
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52886455
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49439471
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827229
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827222
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156608340
dataset_size: 633194490
- config_name: format_59
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967591
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40962196
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827220
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729126
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728932
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40821990
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828203
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52829191
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49200261
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156783723
dataset_size: 632900158
- config_name: format_63
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967634
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967575
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966970
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827237
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728979
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728943
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728929
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40811438
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827202
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40827480
num_examples: 14042
- name: google_gemma_7b
num_bytes: 52804595
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49155556
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827221
num_examples: 14042
download_size: 156798997
dataset_size: 632824202
- config_name: format_66
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 42090994
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 42090819
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 42086874
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729082
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728932
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40818787
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41950602
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41951673
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54020672
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47885447
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827220
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827225
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156732462
dataset_size: 638391704
- config_name: format_7
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967633
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967597
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40967127
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729059
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728928
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728941
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40826989
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827192
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40829187
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54249060
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48336490
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827229
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156652817
dataset_size: 633467097
- config_name: format_71
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967593
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966936
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40728976
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728928
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728931
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40822243
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827216
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828611
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54059975
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48042961
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827232
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 155891085
dataset_size: 632978915
- config_name: format_72
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967597
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966887
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729260
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728930
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728942
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40819094
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827234
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828358
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54073109
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49101220
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827228
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156305494
dataset_size: 634047171
- config_name: format_75
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967593
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966897
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729130
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728928
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40822921
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827246
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40828285
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54010703
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48818046
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827223
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156391967
dataset_size: 633705212
- config_name: format_76
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 40967642
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 40967603
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 40966778
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827227
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729131
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728928
num_examples: 14042
- name: google_flan_ul2
num_bytes: 40728928
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40822897
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 40827243
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 40829102
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54013742
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48806179
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827221
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827223
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 156366606
dataset_size: 633697066
- config_name: format_8
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41641650
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41641616
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41640764
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 40827221
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 40729128
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 40728932
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41402946
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 40826908
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41501154
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41502438
num_examples: 14042
- name: google_gemma_7b
num_bytes: 54221501
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 49374844
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 40827222
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 40827221
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 40827222
num_examples: 14042
download_size: 157372278
dataset_size: 638520767
- config_name: format_87
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711859
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41711216
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41571444
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459147
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459115
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41552744
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571417
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41572013
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55643989
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48156730
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41571449
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156751177
dataset_size: 644266937
- config_name: format_94
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711858
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41711456
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41571447
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459145
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459130
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459138
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41552371
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571419
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41571948
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55543358
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 48424108
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41571453
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156876768
dataset_size: 644433511
- config_name: format_95
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711783
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41710165
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41571444
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459157
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459113
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459134
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41560687
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571393
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41572124
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55572418
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47906478
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557406
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41571449
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156838847
dataset_size: 643952025
- config_name: format_96
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711805
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41710979
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41571447
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459116
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459113
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459137
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41566175
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571433
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41571736
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55609065
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47476186
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41571448
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156737430
dataset_size: 643564319
- config_name: format_97
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: example
dtype: int32
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
- name: input_formatted
dtype: string
- name: model_output
dtype: string
- name: correctness
dtype: int8
splits:
- name: meta_llama_llama_3_8b
num_bytes: 41711868
num_examples: 14042
- name: meta_llama_llama_3_8b_instruct
num_bytes: 41711860
num_examples: 14042
- name: meta_llama_llama_3_70b_instruct
num_bytes: 41711335
num_examples: 14042
- name: codellama_codellama_34b_instruct
num_bytes: 41571445
num_examples: 14042
- name: google_flan_t5_xl
num_bytes: 41459126
num_examples: 14042
- name: google_flan_t5_xxl
num_bytes: 41459114
num_examples: 14042
- name: google_flan_ul2
num_bytes: 41459135
num_examples: 14042
- name: ibm_mistralai_merlinite_7b
num_bytes: 41561220
num_examples: 14042
- name: mistralai_mixtral_8x7b_instruct_v01
num_bytes: 41571382
num_examples: 14042
- name: mistralai_mistral_7b_instruct_v0_2
num_bytes: 41571983
num_examples: 14042
- name: google_gemma_7b
num_bytes: 55595994
num_examples: 14042
- name: google_gemma_7b_it
num_bytes: 47270289
num_examples: 14042
- name: tiiuae_falcon_40b
num_bytes: 41557405
num_examples: 14042
- name: mistralai_mistral_7b_v0_1
num_bytes: 41571452
num_examples: 14042
- name: tiiuae_falcon_180b
num_bytes: 41557406
num_examples: 14042
download_size: 156606916
dataset_size: 643341014
configs:
- config_name: format_0
data_files:
- split: meta_llama_llama_3_8b
path: format_0/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_0/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_0/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_0/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_0/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_0/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_0/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_0/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_0/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_0/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_0/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_0/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_0/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_0/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_0/tiiuae_falcon_180b-*
- config_name: format_104
data_files:
- split: meta_llama_llama_3_8b
path: format_104/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_104/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_104/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_104/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_104/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_104/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_104/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_104/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_104/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_104/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_104/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_104/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_104/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_104/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_104/tiiuae_falcon_180b-*
- config_name: format_110
data_files:
- split: meta_llama_llama_3_8b
path: format_110/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_110/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_110/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_110/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_110/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_110/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_110/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_110/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_110/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_110/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_110/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_110/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_110/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_110/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_110/tiiuae_falcon_180b-*
- config_name: format_111
data_files:
- split: meta_llama_llama_3_8b
path: format_111/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_111/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_111/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_111/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_111/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_111/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_111/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_111/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_111/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_111/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_111/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_111/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_111/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_111/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_111/tiiuae_falcon_180b-*
- config_name: format_112
data_files:
- split: meta_llama_llama_3_8b
path: format_112/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_112/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_112/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_112/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_112/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_112/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_112/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_112/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_112/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_112/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_112/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_112/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_112/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_112/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_112/tiiuae_falcon_180b-*
- config_name: format_113
data_files:
- split: meta_llama_llama_3_8b
path: format_113/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_113/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_113/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_113/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_113/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_113/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_113/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_113/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_113/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_113/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_113/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_113/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_113/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_113/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_113/tiiuae_falcon_180b-*
- config_name: format_120
data_files:
- split: meta_llama_llama_3_8b
path: format_120/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_120/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_120/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_120/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_120/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_120/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_120/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_120/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_120/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_120/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_120/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_120/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_120/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_120/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_120/tiiuae_falcon_180b-*
- config_name: format_122
data_files:
- split: meta_llama_llama_3_8b
path: format_122/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_122/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_122/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_122/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_122/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_122/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_122/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_122/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_122/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_122/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_122/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_122/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_122/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_122/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_122/tiiuae_falcon_180b-*
- config_name: format_123
data_files:
- split: meta_llama_llama_3_8b
path: format_123/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_123/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_123/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_123/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_123/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_123/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_123/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_123/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_123/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_123/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_123/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_123/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_123/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_123/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_123/tiiuae_falcon_180b-*
- config_name: format_124
data_files:
- split: meta_llama_llama_3_8b
path: format_124/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_124/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_124/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_124/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_124/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_124/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_124/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_124/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_124/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_124/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_124/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_124/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_124/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_124/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_124/tiiuae_falcon_180b-*
- config_name: format_128
data_files:
- split: meta_llama_llama_3_8b
path: format_128/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_128/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_128/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_128/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_128/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_128/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_128/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_128/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_128/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_128/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_128/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_128/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_128/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_128/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_128/tiiuae_falcon_180b-*
- config_name: format_132
data_files:
- split: meta_llama_llama_3_8b
path: format_132/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_132/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_132/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_132/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_132/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_132/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_132/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_132/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_132/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_132/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_132/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_132/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_132/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_132/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_132/tiiuae_falcon_180b-*
- config_name: format_133
data_files:
- split: meta_llama_llama_3_8b
path: format_133/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_133/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_133/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_133/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_133/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_133/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_133/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_133/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_133/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_133/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_133/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_133/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_133/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_133/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_133/tiiuae_falcon_180b-*
- config_name: format_138
data_files:
- split: meta_llama_llama_3_8b
path: format_138/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_138/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_138/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_138/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_138/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_138/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_138/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_138/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_138/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_138/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_138/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_138/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_138/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_138/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_138/tiiuae_falcon_180b-*
- config_name: format_140
data_files:
- split: meta_llama_llama_3_8b
path: format_140/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_140/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_140/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_140/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_140/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_140/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_140/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_140/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_140/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_140/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_140/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_140/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_140/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_140/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_140/tiiuae_falcon_180b-*
- config_name: format_141
data_files:
- split: meta_llama_llama_3_8b
path: format_141/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_141/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_141/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_141/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_141/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_141/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_141/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_141/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_141/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_141/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_141/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_141/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_141/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_141/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_141/tiiuae_falcon_180b-*
- config_name: format_144
data_files:
- split: meta_llama_llama_3_8b
path: format_144/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_144/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_144/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_144/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_144/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_144/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_144/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_144/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_144/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_144/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_144/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_144/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_144/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_144/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_144/tiiuae_falcon_180b-*
- config_name: format_147
data_files:
- split: meta_llama_llama_3_8b
path: format_147/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_147/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_147/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_147/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_147/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_147/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_147/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_147/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_147/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_147/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_147/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_147/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_147/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_147/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_147/tiiuae_falcon_180b-*
- config_name: format_148
data_files:
- split: meta_llama_llama_3_8b
path: format_148/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_148/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_148/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_148/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_148/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_148/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_148/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_148/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_148/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_148/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_148/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_148/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_148/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_148/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_148/tiiuae_falcon_180b-*
- config_name: format_149
data_files:
- split: meta_llama_llama_3_8b
path: format_149/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_149/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_149/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_149/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_149/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_149/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_149/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_149/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_149/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_149/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_149/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_149/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_149/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_149/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_149/tiiuae_falcon_180b-*
- config_name: format_154
data_files:
- split: meta_llama_llama_3_8b
path: format_154/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_154/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_154/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_154/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_154/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_154/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_154/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_154/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_154/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_154/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_154/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_154/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_154/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_154/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_154/tiiuae_falcon_180b-*
- config_name: format_155
data_files:
- split: meta_llama_llama_3_8b
path: format_155/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_155/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_155/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_155/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_155/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_155/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_155/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_155/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_155/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_155/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_155/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_155/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_155/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_155/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_155/tiiuae_falcon_180b-*
- config_name: format_158
data_files:
- split: meta_llama_llama_3_8b
path: format_158/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_158/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_158/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_158/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_158/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_158/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_158/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_158/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_158/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_158/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_158/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_158/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_158/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_158/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_158/tiiuae_falcon_180b-*
- config_name: format_16
data_files:
- split: meta_llama_llama_3_8b
path: format_16/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_16/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_16/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_16/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_16/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_16/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_16/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_16/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_16/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_16/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_16/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_16/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_16/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_16/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_16/tiiuae_falcon_180b-*
- config_name: format_161
data_files:
- split: meta_llama_llama_3_8b
path: format_161/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_161/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_161/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_161/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_161/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_161/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_161/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_161/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_161/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_161/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_161/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_161/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_161/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_161/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_161/tiiuae_falcon_180b-*
- config_name: format_162
data_files:
- split: meta_llama_llama_3_8b
path: format_162/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_162/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_162/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_162/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_162/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_162/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_162/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_162/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_162/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_162/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_162/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_162/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_162/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_162/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_162/tiiuae_falcon_180b-*
- config_name: format_163
data_files:
- split: meta_llama_llama_3_8b
path: format_163/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_163/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_163/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_163/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_163/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_163/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_163/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_163/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_163/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_163/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_163/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_163/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_163/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_163/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_163/tiiuae_falcon_180b-*
- config_name: format_166
data_files:
- split: meta_llama_llama_3_8b
path: format_166/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_166/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_166/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_166/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_166/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_166/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_166/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_166/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_166/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_166/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_166/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_166/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_166/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_166/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_166/tiiuae_falcon_180b-*
- config_name: format_169
data_files:
- split: meta_llama_llama_3_8b
path: format_169/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_169/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_169/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_169/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_169/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_169/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_169/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_169/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_169/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_169/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_169/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_169/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_169/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_169/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_169/tiiuae_falcon_180b-*
- config_name: format_170
data_files:
- split: meta_llama_llama_3_8b
path: format_170/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_170/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_170/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_170/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_170/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_170/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_170/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_170/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_170/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_170/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_170/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_170/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_170/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_170/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_170/tiiuae_falcon_180b-*
- config_name: format_171
data_files:
- split: meta_llama_llama_3_8b
path: format_171/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_171/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_171/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_171/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_171/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_171/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_171/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_171/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_171/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_171/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_171/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_171/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_171/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_171/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_171/tiiuae_falcon_180b-*
- config_name: format_181
data_files:
- split: meta_llama_llama_3_8b
path: format_181/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_181/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_181/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_181/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_181/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_181/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_181/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_181/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_181/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_181/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_181/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_181/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_181/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_181/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_181/tiiuae_falcon_180b-*
- config_name: format_182
data_files:
- split: meta_llama_llama_3_8b
path: format_182/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_182/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_182/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_182/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_182/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_182/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_182/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_182/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_182/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_182/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_182/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_182/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_182/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_182/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_182/tiiuae_falcon_180b-*
- config_name: format_183
data_files:
- split: meta_llama_llama_3_8b
path: format_183/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_183/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_183/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_183/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_183/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_183/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_183/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_183/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_183/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_183/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_183/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_183/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_183/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_183/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_183/tiiuae_falcon_180b-*
- config_name: format_19
data_files:
- split: meta_llama_llama_3_8b
path: format_19/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_19/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_19/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_19/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_19/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_19/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_19/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_19/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_19/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_19/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_19/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_19/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_19/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_19/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_19/tiiuae_falcon_180b-*
- config_name: format_190
data_files:
- split: meta_llama_llama_3_8b
path: format_190/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_190/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_190/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_190/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_190/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_190/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_190/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_190/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_190/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_190/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_190/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_190/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_190/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_190/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_190/tiiuae_falcon_180b-*
- config_name: format_197
data_files:
- split: meta_llama_llama_3_8b
path: format_197/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_197/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_197/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_197/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_197/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_197/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_197/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_197/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_197/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_197/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_197/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_197/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_197/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_197/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_197/tiiuae_falcon_180b-*
- config_name: format_20
data_files:
- split: meta_llama_llama_3_8b
path: format_20/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_20/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_20/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_20/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_20/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_20/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_20/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_20/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_20/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_20/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_20/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_20/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_20/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_20/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_20/tiiuae_falcon_180b-*
- config_name: format_200
data_files:
- split: meta_llama_llama_3_8b
path: format_200/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_200/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_200/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_200/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_200/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_200/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_200/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_200/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_200/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_200/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_200/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_200/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_200/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_200/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_200/tiiuae_falcon_180b-*
- config_name: format_204
data_files:
- split: meta_llama_llama_3_8b
path: format_204/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_204/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_204/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_204/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_204/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_204/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_204/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_204/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_204/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_204/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_204/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_204/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_204/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_204/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_204/tiiuae_falcon_180b-*
- config_name: format_207
data_files:
- split: meta_llama_llama_3_8b
path: format_207/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_207/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_207/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_207/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_207/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_207/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_207/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_207/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_207/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_207/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_207/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_207/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_207/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_207/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_207/tiiuae_falcon_180b-*
- config_name: format_214
data_files:
- split: meta_llama_llama_3_8b
path: format_214/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_214/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_214/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_214/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_214/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_214/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_214/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_214/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_214/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_214/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_214/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_214/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_214/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_214/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_214/tiiuae_falcon_180b-*
- config_name: format_215
data_files:
- split: meta_llama_llama_3_8b
path: format_215/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_215/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_215/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_215/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_215/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_215/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_215/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_215/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_215/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_215/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_215/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_215/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_215/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_215/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_215/tiiuae_falcon_180b-*
- config_name: format_222
data_files:
- split: meta_llama_llama_3_8b
path: format_222/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_222/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_222/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_222/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_222/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_222/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_222/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_222/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_222/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_222/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_222/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_222/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_222/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_222/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_222/tiiuae_falcon_180b-*
- config_name: format_226
data_files:
- split: meta_llama_llama_3_8b
path: format_226/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_226/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_226/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_226/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_226/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_226/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_226/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_226/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_226/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_226/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_226/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_226/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_226/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_226/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_226/tiiuae_falcon_180b-*
- config_name: format_227
data_files:
- split: meta_llama_llama_3_8b
path: format_227/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_227/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_227/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_227/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_227/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_227/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_227/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_227/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_227/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_227/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_227/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_227/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_227/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_227/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_227/tiiuae_falcon_180b-*
- config_name: format_229
data_files:
- split: meta_llama_llama_3_8b
path: format_229/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_229/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_229/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_229/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_229/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_229/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_229/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_229/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_229/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_229/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_229/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_229/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_229/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_229/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_229/tiiuae_falcon_180b-*
- config_name: format_230
data_files:
- split: meta_llama_llama_3_8b
path: format_230/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_230/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_230/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_230/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_230/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_230/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_230/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_230/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_230/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_230/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_230/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_230/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_230/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_230/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_230/tiiuae_falcon_180b-*
- config_name: format_241
data_files:
- split: meta_llama_llama_3_8b
path: format_241/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_241/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_241/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_241/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_241/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_241/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_241/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_241/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_241/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_241/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_241/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_241/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_241/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_241/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_241/tiiuae_falcon_180b-*
- config_name: format_243
data_files:
- split: meta_llama_llama_3_8b
path: format_243/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_243/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_243/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_243/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_243/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_243/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_243/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_243/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_243/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_243/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_243/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_243/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_243/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_243/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_243/tiiuae_falcon_180b-*
- config_name: format_244
data_files:
- split: meta_llama_llama_3_8b
path: format_244/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_244/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_244/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_244/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_244/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_244/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_244/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_244/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_244/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_244/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_244/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_244/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_244/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_244/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_244/tiiuae_falcon_180b-*
- config_name: format_248
data_files:
- split: meta_llama_llama_3_8b
path: format_248/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_248/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_248/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_248/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_248/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_248/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_248/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_248/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_248/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_248/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_248/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_248/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_248/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_248/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_248/tiiuae_falcon_180b-*
- config_name: format_249
data_files:
- split: meta_llama_llama_3_8b
path: format_249/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_249/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_249/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_249/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_249/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_249/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_249/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_249/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_249/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_249/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_249/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_249/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_249/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_249/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_249/tiiuae_falcon_180b-*
- config_name: format_250
data_files:
- split: meta_llama_llama_3_8b
path: format_250/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_250/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_250/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_250/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_250/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_250/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_250/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_250/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_250/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_250/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_250/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_250/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_250/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_250/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_250/tiiuae_falcon_180b-*
- config_name: format_252
data_files:
- split: meta_llama_llama_3_8b
path: format_252/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_252/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_252/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_252/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_252/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_252/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_252/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_252/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_252/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_252/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_252/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_252/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_252/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_252/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_252/tiiuae_falcon_180b-*
- config_name: format_258
data_files:
- split: meta_llama_llama_3_8b
path: format_258/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_258/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_258/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_258/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_258/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_258/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_258/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_258/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_258/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_258/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_258/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_258/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_258/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_258/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_258/tiiuae_falcon_180b-*
- config_name: format_260
data_files:
- split: meta_llama_llama_3_8b
path: format_260/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_260/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_260/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_260/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_260/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_260/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_260/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_260/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_260/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_260/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_260/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_260/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_260/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_260/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_260/tiiuae_falcon_180b-*
- config_name: format_261
data_files:
- split: meta_llama_llama_3_8b
path: format_261/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_261/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_261/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_261/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_261/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_261/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_261/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_261/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_261/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_261/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_261/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_261/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_261/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_261/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_261/tiiuae_falcon_180b-*
- config_name: format_266
data_files:
- split: meta_llama_llama_3_8b
path: format_266/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_266/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_266/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_266/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_266/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_266/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_266/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_266/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_266/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_266/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_266/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_266/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_266/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_266/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_266/tiiuae_falcon_180b-*
- config_name: format_267
data_files:
- split: meta_llama_llama_3_8b
path: format_267/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_267/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_267/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_267/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_267/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_267/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_267/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_267/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_267/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_267/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_267/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_267/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_267/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_267/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_267/tiiuae_falcon_180b-*
- config_name: format_268
data_files:
- split: meta_llama_llama_3_8b
path: format_268/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_268/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_268/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_268/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_268/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_268/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_268/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_268/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_268/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_268/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_268/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_268/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_268/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_268/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_268/tiiuae_falcon_180b-*
- config_name: format_272
data_files:
- split: meta_llama_llama_3_8b
path: format_272/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_272/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_272/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_272/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_272/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_272/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_272/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_272/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_272/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_272/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_272/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_272/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_272/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_272/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_272/tiiuae_falcon_180b-*
- config_name: format_276
data_files:
- split: meta_llama_llama_3_8b
path: format_276/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_276/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_276/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_276/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_276/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_276/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_276/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_276/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_276/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_276/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_276/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_276/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_276/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_276/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_276/tiiuae_falcon_180b-*
- config_name: format_278
data_files:
- split: meta_llama_llama_3_8b
path: format_278/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_278/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_278/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_278/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_278/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_278/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_278/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_278/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_278/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_278/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_278/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_278/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_278/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_278/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_278/tiiuae_falcon_180b-*
- config_name: format_280
data_files:
- split: meta_llama_llama_3_8b
path: format_280/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_280/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_280/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_280/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_280/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_280/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_280/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_280/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_280/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_280/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_280/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_280/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_280/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_280/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_280/tiiuae_falcon_180b-*
- config_name: format_282
data_files:
- split: meta_llama_llama_3_8b
path: format_282/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_282/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_282/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_282/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_282/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_282/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_282/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_282/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_282/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_282/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_282/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_282/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_282/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_282/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_282/tiiuae_falcon_180b-*
- config_name: format_286
data_files:
- split: meta_llama_llama_3_8b
path: format_286/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_286/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_286/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_286/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_286/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_286/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_286/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_286/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_286/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_286/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_286/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_286/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_286/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_286/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_286/tiiuae_falcon_180b-*
- config_name: format_290
data_files:
- split: meta_llama_llama_3_8b
path: format_290/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_290/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_290/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_290/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_290/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_290/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_290/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_290/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_290/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_290/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_290/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_290/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_290/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_290/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_290/tiiuae_falcon_180b-*
- config_name: format_294
data_files:
- split: meta_llama_llama_3_8b
path: format_294/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_294/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_294/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_294/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_294/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_294/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_294/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_294/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_294/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_294/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_294/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_294/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_294/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_294/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_294/tiiuae_falcon_180b-*
- config_name: format_296
data_files:
- split: meta_llama_llama_3_8b
path: format_296/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_296/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_296/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_296/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_296/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_296/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_296/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_296/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_296/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_296/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_296/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_296/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_296/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_296/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_296/tiiuae_falcon_180b-*
- config_name: format_298
data_files:
- split: meta_llama_llama_3_8b
path: format_298/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_298/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_298/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_298/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_298/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_298/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_298/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_298/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_298/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_298/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_298/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_298/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_298/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_298/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_298/tiiuae_falcon_180b-*
- config_name: format_300
data_files:
- split: meta_llama_llama_3_8b
path: format_300/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_300/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_300/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_300/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_300/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_300/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_300/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_300/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_300/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_300/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_300/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_300/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_300/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_300/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_300/tiiuae_falcon_180b-*
- config_name: format_301
data_files:
- split: meta_llama_llama_3_8b
path: format_301/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_301/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_301/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_301/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_301/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_301/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_301/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_301/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_301/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_301/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_301/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_301/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_301/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_301/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_301/tiiuae_falcon_180b-*
- config_name: format_31
data_files:
- split: meta_llama_llama_3_8b
path: format_31/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_31/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_31/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_31/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_31/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_31/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_31/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_31/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_31/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_31/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_31/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_31/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_31/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_31/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_31/tiiuae_falcon_180b-*
- config_name: format_32
data_files:
- split: meta_llama_llama_3_8b
path: format_32/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_32/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_32/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_32/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_32/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_32/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_32/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_32/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_32/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_32/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_32/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_32/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_32/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_32/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_32/tiiuae_falcon_180b-*
- config_name: format_35
data_files:
- split: meta_llama_llama_3_8b
path: format_35/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_35/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_35/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_35/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_35/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_35/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_35/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_35/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_35/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_35/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_35/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_35/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_35/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_35/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_35/tiiuae_falcon_180b-*
- config_name: format_37
data_files:
- split: meta_llama_llama_3_8b
path: format_37/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_37/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_37/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_37/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_37/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_37/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_37/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_37/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_37/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_37/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_37/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_37/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_37/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_37/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_37/tiiuae_falcon_180b-*
- config_name: format_41
data_files:
- split: meta_llama_llama_3_8b
path: format_41/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_41/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_41/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_41/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_41/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_41/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_41/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_41/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_41/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_41/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_41/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_41/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_41/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_41/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_41/tiiuae_falcon_180b-*
- config_name: format_42
data_files:
- split: meta_llama_llama_3_8b
path: format_42/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_42/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_42/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_42/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_42/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_42/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_42/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_42/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_42/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_42/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_42/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_42/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_42/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_42/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_42/tiiuae_falcon_180b-*
- config_name: format_45
data_files:
- split: meta_llama_llama_3_8b
path: format_45/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_45/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_45/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_45/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_45/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_45/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_45/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_45/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_45/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_45/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_45/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_45/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_45/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_45/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_45/tiiuae_falcon_180b-*
- config_name: format_46
data_files:
- split: meta_llama_llama_3_8b
path: format_46/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_46/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_46/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_46/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_46/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_46/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_46/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_46/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_46/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_46/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_46/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_46/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_46/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_46/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_46/tiiuae_falcon_180b-*
- config_name: format_47
data_files:
- split: meta_llama_llama_3_8b
path: format_47/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_47/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_47/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_47/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_47/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_47/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_47/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_47/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_47/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_47/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_47/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_47/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_47/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_47/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_47/tiiuae_falcon_180b-*
- config_name: format_48
data_files:
- split: meta_llama_llama_3_8b
path: format_48/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_48/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_48/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_48/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_48/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_48/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_48/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_48/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_48/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_48/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_48/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_48/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_48/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_48/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_48/tiiuae_falcon_180b-*
- config_name: format_50
data_files:
- split: meta_llama_llama_3_8b
path: format_50/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_50/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_50/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_50/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_50/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_50/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_50/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_50/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_50/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_50/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_50/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_50/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_50/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_50/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_50/tiiuae_falcon_180b-*
- config_name: format_51
data_files:
- split: meta_llama_llama_3_8b
path: format_51/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_51/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_51/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_51/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_51/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_51/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_51/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_51/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_51/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_51/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_51/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_51/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_51/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_51/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_51/tiiuae_falcon_180b-*
- config_name: format_55
data_files:
- split: meta_llama_llama_3_8b
path: format_55/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_55/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_55/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_55/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_55/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_55/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_55/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_55/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_55/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_55/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_55/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_55/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_55/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_55/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_55/tiiuae_falcon_180b-*
- config_name: format_59
data_files:
- split: meta_llama_llama_3_8b
path: format_59/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_59/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_59/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_59/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_59/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_59/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_59/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_59/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_59/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_59/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_59/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_59/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_59/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_59/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_59/tiiuae_falcon_180b-*
- config_name: format_63
data_files:
- split: meta_llama_llama_3_8b
path: format_63/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_63/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_63/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_63/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_63/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_63/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_63/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_63/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_63/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_63/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_63/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_63/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_63/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_63/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_63/tiiuae_falcon_180b-*
- config_name: format_66
data_files:
- split: meta_llama_llama_3_8b
path: format_66/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_66/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_66/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_66/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_66/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_66/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_66/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_66/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_66/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_66/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_66/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_66/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_66/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_66/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_66/tiiuae_falcon_180b-*
- config_name: format_7
data_files:
- split: meta_llama_llama_3_8b
path: format_7/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_7/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_7/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_7/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_7/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_7/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_7/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_7/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_7/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_7/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_7/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_7/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_7/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_7/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_7/tiiuae_falcon_180b-*
- config_name: format_71
data_files:
- split: meta_llama_llama_3_8b
path: format_71/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_71/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_71/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_71/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_71/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_71/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_71/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_71/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_71/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_71/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_71/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_71/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_71/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_71/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_71/tiiuae_falcon_180b-*
- config_name: format_72
data_files:
- split: meta_llama_llama_3_8b
path: format_72/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_72/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_72/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_72/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_72/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_72/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_72/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_72/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_72/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_72/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_72/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_72/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_72/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_72/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_72/tiiuae_falcon_180b-*
- config_name: format_75
data_files:
- split: meta_llama_llama_3_8b
path: format_75/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_75/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_75/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_75/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_75/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_75/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_75/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_75/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_75/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_75/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_75/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_75/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_75/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_75/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_75/tiiuae_falcon_180b-*
- config_name: format_76
data_files:
- split: meta_llama_llama_3_8b
path: format_76/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_76/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_76/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_76/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_76/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_76/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_76/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_76/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_76/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_76/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_76/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_76/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_76/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_76/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_76/tiiuae_falcon_180b-*
- config_name: format_8
data_files:
- split: meta_llama_llama_3_8b
path: format_8/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_8/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_8/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_8/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_8/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_8/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_8/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_8/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_8/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_8/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_8/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_8/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_8/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_8/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_8/tiiuae_falcon_180b-*
- config_name: format_87
data_files:
- split: meta_llama_llama_3_8b
path: format_87/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_87/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_87/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_87/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_87/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_87/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_87/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_87/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_87/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_87/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_87/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_87/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_87/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_87/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_87/tiiuae_falcon_180b-*
- config_name: format_94
data_files:
- split: meta_llama_llama_3_8b
path: format_94/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_94/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_94/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_94/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_94/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_94/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_94/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_94/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_94/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_94/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_94/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_94/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_94/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_94/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_94/tiiuae_falcon_180b-*
- config_name: format_95
data_files:
- split: meta_llama_llama_3_8b
path: format_95/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_95/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_95/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_95/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_95/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_95/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_95/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_95/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_95/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_95/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_95/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_95/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_95/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_95/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_95/tiiuae_falcon_180b-*
- config_name: format_96
data_files:
- split: meta_llama_llama_3_8b
path: format_96/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_96/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_96/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_96/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_96/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_96/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_96/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_96/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_96/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_96/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_96/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_96/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_96/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_96/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_96/tiiuae_falcon_180b-*
- config_name: format_97
data_files:
- split: meta_llama_llama_3_8b
path: format_97/meta_llama_llama_3_8b-*
- split: meta_llama_llama_3_8b_instruct
path: format_97/meta_llama_llama_3_8b_instruct-*
- split: meta_llama_llama_3_70b_instruct
path: format_97/meta_llama_llama_3_70b_instruct-*
- split: codellama_codellama_34b_instruct
path: format_97/codellama_codellama_34b_instruct-*
- split: google_flan_t5_xl
path: format_97/google_flan_t5_xl-*
- split: google_flan_t5_xxl
path: format_97/google_flan_t5_xxl-*
- split: google_flan_ul2
path: format_97/google_flan_ul2-*
- split: ibm_mistralai_merlinite_7b
path: format_97/ibm_mistralai_merlinite_7b-*
- split: mistralai_mixtral_8x7b_instruct_v01
path: format_97/mistralai_mixtral_8x7b_instruct_v01-*
- split: mistralai_mistral_7b_instruct_v0_2
path: format_97/mistralai_mistral_7b_instruct_v0_2-*
- split: google_gemma_7b
path: format_97/google_gemma_7b-*
- split: google_gemma_7b_it
path: format_97/google_gemma_7b_it-*
- split: tiiuae_falcon_40b
path: format_97/tiiuae_falcon_40b-*
- split: mistralai_mistral_7b_v0_1
path: format_97/mistralai_mistral_7b_v0_1-*
- split: tiiuae_falcon_180b
path: format_97/tiiuae_falcon_180b-*
---
# MMLU Multi-Prompt Evaluation Data
## Overview
This dataset contains the results of a comprehensive evaluation of various Large Language Models (LLMs) using multiple prompt templates on the Massive Multitask Language Understanding (MMLU) benchmark. The data is introduced in
[Maia Polo, Felipe, Ronald Xu, Lucas Weber, Mírian Silva, Onkar Bhardwaj, Leshem Choshen, Allysson Flavio Melo de Oliveira, Yuekai Sun, and Mikhail Yurochkin. "Efficient multi-prompt evaluation of LLMs." arXiv preprint arXiv:2405.17202 (2024).](https://arxiv.org/abs/2405.17202)
## Dataset Details
The [MMLU](https://huggingface.co/datasets/cais/mmlu) benchmark comprises 57 diverse subjects and approximately 14,000 examples. It is a multiple-choice question-answering benchmark that tests the performance of LLMs across a wide range of topics. The data includes evaluations of 15 different state-of-the-art LLMs under 100 different prompt templates.
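The dataset exposes one config per prompt template and one split per model. As a sketch (config and split names taken from this card's own configuration list above), the full set of config names can be generated programmatically:

```python
# One config per prompt template: format_0 through format_99.
configs = [f"format_{j}" for j in range(100)]

# Each config contains one split per evaluated model; a few examples
# from the card's config list:
example_splits = [
    "meta_llama_llama_3_8b",
    "mistralai_mistral_7b_instruct_v0_2",
    "tiiuae_falcon_180b",
]

print(configs[0], configs[-1])   # format_0 format_99
print(len(configs))              # 100
```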
The data for a specific prompt template (format) can be downloaded using:
```python
from datasets import load_dataset

j = 0  # index of the prompt template (format), from 0 to 99
data = load_dataset('PromptEval/tinyMMLU', f'format_{j}')
```
If you are only interested in the correctness scores, please check this lighter version of this dataset [here](https://huggingface.co/datasets/PromptEval/PromptEval_MMLU_correctness).
## Citing
```bibtex
@article{polo2024efficient,
  title={Efficient multi-prompt evaluation of LLMs},
  author={Polo, Felipe Maia and Xu, Ronald and Weber, Lucas and Silva, M{\'\i}rian and Bhardwaj, Onkar and Choshen, Leshem and de Oliveira, Allysson Flavio Melo and Sun, Yuekai and Yurochkin, Mikhail},
  journal={arXiv preprint arXiv:2405.17202},
  year={2024}
}

@article{hendryckstest2021,
  title={Measuring Massive Multitask Language Understanding},
  author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
  journal={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2021}
}
```