datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---
docintel/ChartQA | docintel | "2025-02-25T21:11:28Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T21:11:24Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: type
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: image
dtype: image
splits:
- name: test
num_bytes: 115872506.0
num_examples: 2500
download_size: 72614164
dataset_size: 115872506.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
tturing/so100_03_robotp | tturing | "2025-02-25T21:16:50Z" | 3 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"biJCR"
] | [
"robotics"
] | "2025-02-25T21:16:48Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- biJCR
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 895,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.rscam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
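The `data_path` and `video_path` entries above are Python format-string templates. As a small illustrative sketch (using only the template values from this `info.json`), this is how the files for a given episode resolve:

```python
# Resolve file locations for episode 0 of chunk 0 using the
# format-string templates from meta/info.json above.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

print(data_path.format(episode_chunk=0, episode_index=0))
# -> data/chunk-000/episode_000000.parquet
print(video_path.format(episode_chunk=0, video_key="observation.images.rscam", episode_index=0))
# -> videos/chunk-000/observation.images.rscam/episode_000000.mp4
```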
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
kaamd/tdngl-pfj | kaamd | "2025-02-25T21:18:17Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T21:18:16Z" | ---
dataset_info:
features:
- name: url
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 816990
num_examples: 218
download_size: 300464
dataset_size: 816990
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/instruction_filtering_scale_up_math_base_fasttext_per_domain | mlfoundations-dev | "2025-02-25T21:47:31Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T21:27:48Z" | ---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 3836385.712082465
num_examples: 16000
download_size: 2084942
dataset_size: 3836385.712082465
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/instruction_filtering_scale_up_math_base_gemini_length | mlfoundations-dev | "2025-02-26T04:39:09Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T21:27:48Z" | ---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: source
dtype: string
- name: gemini_response
dtype: string
- name: __original_row_idx
dtype: int64
- name: length
dtype: int64
splits:
- name: train
num_bytes: 51624263.2
num_examples: 16000
download_size: 24998340
dataset_size: 51624263.2
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nicher92/acsl_c_code_pairs_filtered_v1 | nicher92 | "2025-02-25T21:46:21Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T21:46:19Z" | ---
dataset_info:
features:
- name: success
dtype: bool
- name: failures
sequence: 'null'
- name: output_from_frama_c
dtype: string
- name: error_messages
sequence: string
- name: was_it_fixed
dtype: bool
- name: acsl_snippet
dtype: string
- name: c_code_snippet
dtype: string
- name: extracted_error
dtype: string
- name: total_goals
dtype: int64
- name: verified_goals
dtype: int64
- name: rest_of_file
dtype: string
splits:
- name: train
num_bytes: 15355030.166362368
num_examples: 589
download_size: 802276
dataset_size: 15355030.166362368
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Mohamed-DLM/asr_en_ar_switch_split_93_final_updated | Mohamed-DLM | "2025-02-25T22:05:13Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T21:46:38Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 6053199.0
num_examples: 55
download_size: 5404911
dataset_size: 6053199.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HumanoidTeam/aloha_cube_binary_old_format_v1_test | HumanoidTeam | "2025-02-25T21:54:47Z" | 3 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | "2025-02-25T21:54:37Z" | ---
task_categories:
- robotics
tags:
- LeRobot
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
tturing/so100_03_tools0 | tturing | "2025-02-25T21:57:59Z" | 3 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"biJCR"
] | [
"robotics"
] | "2025-02-25T21:57:57Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- biJCR
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 895,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.rscam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Mohamed-DLM/asr_en_ar_switch_split_94_final_updated | Mohamed-DLM | "2025-02-25T22:20:17Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T22:08:01Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 4824660.0
num_examples: 53
download_size: 4348383
dataset_size: 4824660.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1un9i13/country | 1un9i13 | "2025-02-25T22:15:53Z" | 3 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T22:12:58Z" | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: country
dtype: string
- name: capital
dtype: string
- name: continent
dtype: string
splits:
- name: train
num_bytes: 6827
num_examples: 194
download_size: 6066
dataset_size: 6827
---
|
cchoi1/humaneval-datagen-run-3_best_att_50_sol_50_20250225_085059 | cchoi1 | "2025-02-25T22:14:52Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T22:14:51Z" | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_attack
dtype: string
- name: chosen_attack_explanation
dtype: string
- name: chosen_solution
dtype: string
- name: chosen_solution_explanation
dtype: string
- name: chosen_solve_rate
dtype: float64
- name: rejected_attack
dtype: string
- name: rejected_attack_explanation
dtype: string
- name: rejected_solution
dtype: string
- name: rejected_solution_explanation
dtype: string
- name: rejected_solve_rate
dtype: float64
splits:
- name: train
num_bytes: 5255818
num_examples: 2100
download_size: 287701
dataset_size: 5255818
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tturing/so100_03_books | tturing | "2025-02-25T22:22:48Z" | 3 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"biJCR"
] | [
"robotics"
] | "2025-02-25T22:22:46Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- biJCR
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 895,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.rscam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
nellaep/NellFam | nellaep | "2025-02-25T22:25:27Z" | 3 | 0 | [
"license:llama3.3",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T22:24:44Z" | ---
license: llama3.3
---
|
mjpsm/meba_fam_info | mjpsm | "2025-02-25T22:25:41Z" | 3 | 0 | [
"license:llama3.2",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T22:25:05Z" | ---
license: llama3.2
---
|
HumanoidTeam/aloha_cube_binary_old_format_v1_test_2 | HumanoidTeam | "2025-02-25T22:37:17Z" | 3 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot"
] | [
"robotics"
] | "2025-02-25T22:37:10Z" | ---
task_categories:
- robotics
tags:
- LeRobot
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
datalab-to/marker_benchmark_comparison_olmocr_llm | datalab-to | "2025-02-26T00:07:48Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T23:40:28Z" | ---
dataset_info:
features:
- name: uuid
dtype: int64
- name: classification
dtype: string
- name: language
dtype: string
- name: img
dtype: image
- name: marker_md
dtype: string
- name: marker_img
dtype: image
- name: marker_heuristic
dtype: float64
- name: marker_heuristic_detail
dtype: string
- name: marker_llm
dtype: int64
- name: marker_llm_detail
dtype: string
- name: olmocr_md
dtype: string
- name: olmocr_img
dtype: image
- name: olmocr_heuristic
dtype: float64
- name: olmocr_heuristic_detail
dtype: string
- name: olmocr_llm
dtype: float64
- name: olmocr_llm_detail
dtype: string
splits:
- name: train
num_bytes: 16136900.0
num_examples: 25
download_size: 16012100
dataset_size: 16136900.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sghosts/ocr-thesis-surya | sghosts | "2025-02-25T23:44:04Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T23:43:57Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: pdf_path
dtype: string
- name: page_num
dtype: int64
- name: surya
dtype: string
splits:
- name: train
num_bytes: 29819014.0
num_examples: 236
download_size: 29620799
dataset_size: 29819014.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hanR07/safe_packets | hanR07 | "2025-02-26T06:06:16Z" | 3 | 0 | [
"language:ko",
"license:unknown",
"region:us"
] | null | "2025-02-26T00:18:46Z" | ---
license: unknown
language:
- ko
--- |
obiwan96/obiwan96open_web_math_qav3_none_120000_140000 | obiwan96 | "2025-02-26T00:34:18Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T00:21:22Z" | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: is_solution_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
- name: is_backtrack
dtype: string
- name: backtrack_count
dtype: string
- name: backtrack_rationale
dtype: string
- name: is_backchain
dtype: string
- name: backchain_count
dtype: string
- name: backchain_rationale
dtype: string
- name: is_verification
dtype: string
- name: verification_count
dtype: string
- name: verification_rationale
dtype: string
- name: contain_problem
dtype: string
- name: contain_solution
dtype: string
- name: domain_broad
dtype: string
- name: domain_specific
dtype: string
- name: solution_rationale
dtype: string
- name: raw_qa
dtype: string
- name: query
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 122814908
num_examples: 7463
download_size: 53808656
dataset_size: 122814908
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CohenQu/math_reasoning_benchmark | CohenQu | "2025-02-26T00:55:32Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T00:41:22Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
splits:
- name: AMC2023
num_bytes: 11158
num_examples: 40
- name: MinervaMATH
num_bytes: 120182
num_examples: 272
- name: MATH500
num_bytes: 104912
num_examples: 500
- name: AIME2024
num_bytes: 10081
num_examples: 30
- name: AIME2025
num_bytes: 14629
num_examples: 30
download_size: 288780
dataset_size: 260962
configs:
- config_name: default
data_files:
- split: MinervaMATH
path: data/MinervaMATH-*
- split: MATH500
path: data/MATH500-*
- split: AIME2024
path: data/AIME2024-*
- split: AIME2025
path: data/AIME2025-*
- split: AMC2023
path: data/AMC2023-*
---
|
Asap7772/Asap7772open_web_math_backtrack_40k__10000_20000 | Asap7772 | "2025-02-26T02:56:36Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T01:25:14Z" | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: is_solution_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
- name: is_backtrack
dtype: string
- name: backtrack_count
dtype: string
- name: backtrack_rationale
dtype: string
- name: is_backchain
dtype: string
- name: backchain_count
dtype: string
- name: backchain_rationale
dtype: string
- name: is_verification
dtype: string
- name: verification_count
dtype: string
- name: verification_rationale
dtype: string
- name: contain_problem
dtype: string
- name: contain_solution
dtype: string
- name: domain_broad
dtype: string
- name: domain_specific
dtype: string
- name: solution_rationale
dtype: string
- name: raw_qa
dtype: string
- name: query
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 180026824
num_examples: 9299
download_size: 67270673
dataset_size: 180026824
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/Asap7772open_web_math_backtrack_40k__30000_40000 | Asap7772 | "2025-02-26T03:03:04Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T01:26:40Z" | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: is_solution_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
- name: is_backtrack
dtype: string
- name: backtrack_count
dtype: string
- name: backtrack_rationale
dtype: string
- name: is_backchain
dtype: string
- name: backchain_count
dtype: string
- name: backchain_rationale
dtype: string
- name: is_verification
dtype: string
- name: verification_count
dtype: string
- name: verification_rationale
dtype: string
- name: contain_problem
dtype: string
- name: contain_solution
dtype: string
- name: domain_broad
dtype: string
- name: domain_specific
dtype: string
- name: solution_rationale
dtype: string
- name: raw_qa
dtype: string
- name: query
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 181501339
num_examples: 9279
download_size: 67138457
dataset_size: 181501339
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/Asap7772open_web_math_backtrack_40k__20000_30000 | Asap7772 | "2025-02-26T02:58:21Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T01:26:54Z" | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: is_solution_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
- name: is_backtrack
dtype: string
- name: backtrack_count
dtype: string
- name: backtrack_rationale
dtype: string
- name: is_backchain
dtype: string
- name: backchain_count
dtype: string
- name: backchain_rationale
dtype: string
- name: is_verification
dtype: string
- name: verification_count
dtype: string
- name: verification_rationale
dtype: string
- name: contain_problem
dtype: string
- name: contain_solution
dtype: string
- name: domain_broad
dtype: string
- name: domain_specific
dtype: string
- name: solution_rationale
dtype: string
- name: raw_qa
dtype: string
- name: query
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 179211784
num_examples: 9296
download_size: 66465496
dataset_size: 179211784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/Asap7772open_web_math_backtrack_40k__0_10000 | Asap7772 | "2025-02-26T02:58:06Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T01:27:34Z" | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: is_solution_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
- name: is_backtrack
dtype: string
- name: backtrack_count
dtype: string
- name: backtrack_rationale
dtype: string
- name: is_backchain
dtype: string
- name: backchain_count
dtype: string
- name: backchain_rationale
dtype: string
- name: is_verification
dtype: string
- name: verification_count
dtype: string
- name: verification_rationale
dtype: string
- name: contain_problem
dtype: string
- name: contain_solution
dtype: string
- name: domain_broad
dtype: string
- name: domain_specific
dtype: string
- name: solution_rationale
dtype: string
- name: raw_qa
dtype: string
- name: query
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 178173958
num_examples: 9299
download_size: 66110635
dataset_size: 178173958
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mic7ch/manchu_sub2 | mic7ch | "2025-02-26T02:11:07Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T02:10:41Z" | ---
dataset_info:
features:
- name: im
dtype: image
- name: roman
dtype: string
- name: manchu
dtype: string
splits:
- name: train
num_bytes: 204002861.6
num_examples: 60000
- name: validation
num_bytes: 51000715.4
num_examples: 15000
download_size: 256577300
dataset_size: 255003577.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
jeanwei0721/finetuning_demo | jeanwei0721 | "2025-02-26T02:46:38Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T02:46:37Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 37394
num_examples: 100
download_size: 6542
dataset_size: 37394
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gswamy/pythia-1.4B-tldr-dpo_tldr_pythia_1.4b_tword_badbase_1_rm_sft_tldr_pythia_1.4b_tword_1_r_iter_1 | gswamy | "2025-02-26T03:12:01Z" | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T03:11:56Z" | ---
dataset_info:
features:
- name: iter_1_best_query_response
sequence: int64
- name: iter_1_worst_query_response
sequence: int64
- name: iter_1_best_mask
sequence: int64
- name: iter_1_worst_mask
sequence: int64
- name: iter_1_best_reward
dtype: float64
- name: iter_1_worst_reward
dtype: float64
splits:
- name: train
num_bytes: 1545157120
num_examples: 92858
download_size: 32595389
dataset_size: 1545157120
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ahmetipekci10/Delphi7 | ahmetipekci10 | "2025-02-26T03:31:50Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T03:30:55Z" | ---
dataset_info:
features:
- name: conversations
dtype: string
splits:
- name: train
num_bytes: 1535301.7980636237
num_examples: 650
- name: test
num_bytes: 172426.20193637622
num_examples: 73
download_size: 691993
dataset_size: 1707728.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
tttx/3k-unsolved-priority-022525-step1-collated | tttx | "2025-02-26T04:41:33Z" | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T04:35:01Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 8343757.456140351
num_examples: 400
- name: test
num_bytes: 21049
num_examples: 1
download_size: 2273413
dataset_size: 8364806.456140351
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
group2sealion/11mil_clean | group2sealion | "2025-02-26T05:13:09Z" | 3 | 0 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T04:59:47Z" | ---
license: apache-2.0
---
|
cchoi1/humaneval-datagen-run-1_best_att_50_sol_50_20250225_153517 | cchoi1 | "2025-02-26T05:01:19Z" | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T05:01:17Z" | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_attack
dtype: string
- name: chosen_attack_explanation
dtype: string
- name: chosen_solution
dtype: string
- name: chosen_solution_explanation
dtype: string
- name: chosen_solve_rate
dtype: float64
- name: rejected_attack
dtype: string
- name: rejected_attack_explanation
dtype: string
- name: rejected_solution
dtype: string
- name: rejected_solution_explanation
dtype: string
- name: rejected_solve_rate
dtype: float64
splits:
- name: train
num_bytes: 5337625
num_examples: 2157
download_size: 292082
dataset_size: 5337625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/open_r1_hf_get_all_proofs | mlfoundations-dev | "2025-02-26T06:09:35Z" | 3 | 1 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-26T05:24:44Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: problem_is_valid
dtype: string
- name: solution_is_valid
dtype: string
- name: source
dtype: string
- name: synthetic
dtype: bool
- name: generations
sequence: string
- name: generations_count
dtype: int64
- name: correctness_math_verify
sequence: bool
- name: correct_count
dtype: int64
- name: generation
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 13999569108
num_examples: 140670
download_size: 5154483543
dataset_size: 13999569108
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
datatab/SerbianOscarDataset | datatab | "2023-06-04T14:34:49Z" | 2 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-06-04T14:21:08Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 374855299.3164062
num_examples: 3037283
- name: test
num_bytes: 46856989.550781436
num_examples: 379661
- name: valid
num_bytes: 46856866.13281237
num_examples: 379660
download_size: 328089963
dataset_size: 468569155.0
---
# Dataset Card for "SerbianOscarDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
datalama/koqp | datalama | "2023-06-21T06:22:28Z" | 2 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-06-21T06:14:53Z" | ---
license: mit
dataset_info:
features:
- name: id
dtype: int64
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
'0': 다른 질문
'1': 같은 질문
splits:
- name: train
num_bytes: 634021
num_examples: 6888
- name: test
num_bytes: 62628
num_examples: 688
download_size: 403049
dataset_size: 696649
---
## Dataset Description
This dataset was uploaded after minor modifications to the Question_pair dataset released as open source by songys.
See the repository below for the original dataset and a detailed description.
- **Repository: https://github.com/songys/Question_pair**
**Modifications**
- Renamed the `is_duplicate` field to `label`.
- Renamed `test_id` in the test set to `id`.
- Inverted the original 0/1 labels ("같은 질문" = "same question", "다른 질문" = "different question"):
  - as-is: {"같은 질문": 0, "다른 질문": 1}
  - to-be: {"같은 질문": 1, "다른 질문": 0}
- The final fields saved are 'id', 'question1', 'question2', and 'label'.
## Dataset Structure
```
DatasetDict({
train: Dataset({
features: ['id', 'question1', 'question2', 'label'],
num_rows: 6888
})
test: Dataset({
features: ['id', 'question1', 'question2', 'label'],
num_rows: 688
})
})
```
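A minimal loading sketch with the `datasets` library, assuming the repo id `datalama/koqp` from this listing:

```python
from datasets import load_dataset

# 'label' is a ClassLabel whose names are ['다른 질문', '같은 질문']
# ("different question", "same question"), per the schema above.
ds = load_dataset("datalama/koqp")
print(ds["train"][0])  # {'id': ..., 'question1': ..., 'question2': ..., 'label': 0 or 1}
print(ds["train"].features["label"].names)
```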
|
tamdiep106/autotrain-data-tam_jp | tamdiep106 | "2023-06-23T10:46:11Z" | 2 | 0 | [
"language:ja",
"region:us"
] | null | "2023-06-23T09:01:33Z" | ---
language:
- ja
---
# AutoTrain Dataset for project: tam_jp
## Dataset Description
This dataset has been automatically processed by AutoTrain for project tam_jp.
### Languages
The BCP-47 code for the dataset's language is ja.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "\u30dd\u30fc\u306f\u30b8\u30e3\u30fc\u30ca\u30ea\u30ba\u30e0\u306e\u6d3b\u767a\u306a\u30dc\u30eb\u30c6\u30a3\u30e2\u30a2\u3092\u751f\u6d3b\u306e\u5834\u306b\u5b9a\u3081\u3001\u30af\u30ec\u30e0\u53d4\u6bcd\u306e\u5bb6\u306b\u5c45\u5019\u3092\u3057\u306a\u304c\u3089(\u5b9f\u5144\u306e\u30a6\u30a3\u30ea\u30a2\u30e0=\u30d8\u30f3\u30ea\u30fc\u306f\u7d50\u6838\u30671831\u5e748\u6708\u306b\u6b7b\u53bb\u3057\u3066\u3044\u305f)\u77ed\u7de8\u5c0f\u8aac\u306e\u57f7\u7b46\u3092\u59cb\u3081\u305f\u30021832\u5e74\u306e1\u6708\u3001\u300e\u30b5\u30bf\u30c7\u30fc\u30fb\u30af\u30aa\u30ea\u30a2\u300f\u8a8c\u306b\u300c\u30e1\u30c3\u30c4\u30a7\u30f3\u30ac\u30fc\u30b7\u30e5\u30bf\u30a4\u30f3\u300d\u304c\u63a1\u7528\u3055\u308c\u3001\u4ee5\u5f8c\u540c\u8a8c\u306b\u300c\u30aa\u30e0\u30ec\u30c3\u30c8\u4faf\u7235\u300d\u300c\u30a8\u30eb\u30b5\u30ec\u30e0\u306e\u7269\u8a9e\u300d\u300c\u606f\u306e\u55aa\u5931\u300d\u300c\u30d0\u30fc\u30b2\u30f3\u306e\u640d\u5931(\u306e\u3061\u300c\u30dc\u30f3\u30dc\u30f3\u300d\u3068\u3057\u3066\u6539\u7b46)\u300d\u304c\u63b2\u8f09\u30011833\u5e74\u304b\u3089\u306f\u300e\u30b5\u30bf\u30c7\u30fc\u30fb\u30f4\u30a3\u30b8\u30bf\u30fc\u300f\u8a8c\u306b\u8a69\u3084\u77ed\u6587\u3092\u63b2\u8f09\u3057\u305f\u3002\u3053\u306e\u9803\u3061\u3087\u3046\u3069\u540c\u300e\u30b5\u30bf\u30c7\u30fc\u30fb\u30f4\u30a3\u30b8\u30bf\u30fc\u300f\u8a8c\u304c\u77ed\u7de8\u3068\u8a69\u306e\u61f8\u8cde\u3092\u6253\u3061\u51fa\u3057\u305f\u305f\u3081\u3001\u30dd\u30fc\u306f\u300e\u30d5\u30a9\u30fc\u30ea\u30aa\u30fb\u30af\u30e9\u30d6\u7269\u8a9e\u300f\u3068\u540d\u3065\u3051\u305f\u77ed\u7de86\u7de8\u3068\u8a69\u3092\u6295\u7a3f\u3001\u3053\u306e\u3046\u3061\u77ed\u7de8\u300c\u58dc\u306e\u4e2d\u306e\u624b\u8a18\u300d\u304c\u6700\u512a\u79c0\u4f5c\u306b\u9078\u3070\u308c\u8cde\u91d150\u30c9\u30eb\u3092\u7372\u5f97\u3057\u305f\u3002\n\n\u3055\u3089\u306b\u30dd\u30fc\u306f\u3001\u3053\u306e\u3068\u304d\u5be9\u67fb\u54e1\u3092\u52d9\u3081\u3066\u3044\u305f\u30dc\u30eb\u30c6\u30a3\u30e2\u30a2\u306e\u8457\u540d\u306a\u653f\u6cbb\u5bb6\u3067\u3042\u308a\u4f5c\u5bb6\u3067\u3042\u3063\u305f\u3001\u30b8\u30e7\u30f3\u30fbP\u30fb\u30b1\u30cd\u30c7\u30a3\u3068\u89aa\u3057\u304f\u306a\u308a\u3001\u5f7c\u306e\u65a1\u65cb\u3067\u30ea\u30c3\u30c1\u30e2\u30f3\u30c9\u306e\u300e\u30b5\u30b6\u30f3\u30fb\u30ea\u30c6\u30e9\u30ea\u30fc\u30fb\u30e1\u30c3\u30bb\u30f3\u30b8\u30e3\u30fc\u300f\u8a8c\u306b\u4f5c\u54c1\u3092\u63b2\u8f09\u3059\u308b\u3088\u3046\u306b\u306a\u3063\u305f\u3002\u3055\u3089\u306b\u305d\u306e\u5f8c\u540c\u8a8c\u306e\u7de8\u96c6\u9577\u304c\u9000\u8077\u3059\u308b\u3068\u3001\u30b1\u30cd\u30c7\u30a3\u306e\u63a8\u85a6\u3067\u300e\u30e1\u30c3\u30bb\u30f3\u30b8\u30e3\u30fc\u300f\u8a8c\u306e\u4e3b\u7b46\u7de8\u96c6\u8005\u3068\u3057\u3066\u8fce\u3048\u3089\u308c\u308b\u3053\u3068\u306b\u306a\u3063\u305f\u3002\u3057\u304b\u3057\u3053\u306e\u9803\u3001\u30dd\u30fc\u306f\u307e\u3060\u5c11\u5973\u3067\u3042\u3063\u305f\u5f93\u59b9\u306e\u30f4\u30a1\u30fc\u30b8\u30cb\u30a2\u3078\u6c42\u5a5a\u3057\u3001\u305d\u308c\u3092\u53d4\u6bcd\u30de\u30e9\u30a4\u30a2\u306b\u62d2\u7d76\u3055\u308c\u3066\u3044\u305f\u3053\u3068\u304b\u3089\u98f2\u9152\u306e\u91cf\u304c\u5897\u3048\u308b\u306a\u3069\u3057\u3066\u5fc3\u60c5\u304c\u8352\u308c\u3066\u304a\u308a\u3001\u300e\u30e1\u30c3\u30bb\u30f3\u30b8\u30e3\u30fc\u300f\u8a8c\u306e\u8077\u3092\u77ed\u671f\u9593\u3067\u8f9e\u3057\u3066\u3057\u307e\u3063\u305f\u3002\u3057\u304b\u3057\u5ea6\u91cd\u306a\u308b\u30dd\u30fc\u306e\u8aac\u5f97\u306b\u30de\u30e9\u30a4\u30a2\u304c\u6298\u308c
\u30011833\u5e749\u6708\u306b\u30dc\u30eb\u30c6\u30a3\u30e2\u30a2\u306e\u90e1\u88c1\u5224\u6240\u304b\u3089\u7d50\u5a5a\u8a31\u53ef\u3092\u53d7\u3051\u305f\u3002\u5f53\u6642\u30dd\u30fc\u306f26\u6b73\u3001\u30f4\u30a1\u30fc\u30b8\u30cb\u30a2\u306f\u307e\u3060\u7d50\u5a5a\u4e0d\u53ef\u80fd\u306a13\u6b731\u304b\u6708\u3067\u3042\u3063\u305f\u304c\u3001\u7d50\u5a5a\u8a93\u7d04\u66f8\u306b\u306f21\u6b73\u3068\u8a18\u3055\u308c\u3066\u3044\u305f\u3002",
"question": "\u30dd\u30fc\u306f\u300c\u58dc\u306e\u4e2d\u306e\u624b\u8a18\u300d\u304c\u6700\u512a\u79c0\u4f5c\u306b\u9078\u3070\u308c\u308b\u3053\u3069\u3067\u8cde\u91d1\u3044\u304f\u3089\u3092\u7372\u5f97\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u307e\u3057\u305f\u304b\u3002",
"answers.text": [
"50\u30c9\u30eb"
],
"answers.answer_start": [
315
],
"feat_id": [
"tr-170-08-002"
],
"feat_title": [
"\u30a8\u30c9\u30ac\u30fc\u30fb\u30a2\u30e9\u30f3\u30fb\u30dd\u30fc"
],
"feat_question_type": [
"Syntactic variation"
],
"feat_answers.answer_type": [
[
"Object"
]
]
},
{
"context": "\u56fd\u969b\u9023\u5408\u98df\u7ce7\u8fb2\u696d\u6a5f\u95a2(FAO)\u306e\u7d71\u8a08\u306b\u3088\u308c\u3070\u30011950\u5e74\u4ee3\u306b\u306f10\u4e07\u30c8\u30f3\u4f59\u308a\u3067\u3042\u3063\u305f\u4e16\u754c\u306e\u30ca\u30de\u30ba\u76ee\u9b5a\u985e\u306e\u7dcf\u6f01\u7372\u91cf\u306f\u5e74\u3005\u5897\u52a0\u3057\u30011990\u5e74\u4ee3\u5f8c\u534a\u306b\u306f100\u4e07\u30c8\u30f3\u3092\u8d85\u3048\u305f\u3002\n2000\u5e74\u4ee3\u4ee5\u964d\u3082\u5897\u52a0\u306e\u52e2\u3044\u306f\u8870\u3048\u305a\u30012000\u5e74\u306b120\u4e07\u30c8\u30f3\u3060\u3063\u305f\u4e16\u754c\u306e\u7dcf\u6f01\u7372\u91cf\u306f\u30012006\u5e74\u306e\u6642\u70b9\u3067\u500d\u4ee5\u4e0a\u306e260\u4e07\u30c8\u30f3\u306b\u9054\u3057\u3066\u3044\u308b\u3002\n\u5730\u57df\u5225\u306b\u898b\u308b\u3068\u30a2\u30b8\u30a2\u30fb\u30a2\u30d5\u30ea\u30ab\u5730\u57df\u3067\u306e\u4f38\u3073\u304c\u9855\u8457\u3067\u3001\u7279\u306b\u30a2\u30b8\u30a2\u3067\u306f2000\u301c2006\u5e74\u306b\u304b\u3051\u3066\u7d043\u500d\u306e\u5897\u52a0(60\u4e07\u30c8\u30f3\u2192180\u4e07\u30c8\u30f3)\u3092\u8a18\u9332\u3057\u3066\u3044\u308b\u3002\n\u540c\u3058\u671f\u9593\u306b\u304a\u3044\u3066\u3001\u5357\u5317\u30a2\u30e1\u30ea\u30ab\u3067\u306f40\u4e07\u30c8\u30f3\u53f0\u3001\u30e8\u30fc\u30ed\u30c3\u30d1\u3067\u306f1\u4e07\u30c8\u30f3\u53f0\u3067\u5927\u304d\u306a\u5909\u52d5\u3082\u306a\u304f\u63a8\u79fb\u3057\u3066\u304a\u308a\u3001\u8fd1\u5e74\u306e\u30a2\u30b8\u30a2\u5730\u57df\u306e\u4f38\u3073\u304c\u7a81\u51fa\u3057\u3066\u3044\u308b\u3053\u3068\u304c\u308f\u304b\u308b\u3002",
"question": "\u4e16\u754c\u306e\u30ca\u30de\u30ba\u76ee\u9b5a\u985e\u306e\u7dcf\u6f01\u7372\u91cf\u304c\u591a\u304b\u3063\u305f\u306e\u306f1950\u5e74\u4ee3\u30681990\u5e74\u4ee3\u5f8c\u534a\u306e\u3069\u3061\u3089\u3067\u3057\u305f\u304b?",
"answers.text": [
"1990\u5e74\u4ee3\u5f8c\u534a"
],
"answers.answer_start": [
63
],
"feat_id": [
"tr-419-18-000"
],
"feat_title": [
"\u30ca\u30de\u30ba\u76ee"
],
"feat_question_type": [
"Logical reasoning"
],
"feat_answers.answer_type": [
[
"Date/Time"
]
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)",
"feat_id": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_title": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_question_type": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_answers.answer_type": "Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 25396 |
| valid | 10289 |
|
ifmain/text-moderation-02-large | ifmain | "2024-06-27T08:13:38Z" | 2 | 5 | [
"task_categories:text-classification",
"language:en",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | "2024-05-22T20:02:03Z" | ---
task_categories:
- text-classification
language:
- en
size_categories:
- 100K<n<1M
---
This dataset is based on https://www.kaggle.com/code/danofer/reddit-comments-scores-nlp/
The moderation dataset includes only 410 thousand rows: 67% negative and 33% positive comments. |
NikitaLitvinenko/merged_instruct_refactor2 | NikitaLitvinenko | "2024-09-20T09:07:38Z" | 2 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-29T09:31:47Z" | ---
dataset_info:
features:
- name: text
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 5261161528.085671
num_examples: 3604064
- name: test
num_bytes: 584574151.9143287
num_examples: 400452
download_size: 3092983764
dataset_size: 5845735680.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ifmain/text-moderation-02-multilingual | ifmain | "2024-10-13T13:50:00Z" | 2 | 0 | [
"language:en",
"language:de",
"language:fr",
"language:es",
"language:it",
"language:sv",
"language:fi",
"language:pl",
"language:cs",
"language:lv",
"language:zh",
"language:ja",
"language:ko",
"language:ru",
"language:uk",
"language:be",
"language:kk",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-13T11:37:25Z" | ---
license: apache-2.0
datasets:
- ifmain/text-moderation-410K
language:
- en
- de
- fr
- es
- it
- sv
- fi
- pl
- cs
- lv
- zh
- ja
- ko
- ru
- uk
- be
- kk
---
This dataset is based on [Kaggle](https://www.kaggle.com/code/danofer/reddit-comments-scores-nlp/).
It represents a version of [@ifmain/text-moderation-410K](https://huggingface.co/datasets/ifmain/text-moderation-410K) that has been cleansed of semantically similar values and normalized to a 50/50 ratio of negative and neutral entries.
The dataset contains 1.5M entries (91K * 17 languages).
Before use, augmentation is recommended! (e.g., character substitution to bypass moderation).
For augmentation, you can use [@ifmain/StringAugmentor](https://github.com/ifmain/StringAugmentor).
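As a rough illustration of such augmentation, here is a minimal character-substitution sketch; the homoglyph table below is invented for the example and is not StringAugmentor's actual mapping:

```python
import random

# Naive augmentation: randomly swap some Latin letters for Cyrillic
# look-alikes, simulating a "moderation bypass" variant of the text.
HOMOGLYPHS = {"a": "а", "e": "е", "o": "о", "c": "с"}  # hypothetical map

def augment(text: str, p: float = 0.3) -> str:
    return "".join(
        HOMOGLYPHS[ch] if ch in HOMOGLYPHS and random.random() < p else ch
        for ch in text
    )

print(augment("a sample comment to augment"))
```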
Enjoy using it! |
ifmain/search_in_text-01 | ifmain | "2024-11-14T11:51:17Z" | 2 | 1 | [
"task_categories:feature-extraction",
"language:ru",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"feature-extraction"
] | "2024-11-14T11:44:41Z" | ---
license: apache-2.0
task_categories:
- feature-extraction
language:
- ru
pretty_name: Search in text
size_categories:
- 1K<n<10K
---
This is a simple dataset generated by GPT-4o mini for accurate text search in Russian.
Format:
- Story: a story between 500 and 1500 words
- qa (list):
  - q: a question whose answer appears as a quote in the text (if there is no answer, it is left blank)
  - a: the answer to the question; if available it is a quote, otherwise the field is left blank
  - reply: the starting and ending character positions of the quote (for accuracy checking); if the answer is blank, reply is "from 0 to 0".
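For illustration, a hypothetical record in this format (all values invented) might look like the following sketch:

```python
# Hypothetical record in the assumed layout; real stories are 500-1500 words.
record = {
    "story": "Жил-был кот...",
    "qa": [
        {"q": "Кто жил-был?", "a": "кот", "reply": "from 8 to 11"},
        {"q": "Какого цвета дом?", "a": "", "reply": "from 0 to 0"},  # no answer
    ],
}
# 'reply' encodes character offsets of the quote, so answers can be verified:
assert record["story"][8:11] == record["qa"][0]["a"]
```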
The dataset can be used in search models for files and websites. |
SKNahin/BanglaQwen-Train-Corpus | SKNahin | "2024-11-19T01:07:00Z" | 2 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-18T21:39:01Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
- name: first_25k
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 378365114751
num_examples: 58063004
download_size: 164315825007
dataset_size: 378365114751
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NusaAksara/OCRData | NusaAksara | "2025-01-23T08:41:45Z" | 2 | 3 | [
"license:cc-by-nc-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-28T21:39:51Z" | ---
license: cc-by-nc-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image_path
dtype: string
- name: transcription
dtype: string
- name: transliteration
dtype: string
- name: translation
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 1327785
num_examples: 6434
download_size: 494748
dataset_size: 1327785
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
NusaAksara/Image-to-Segmentation | NusaAksara | "2024-12-19T05:44:47Z" | 2 | 0 | [
"license:cc-by-nc-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-19T03:33:50Z" | ---
license: cc-by-nc-4.0
configs:
- config_name: image_data
data_files:
- split: train
path: image_data/train-*
- config_name: segmentation_data
data_files:
- split: train
path: segmentation_data/train-*
dataset_info:
- config_name: image_data
features:
- name: image_id
dtype: string
- name: image_url
dtype: string
- name: height
dtype: int64
- name: width
dtype: int64
- name: language
dtype: string
splits:
- name: train
num_bytes: 46316
num_examples: 359
download_size: 8934
dataset_size: 46316
- config_name: segmentation_data
features:
- name: image_id
dtype: string
- name: segmentation_id
dtype: int64
- name: segmentation_information
sequence:
sequence: float64
splits:
- name: train
num_bytes: 691579
num_examples: 7516
download_size: 283289
dataset_size: 691579
---
# Segmentation Data Subset
- `image_id`: Refer to `image_id` on image_data subset
- `segmentation_id`: Segmentation identifier
- `segmentation_information`: COCO Format annotations: [[x1, y1, x2, y2, x3, y3, x4, y4]]
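A minimal sketch of unpacking one such flat annotation into vertex pairs (coordinate values invented):

```python
# One flat COCO-style polygon [x1, y1, x2, y2, ...] -> (x, y) vertex pairs.
polygon = [10.0, 12.5, 40.0, 12.5, 40.0, 55.0, 10.0, 55.0]  # hypothetical values
points = list(zip(polygon[0::2], polygon[1::2]))
print(points)  # [(10.0, 12.5), (40.0, 12.5), (40.0, 55.0), (10.0, 55.0)]
```
|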
MMInstruction/Video-T3-QA | MMInstruction | "2025-02-24T15:22:37Z" | 2 | 1 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | [
"question-answering"
] | "2024-12-23T02:31:27Z" | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 100K<n<1M
---
Textual Temporal Understanding Dataset
Temporal Reasoning Transfer from Text to Video, ICLR 2025
Project Page: https://video-t3.github.io/
In each json file, we provide LLaVA-style text QA samples, using the synthesis method described in our paper.
For example:
```json
[
{
"from": "human",
"value": "Based on the following captions describing keyframes of a video, answer the next question.\n\nCaptions:\nThe image displays a circular emblem with a metallic appearance, conveying a sense of authority and power, suggesting it could be a seal or a logo.\nThe image displays a circular emblem with a metallic appearance, conveying a sense of elegance and sophistication, suggesting it could be a seal or a logo.\nQuestion: How does the conveyed sense of the emblem change in the video?\n(A) from elegance and sophistication to authority and power\n(B) from simplicity and modernity to complexity and tradition\n(C) from authority and power to elegance and sophistication\n(D) from complexity and tradition to simplicity and modernity\n\nProvide only the top choice:\n"
},
{
"from": "gpt",
"value": "(C) from authority and power to elegance and sophistication"
}
]
```
You can adapt the samples to your training codebase to enhance the temporal understanding ability of Video-LLMs.
Mixing the dataset with other image-text SFT samples would help mitigate potential forgetting issues.
The number of samples could be easily scaled up following the method described in Sec. 3 of the paper.
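As a minimal sketch, the samples can be read into prompt/completion pairs for SFT; the file name and the top-level layout here are assumptions rather than details confirmed by the card:

```python
import json

# Placeholder file name; real names follow the mapping below (e.g. order_train).
# Assumes the file holds a list of [human, gpt] turn lists like the example above.
with open("order_train.json", encoding="utf-8") as f:
    samples = json.load(f)

pairs = [
    (turns[0]["value"], turns[1]["value"])
    for turns in samples
    if len(turns) >= 2 and turns[0]["from"] == "human"
]
```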
| Dataset | #Relevant Captions | #Distractor Captions | Description |
|---------|-------------------|---------------------|-------------|
| Order-GPT (N×) | 2~4 | N × 100 ± 50, N ∈ {1, 2, 4, 8} | Order-related questions generated by GPT-4. |
| Attribute (N×) | 2 | N × 100 ± 50, N ∈ {1, 2, 4, 8} | Attribute-related questions. |
| Order-Template (X) | 3~6 | 200±50 | Order-related questions based on templates X |
| Referring | 3 | 200±50 | Temporal referring questions. |
| Grounding | 3 | 200±50 | Temporal grounding questions. |
Mapping between the datasets above and their json files:
- Order-GPT: `order_train`
- Attribute: `attribute_train`
- Order-Template: `shuffle_phrase`, `shuffle_sentence`, `shuffle_prefix`
- Referring: `refer_begin_end_temp2any`
- Grounding: `refer_begin_end_any2temp`
## Citation
If you found this dataset to be helpful, please kindly cite our paper:
```bibtex
@inproceedings{li2025videot3,
author={Li, Lei and Liu, Yuanxin and Yao, Linli and Zhang, Peiyuan and An, Chenxin and Wang, Lean and Sun, Xu and Kong, Lingpeng and Liu, Qi},
title={Temporal Reasoning Transfer from Text to Video},
booktitle = {ICLR 2025},
publisher = {OpenReview.net},
year = {2025},
url = {https://openreview.net/forum?id=sHAvMp5J4R}
}
```
|
Vikhrmodels/librispeech_ru_quantized-wav-unify | Vikhrmodels | "2025-01-02T23:44:34Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-01-02T23:44:26Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: audio_tokens
sequence:
sequence: int64
splits:
- name: train
num_bytes: 115521510
num_examples: 54472
- name: validation
num_bytes: 3471081
num_examples: 1400
download_size: 26204194
dataset_size: 118992591
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
WinkingFace/CryptoLM-Ripple-XRP-USDT | WinkingFace | "2025-02-26T11:21:24Z" | 2 | 0 | [
"license:mit",
"region:us",
"finance",
"crypto",
"XRP",
"Ripple"
] | null | "2025-01-09T20:13:03Z" | ---
tags:
- finance
- crypto
- XRP
- Ripple
pretty_name: XRP/USDT
license: mit
---
# XRP Price Dataset with Technical Indicators
Welcome to the XRP / USDT Price Dataset with Technical Indicators, hosted by the WinkingFace Team. This dataset is designed to provide comprehensive historical data on XRP prices along with a variety of technical indicators to aid in cryptocurrency trading analysis and research. The dataset is updated every 3 minutes (delayed 1 minute).
## Dataset Description
This dataset includes the following columns:
- **timestamp**: The date and time of the data point in UTC (Coordinated Universal Time). This is a standard time reference that does not change with seasons or time zones.
- **open**: The opening price of XRP at the given timestamp.
- **high**: The highest price of XRP during the period.
- **low**: The lowest price of XRP during the period.
- **close**: The closing price of XRP at the given timestamp.
- **volume**: The trading volume of XRP during the period.
- **MA_20**: 20-period moving average.
- **MA_50**: 50-period moving average.
- **MA_200**: 200-period moving average.
- **RSI**: Relative Strength Index.
- **%K**: Stochastic Oscillator %K.
- **%D**: Stochastic Oscillator %D.
- **ADX**: Average Directional Index.
- **ATR**: Average True Range.
- **Trendline**: Calculated trendline value.
- **MACD**: Moving Average Convergence Divergence.
- **Signal**: Signal line for MACD.
- **Histogram**: MACD histogram.
- **BL_Upper**: Bollinger Bands Upper.
- **BL_Lower**: Bollinger Bands Lower.
- **MN_Upper**: Minopy Bands Upper.
- **MN_Lower**: Minopy Bands Lower.
## Usage
This dataset can be used for:
- Developing and testing cryptocurrency trading bots.
- Performing technical analysis on XRP price movements.
- Researching the effectiveness of various technical indicators.
- Training AI models for predictive analytics in cryptocurrency markets.
- Building machine learning models to forecast XRP price trends.
- Enhancing algorithmic trading strategies with historical data.
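As a minimal analysis sketch (this assumes the repo's data files load through `datasets` with a `train` split; column names follow the list above):
```python
from datasets import load_dataset

df = load_dataset("WinkingFace/CryptoLM-Ripple-XRP-USDT", split="train").to_pandas()

# Flag rows where MACD crosses above its signal line (a common bullish signal).
crossed_up = (df["MACD"] > df["Signal"]) & (df["MACD"].shift(1) <= df["Signal"].shift(1))
print(df.loc[crossed_up, ["timestamp", "close", "MACD", "Signal"]].tail())
```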
## Important Note
This dataset is provided for educational and research purposes only. It is not intended as financial advice. Please conduct your own research and consult with a financial advisor before making any investment decisions.
## Donate
If you find this dataset useful, please consider donating to support our continued development.
- **Bitcoin**: `bc1pcl6pj5k8t04nhhtrq0f5q4ya82kmldw8r6dzdw45uux5hanrkefswjp29r`
- **Ethereum**: `0xdc2ef164f5de92acb51fac2cb9ca1fbc43ab6991`
- **USDT**: `TDGMU3fJKmbTVRdGg8d9a7xud3uMKpFEe4`
- **USDC**: `Bi6DMiGm5YLXv5av87P8m1rUyKtXXrBwMbXnJRT8UQEA`
- **BNB**: `0xdc2ef164f5de92acb51fac2cb9ca1fbc43ab6991`
- **SOL**: `Bi6DMiGm5YLXv5av87P8m1rUyKtXXrBwMbXnJRT8UQEA`
- **TON**: `UQDr_VdpmXw2Wc2jX8FTFhhuyIFueremA5G78KzWhoQW9mOR`
- **TRX**: `TDGMU3fJKmbTVRdGg8d9a7xud3uMKpFEe4`
- **SUI**: `0x83d33b3af1f421deba5ceaa43b3e14cbe5f2169c7a684592f4a5df2e4382230f`
- **DOGE**: `DAe4LN3vYQmTHTrThRhzfZcEMCEBaxvAaH`
## Contributing
We welcome contributions to improve this dataset. Please feel free to open issues or submit pull requests.
## Contact
For any questions or inquiries, feel free to [contact us here 📨](mailto:contact@winkingfacehub.com). |
CNTXTAI0/arabic_dialects_question_and_answer | CNTXTAI0 | "2025-01-31T10:41:40Z" | 2 | 4 | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:ar",
"language:en",
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"arabic",
"arabicdialicts",
"msa",
"Q&A",
"STEM",
"math"
] | [
"question-answering",
"text-generation",
"text2text-generation"
] | "2025-01-31T09:59:29Z" | ---
license: mit
task_categories:
- question-answering
- text-generation
- text2text-generation
language:
- ar
- en
tags:
- arabic
- arabicdialicts
- msa
- Q&A
- STEM
- math
pretty_name: Build your arabic model with arabic data
size_categories:
- 100K<n<1M
---
Data Content
The file provided, the Q/A Reasoning dataset, contains the following columns:
1. ID #: Denotes the reference ID for:
   a. Question
   b. Answer to the question
   c. Hint
   d. Reasoning
   e. Word count for items a to d above
2. Dialects: Contains the following dialects in separate columns:
   a. English
   b. MSA
   c. Emirati
   d. Egyptian
   e. Levantine Syria
   f. Levantine Jordan
   g. Levantine Palestine
   h. Levantine Lebanon
Data Generation Process
The following are the steps that were followed to curate the data:
1. A Question and its answer is generated in English.
2. The Hint and Reasoning for the Question and Answer are provided in subsequent rows.
3. The Word Count row is populated with the equivalent word counts in the following format (a small sketch of this computation follows the list):
   a. Word Count for Question & Answer - sums up the total words in the Question and the Answer
   b. Word Count for Hint & Reasoning - sums up the total count of words in the hint and reasoning
   c. Total Word Count - sums up the total words in categories a & b above.
4. Steps 1-3 are repeated across all Arabic dialects - MSA, Emirati, Egyptian, Levantine Syria, Levantine Jordan, Levantine Palestine, Levantine Lebanon.
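The sketch below illustrates the word-count convention; whitespace tokenization is an assumption here, and the curators may count Arabic tokens differently:
```python
def word_count(*texts: str) -> int:
    # Assumption: words are whitespace-separated tokens.
    return sum(len(t.split()) for t in texts)

question, answer = "What is 2 + 2?", "4"
hint, reasoning = "Add the two numbers.", "2 plus 2 equals 4."

qa_words = word_count(question, answer)  # Word Count for Question & Answer
hr_words = word_count(hint, reasoning)   # Word Count for Hint & Reasoning
total_words = qa_words + hr_words        # Total Word Count
print(qa_words, hr_words, total_words)
```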
Data Review Process
CNTXT employs thorough review steps to ensure the highest quality of data. The following quality checks are conducted so that the output data meets the highest standards:
Once the data generation process concludes, the review process starts. The reviewers are organized in two layers:
1. Review Layer 1: The first set of reviewers checks sentence coherence, grammatical correctness, and the accuracy of the answers provided. If the QA is coherent and correct, Review Layer 1 passes it on to Review Layer 2; otherwise, the QAs are submitted back to the annotators for regeneration.
2. Review Layer 2: This layer checks the correctness of the hint and reasoning sections as well as the word-count accuracy. If these elements are all correct, the item under review is considered ready for submission to the customer; otherwise, the reviewer edits the item to ensure the accuracy of these elements and submits comments on the items corrected.
The diagram below shows the steps described above:


Total Questions: 800
Total Answers: 800
Total Hints: 800
Total Reasoning: 800
Total Question & Answer Word Count: 65,765
Total Hint & Reasoning Word Count: 40,483
Total Word Count: 106,248
NusaAksara/NusaAksara | NusaAksara | "2025-02-14T15:37:23Z" | 2 | 2 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-10T10:24:23Z" | ---
dataset_info:
- config_name: Image Segmentation
features:
- name: image_id
dtype: string
- name: image_url
dtype: string
- name: height
dtype: int64
- name: width
dtype: int64
- name: language
dtype: string
- name: segmentation_id
dtype: int64
- name: segmentation_information
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1571595
num_examples: 7516
download_size: 301685
dataset_size: 1571595
- config_name: Image Transcription (OCR)
features:
- name: image
dtype: string
- name: transcription
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 898838
num_examples: 6265
download_size: 223032
dataset_size: 898838
- config_name: Image Translation
features:
- name: image
dtype: string
- name: translation
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 745523
num_examples: 6265
download_size: 186798
dataset_size: 745523
- config_name: Image Transliteration
features:
- name: image
dtype: string
- name: transliteration
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 745746
num_examples: 6265
download_size: 191620
dataset_size: 745746
- config_name: Transcription LID
features:
- name: transcription
dtype: string
- name: language_label
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 424421
num_examples: 5556
download_size: 158519
dataset_size: 424421
- config_name: Transcription Translation
features:
- name: transcription
dtype: string
- name: translation
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 568166
num_examples: 5556
download_size: 275204
dataset_size: 568166
- config_name: Transcription Transliteration
features:
- name: transcription
dtype: string
- name: transliteration
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 584686
num_examples: 6265
download_size: 288686
dataset_size: 584686
- config_name: Transliteration LID
features:
- name: transliteration
dtype: string
- name: language_label
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 272877
num_examples: 5556
download_size: 127475
dataset_size: 272877
- config_name: Transliteration Translation
features:
- name: transliteration
dtype: string
- name: translation
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 416622
num_examples: 5556
download_size: 244160
dataset_size: 416622
- config_name: default
features:
- name: image
dtype: string
- name: transcription
dtype: string
- name: transliteration
dtype: string
- name: translation
dtype: string
- name: language_label
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 1375362
num_examples: 6433
download_size: 500093
dataset_size: 1375362
configs:
- config_name: Image Segmentation
data_files:
- split: train
path: Image Segmentation/train-*
- config_name: Image Transcription (OCR)
data_files:
- split: train
path: Image Transcription (OCR)/train-*
- config_name: Image Translation
data_files:
- split: train
path: Image Translation/train-*
- config_name: Image Transliteration
data_files:
- split: train
path: Image Transliteration/train-*
- config_name: Transcription LID
data_files:
- split: train
path: Transcription LID/train-*
- config_name: Transcription Translation
data_files:
- split: train
path: Transcription Translation/train-*
- config_name: Transcription Transliteration
data_files:
- split: train
path: Transcription Transliteration/train-*
- config_name: Transliteration LID
data_files:
- split: train
path: Transliteration LID/train-*
- config_name: Transliteration Translation
data_files:
- split: train
path: Transliteration Translation/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
---
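A minimal usage sketch (assuming the config names in the YAML above load as written):
```python
# Each task is exposed as its own config; names come from the YAML above.
from datasets import load_dataset

ocr = load_dataset("NusaAksara/NusaAksara", "Image Transcription (OCR)", split="train")
print(ocr[0]["script"], ocr[0]["transcription"])
```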
|
DrewLab/hu.MAP_3.0 | DrewLab | "2025-02-25T22:34:47Z" | 2 | 0 | [
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"biology",
"PPIs"
] | null | "2025-02-12T23:12:52Z" | ---
license: cc-by-4.0
tags:
- biology
- PPIs
pretty_name: >-
hu.MAP3.0: Atlas of human protein complexes by integration of > 25,000
proteomic experiments.
repo: https://github.com/KDrewLab/huMAP3.0_analysis
---
# hu.MAP3.0: Atlas of human protein complexes by integration of > 25,000 proteomic experiments.
Proteins interact with each other and organize themselves into macromolecular machines (i.e. complexes)
to carry out essential functions of the cell. We have a good understanding of a few complexes such as
the proteasome and the ribosome but currently we have an incomplete view of all protein complexes as
well as their functions. The hu.MAP attempts to address this lack of understanding by integrating several
large scale protein interaction datasets to obtain the most comprehensive view of protein complexes.
In hu.MAP 3.0 we integrated large scale affinity purification mass spectrometry (AP/MS) datasets from Bioplex,
Bioplex2.0, Bioplex3.0, Boldt et al. and Hein et al., large scale biochemical fractionation data (Wan et al.),
proximity labeling data (Gupta et al., Youn et al.), and RNA hairpin pulldown data (Treiber et al.) to produce
a complex map with over 15k complexes.
## Funding
NIH R00, NSF/BBSRC
## Citation
Samantha N. Fischer, Erin R Claussen, Savvas Kourtis, Sara Sdelci, Sandra Orchard, Henning Hermjakob, Georg Kustatscher, Kevin Drew hu.MAP3.0: Atlas of human protein complexes by integration of > 25,000 proteomic experiments BioRxiv https://doi.org/10.1101/2024.10.11.617930
## References
Kevin Drew, John B. Wallingford, Edward M. Marcotte hu.MAP 2.0: integration of over 15,000 proteomic experiments builds a global compendium of human multiprotein assemblies Mol Syst Biol (2021)17:e10016https://doi.org/10.15252/msb.202010016
Kevin Drew, Chanjae Lee, Ryan L Huizar, Fan Tu, Blake Borgeson, Claire D McWhite, Yun Ma, John B Wallingford, Edward M Marcotte Integration of over 9,000 mass spectrometry experiments builds a global map of human protein complexes. Molecular Systems Biology (2017) 13, 932. DOI 10.15252/msb.20167490
Huttlin et al. Dual proteome-scale networks reveal cell-specific remodeling of the human interactome Cell. 2021 May 27;184(11):3022-3040.e28. doi: 10.1016/j.cell.2021.04.011.
Huttlin et al. Architecture of the human interactome defines protein communities and disease networks. Nature. 2017 May 25;545(7655):505-509. DOI: 10.1038/nature22366.
Treiber et al. A Compendium of RNA-Binding Proteins that Regulate MicroRNA Biogenesis.. Mol Cell. 2017 Apr 20;66(2):270-284.e13. doi: 10.1016/j.molcel.2017.03.014.
Boldt et al. An organelle-specific protein landscape identifies novel diseases and molecular mechanisms. Nat Commun. 2016 May 13;7:11491. doi: 10.1038/ncomms11491.
Youn et al. High-Density Proximity Mapping Reveals the Subcellular Organization of mRNA-Associated Granules and Bodies. Mol Cell. 2018 Feb 1;69(3):517-532.e11. doi: 10.1016/j.molcel.2017.12.020.
Gupta et al. A Dynamic Protein Interaction Landscape of the Human Centrosome-Cilium Interface. Cell. 2015 Dec 3;163(6):1484-99. doi: 10.1016/j.cell.2015.10.065.
Wan, Borgeson et al. Panorama of ancient metazoan macromolecular complexes. Nature. 2015 Sep 17;525(7569):339-44. doi: 10.1038/nature14877. Epub 2015 Sep 7.
Hein et al. A human interactome in three quantitative dimensions organized by stoichiometries and abundances. Cell. 2015 Oct 22;163(3):712-23. doi: 10.1016/j.cell.2015.09.053. Epub 2015 Oct 22.
Huttlin et al. The BioPlex Network: A Systematic Exploration of the Human Interactome. Cell. 2015 Jul 16;162(2):425-40. doi: 10.1016/j.cell.2015.06.043.
Reimand et al. g:Profiler-a web server for functional interpretation of gene lists (2016 update). Nucleic Acids Res. 2016 Jul 8;44(W1):W83-9. doi: 10.1093/nar/gkw199.
## Associated code
Code examples using the [hu.MAP 3.0 model](https://huggingface.co/sfisch/hu.MAP3.0_AutoGluon) and downstream analysis can be found on our
[GitHub](https://github.com/KDrewLab/huMAP3.0_analysis)
# Usage
## Accessing the model
hu.MAP 3.0 was built using the auto-ML tool [AutoGluon](https://auto.gluon.ai/stable/index.html) and the [TabularPredictor](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html)
module is used to train, test, and make predictions with the model.
This can be downloaded using the following:

```bash
pip install autogluon==0.4.0
```

Then it can be imported as:

```python
from autogluon.tabular import TabularPredictor
```

Note that the **0.4.0 version** must be used to perform operations with our model.

Our [trained model](https://huggingface.co/sfisch/hu.MAP3.0_AutoGluon) can be downloaded through Huggingface using [huggingface_hub](https://huggingface.co/docs/hub/index):

```python
from huggingface_hub import snapshot_download

model_dir = snapshot_download(repo_id="sfisch/hu.MAP3.0_AutoGluon")
predictor = TabularPredictor.load(f"{model_dir}/huMAP3_20230503_complexportal_subset10kNEG_notScaled_accuracy")
```
## Using the training and test data
Both the train and test feature matrices can be loaded using the Huggingface [datasets](https://huggingface.co/docs/datasets/index) library.
This can be done from the command line using:

```bash
pip install datasets
```

When loading into Python, use the following:

```python
from datasets import load_dataset

dataset = load_dataset('sfisch/hu.MAP3.0')
```

Training and test feature matrices can then be accessed as separate objects:

```python
train = dataset["train"].to_pandas()
test = dataset["test"].to_pandas()
```
Jupyter notebooks containing more in-depth examples of model training, testing, and generating predictions can be found on our [GitHub](https://github.com/KDrewLab/huMAP3.0_analysis/huMAP3.0_model_devel)
## Accessing full feature matrix and all test/train interaction/complex files
All other files, such as the full feature matrix, can be accessed via huggingface_hub:

```python
from huggingface_hub import hf_hub_download

full_file = hf_hub_download(repo_id="sfisch/hu.MAP3.0", filename='full/humap3_full_feature_matrix_20220625.csv.gz', repo_type='dataset')
```

This just provides the file for download. If you wish to use it as a pandas DataFrame, for example:

```python
import pandas as pd

full_featmat = pd.read_csv(full_file, compression="gzip")
```
The other complex/interaction files can be downloaded in the same manner. The files within the 'reference_interactions' directory
contain the complexes split from [Complex Portal](https://www.ebi.ac.uk/complexportal) into test and training sets. Within that directory you
will also find the pairwise protein interactions that were used as positive and negative interactions for both the test and training sets.
## Dataset card authors
Samantha Fischer (sfisch6@uic.edu) |
constantinedivis/test_ds_sig | constantinedivis | "2025-02-17T13:07:14Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-17T12:56:07Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: "*.parquet"
--- |
Geralt-Targaryen/C4-Advertisements | Geralt-Targaryen | "2025-02-25T11:03:04Z" | 2 | 0 | [
"license:odc-by",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T08:22:12Z" | ---
license: odc-by
---
35,916,631 advertisements from [C4](https://huggingface.co/datasets/Geralt-Targaryen/C4), filtered by a RoBERTa classifier trained on 550K annotations by Qwen2.5-32B-Instruct. |
jjeccles/3B-Instruct-DocHead-Concatenated02-25 | jjeccles | "2025-02-25T10:44:26Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T10:44:18Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 126293567
num_examples: 14546
download_size: 20850311
dataset_size: 126293567
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jjeccles/3B-Instruct-DocHead-OneToOneMix02-25 | jjeccles | "2025-02-25T10:48:56Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T10:48:53Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 24449517.714285713
num_examples: 2816
download_size: 5049827
dataset_size: 24449517.714285713
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tttx/3k-trash-ttt-022225-step1-collated | tttx | "2025-02-25T11:05:15Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T11:05:12Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 8910736.0
num_examples: 400
- name: test
num_bytes: 21389
num_examples: 1
download_size: 2467668
dataset_size: 8932125.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Swati-sd/ionization | Swati-sd | "2025-02-25T11:07:50Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T11:07:47Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: image
dtype: image
- name: mask_0
dtype: image
splits:
- name: train
num_bytes: 33779636.0
num_examples: 200
- name: test
num_bytes: 4835151.0
num_examples: 60
download_size: 38558074
dataset_size: 38614787.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
cchoi1/humaneval-datagen-run-1_best_att_50_sol_50_20250224_220437 | cchoi1 | "2025-02-25T11:24:48Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T11:24:45Z" | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_attack
dtype: string
- name: chosen_attack_explanation
dtype: string
- name: chosen_solution
dtype: string
- name: chosen_solution_explanation
dtype: string
- name: chosen_solve_rate
dtype: float64
- name: rejected_attack
dtype: string
- name: rejected_attack_explanation
dtype: string
- name: rejected_solution
dtype: string
- name: rejected_solution_explanation
dtype: string
- name: rejected_solve_rate
dtype: float64
splits:
- name: train
num_bytes: 5224342
num_examples: 2136
download_size: 283323
dataset_size: 5224342
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
okan11111/cebir2 | okan11111 | "2025-02-25T11:25:57Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T11:25:44Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 6369729.768269773
num_examples: 19162
- name: test
num_bytes: 708043.2317302274
num_examples: 2130
download_size: 3248865
dataset_size: 7077773.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
korbih/aguvis_1000_sft_1024_train | korbih | "2025-02-25T11:28:29Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T11:26:50Z" | ---
dataset_info:
features:
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
- name: images
sequence: image
- name: image_name
dtype: string
- name: base_uid
dtype: string
- name: step
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 528171031.41
num_examples: 6895
download_size: 468557736
dataset_size: 528171031.41
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
raahulrahl/distilabel-example | raahulrahl | "2025-02-25T11:29:30Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T11:29:28Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: generation
dtype: 'null'
- name: model_name
dtype: 'null'
- name: distilabel_metadata
struct:
- name: raw_input_text_generation_0
dtype: 'null'
- name: raw_output_text_generation_0
dtype: 'null'
splits:
- name: train
num_bytes: 3872
num_examples: 10
download_size: 5449
dataset_size: 3872
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tttx/3k-trash-ttt-022225-step2-collated | tttx | "2025-02-25T11:50:12Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T11:50:09Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 8965047.111111112
num_examples: 400
- name: test
num_bytes: 22134
num_examples: 1
download_size: 2422122
dataset_size: 8987181.111111112
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
nmuendler/SWT-bench_Verified_bm25_27k_zsb | nmuendler | "2025-02-25T12:29:47Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.12952",
"arxiv:2310.06770",
"region:us"
] | null | "2025-02-25T12:08:48Z" | ---
dataset_info:
features:
- name: repo
dtype: string
- name: instance_id
dtype: string
- name: base_commit
dtype: string
- name: patch
dtype: string
- name: test_patch
dtype: string
- name: problem_statement
dtype: string
- name: hints_text
dtype: string
- name: created_at
dtype: string
- name: version
dtype: string
- name: FAIL_TO_PASS
dtype: string
- name: PASS_TO_PASS
dtype: string
- name: environment_setup_commit
dtype: string
- name: difficulty
dtype: string
- name: hits
list:
- name: docid
dtype: string
- name: score
dtype: float64
- name: text
dtype: string
splits:
- name: test
num_bytes: 50867908
num_examples: 433
download_size: 21631632
dataset_size: 50867908
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
### Dataset Summary
SWT-bench *Verified* is a _subset_ of [SWT-bench](https://huggingface.co/datasets/nmuendler/SWT-bench_bm25_27k_zsb), a dataset that tests systems’ ability to reproduce GitHub issues automatically. The dataset collects 433 test Issue-Pull Request pairs from 11 popular Python GitHub projects. Evaluation is performed by unit test verification using pre- and post-PR behavior of the test suite with and without the model-proposed tests.
#### 📊🏆 Leaderboard
A public leaderboard for performance on SWT-bench is hosted at [swtbench.com](https://swtbench.com).
The dataset is released as part of the paper [SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents](https://arxiv.org/abs/2406.12952).
#### 🔎 Details
This dataset `SWT-bench_Verified_bm25_27k_zsb` formats each instance using Pyserini's BM25 retrieval as described in the paper. The code context size limit is 27,000 `cl100k_base` tokens from the [`tiktoken`](https://github.com/openai/tiktoken) tokenization package used for OpenAI models.
The `text` column can be used directly with LMs to generate patch files and is formatted with the ZeroShotBase prompt format.
Models are instructed to generate a [`patch`](https://en.wikipedia.org/wiki/Patch_(Unix)) formatted file using the following template:
```diff
<patch>
diff
--- a/path/to/file.py
+++ b/path/to/file.py
@@ -1,3 +1,3 @@
This is a test file.
-It contains several lines.
+It has been modified.
This is the third line.
</patch>
```
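A minimal inference sketch follows; `generate` is a placeholder for your own model call, and the prompt lives in the `text` column described above:
```python
from datasets import load_dataset

ds = load_dataset("nmuendler/SWT-bench_Verified_bm25_27k_zsb", split="test")

def generate(prompt: str) -> str:
    raise NotImplementedError  # plug in your LM inference call here

instance = ds[0]
prompt = instance["text"]   # issue description + BM25-retrieved code context
# patch = generate(prompt)  # sample a <patch>-formatted test patch
print(instance["instance_id"], len(prompt))
```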
The dataset is based on [SWE-bench_Verified](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Verified) of [SWE-bench: Can Language Models Resolve Real-World GitHub Issues?](https://arxiv.org/abs/2310.06770) in [collaboration with OpenAI](https://openai.com/index/introducing-swe-bench-verified/).
This format can be used directly with the [SWE-bench inference scripts](https://github.com/princeton-nlp/SWE-bench/tree/main/inference). Please refer to these scripts for more details on inference.
|
gogolo1364/Start1_new | gogolo1364 | "2025-02-25T12:13:41Z" | 2 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T12:09:48Z" | ---
license: mit
---
|
TheOnlyDrWho/ZZZZ | TheOnlyDrWho | "2025-02-25T12:15:22Z" | 2 | 0 | [
"license:unknown",
"region:us"
] | null | "2025-02-25T12:14:29Z" | ---
license: unknown
---
|
tttx/3k-forcing-clipped-022225-step4-collated | tttx | "2025-02-25T12:29:47Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T12:29:38Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 7315142.0
num_examples: 336
- name: test
num_bytes: 22858
num_examples: 1
download_size: 1961620
dataset_size: 7338000.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
tttx/3k-trash-ttt-022225-step3-collated | tttx | "2025-02-25T12:36:14Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T12:36:11Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 8935367.111111112
num_examples: 400
- name: test
num_bytes: 19495
num_examples: 1
download_size: 2403178
dataset_size: 8954862.111111112
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
AAAA128/0R-deepseek-r1 | AAAA128 | "2025-02-25T12:52:37Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T12:52:32Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: generation
dtype: string
- name: distilabel_metadata
struct:
- name: raw_input_text_generation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_output_text_generation
dtype: string
- name: statistics_text_generation
struct:
- name: input_tokens
dtype: int64
- name: output_tokens
dtype: int64
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 73550
num_examples: 10
download_size: 68812
dataset_size: 73550
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mangopy/ToolRet-Queries1 | mangopy | "2025-02-25T12:54:10Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T12:53:58Z" | ---
dataset_info:
- config_name: gorilla-tensor
features:
- name: id
dtype: string
- name: query
dtype: string
- name: instruction
dtype: string
- name: labels
dtype: string
- name: category
dtype: string
splits:
- name: queries
num_bytes: 78995
num_examples: 55
download_size: 24985
dataset_size: 78995
- config_name: ultratool
features:
- name: id
dtype: string
- name: query
dtype: string
- name: instruction
dtype: string
- name: labels
dtype: string
- name: category
dtype: string
splits:
- name: queries
num_bytes: 763581
num_examples: 500
download_size: 134582
dataset_size: 763581
configs:
- config_name: gorilla-tensor
data_files:
- split: queries
path: gorilla-tensor/queries-*
- config_name: ultratool
data_files:
- split: queries
path: ultratool/queries-*
---
|
tttx/8k-priority-buffer-unclipped-overnight-4kbuffer-022525-step1-collated | tttx | "2025-02-25T12:57:45Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T12:57:42Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 2800577.0
num_examples: 79
- name: test
num_bytes: 40955
num_examples: 1
download_size: 796821
dataset_size: 2841532.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
clement520/GeomRel | clement520 | "2025-02-25T13:19:02Z" | 2 | 1 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"modality:text",
"arxiv:2501.13773",
"region:us"
] | [
"question-answering"
] | "2025-02-25T12:58:57Z" | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 1K<n<10K
---
# GeomRel Dataset
GeomRel is a dataset designed for evaluating large language models (LLMs) on their ability to understand geometric relationships. The dataset contains questions related to basic geometric shapes and their properties, with a focus on recognizing and reasoning about spatial relationships between lines, angles, and figures.
The data in GeomRel is structured to test models' understanding of geometric concepts such as parallelism, perpendicularity, intersection, and other spatial relations, making it a useful benchmark for evaluating a model's spatial reasoning capabilities in the context of geometry.
## Dataset Structure
Each example in the dataset consists of a **question**, the **correct answer**, and the **relationship type**.
### Example Data Entry:
```json
{
"question": "In rectangle ABEF, AB=5. What is the relationship between line AB and line EF?\nAnswer choices:A. Parallel B. Perpendicular C. Intersecting but not perpendicular D. Cannot be inferred",
"answer": "A",
"relation": "LL_PA"
}
```
- **question**: The question presents a geometric scenario, often describing a figure and asking the model to deduce the relationship between various geometric elements.
- **answer**: The correct answer from a predefined list of answer choices.
- **relation**: A code representing the type of relationship the question addresses. For example:
- `LL_PA`: Parallel lines
- `LL_PE`: Perpendicular lines
- `LL_IN`: Intersecting lines
- `LL_CI`: Cannot be inferred (when the relationship cannot be determined from the given information)
- ...
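A hedged evaluation sketch (this assumes the data ships as a JSON list of records like the example above; the file name and `ask_model` are placeholders):
```python
import json
from collections import Counter

def ask_model(question: str) -> str:
    raise NotImplementedError  # should return one of "A"/"B"/"C"/"D"

with open("geomrel.json") as f:  # hypothetical local file name
    items = json.load(f)

correct, total = Counter(), Counter()
for item in items:
    pred = ask_model(item["question"])
    total[item["relation"]] += 1
    correct[item["relation"]] += int(pred == item["answer"])

for rel, n in total.items():
    print(rel, correct[rel] / n)  # per-relation accuracy
```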
## Key Features
- **Focus on Spatial Reasoning**: The dataset emphasizes reasoning about geometric relationships, including basic shapes like rectangles, triangles, and other polygons.
- **Multiple Answer Choices**: Each question provides several answer choices, designed to test the model’s ability to select the most appropriate answer based on the provided information.
- **Real-World Relevance**: Geometric reasoning is a foundational skill in many fields, such as computer vision, robotics, and architectural design. This dataset is intended to help assess and improve LLMs in their ability to handle such reasoning tasks.
## Use Cases
GeomRel is useful for:
- Benchmarking LLMs in the domain of geometry and spatial reasoning.
- Improving the performance of models on tasks involving geometric understanding.
- Research into how LLMs handle reasoning with structured, visual-spatial knowledge.
## Citation
If you use this dataset in your research, please cite the following paper:
```bibtex
@misc{wang2025largelanguagemodelstruly,
title={Do Large Language Models Truly Understand Geometric Structures?},
author={Xiaofeng Wang and Yiming Wang and Wenhong Zhu and Rui Wang},
year={2025},
eprint={2501.13773},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.13773},
}
``` |
jdh-algo/JMED | jdh-algo | "2025-02-25T13:47:47Z" | 2 | 1 | [
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T13:00:44Z" | # Citrus: Leveraging Expert Cognitive Pathways in a Medical Language Model for Advanced Medical Decision Support
<p align="center">
<a href="https://arxiv.org/abs/2502.18274" target="_blank">📑Paper</a> |<a href="https://jdh-algo.github.io/Citrus/" target="_blank">🤗Github Page</a> |<a href="https://huggingface.co/jdh-algo/Citrus1.0-llama-70B" target="_blank">🤗Citrus1.0-Llama-70B</a> |<a href="https://huggingface.co/datasets/jdh-algo/Citrus_S3" target="_blank">📚Medical Reasoning Data</a> | <a href="https://huggingface.co/datasets/jdh-algo/JMED" target="_blank">📚Evaluation Data</a>
</p>
## Introduction to Our Work
### 1. Main approaches
<div align="center">
<img src="https://raw.githubusercontent.com/jdh-algo/Citrus/main/static/images/figure4-1-2.png" alt="image" width="75%"/>
</div>
### 2. Overview of training stages and training data pipeline
<div align="center">
<img src="https://raw.githubusercontent.com/jdh-algo/Citrus/main/static/images/figure4-2-1.png" width="75%">
</div>
Citrus is a medical language model that bridges the gap between clinical expertise and AI reasoning by emulating the cognitive processes of medical experts. The model is trained on a large corpus of simulated expert disease reasoning data in sft-stage-3, synthesized using a novel approach that accurately captures the decision-making pathways of clinicians.
The contributions of this work are as follows:
1. We propose a training-free reasoning approach that emulates the cognitive processes of medical experts, enabling large language models to enhance their medical capabilities in clinical diagnosis and treatment.
2. In conjunction with the data construction method, we introduce a multi-stage post-training approach to further improve the model’s medical performance.
3. We have made the Citrus model and its training data publicly available as open-source resources to advance research in AI-driven medical decision-making.
4. We have developed and open-sourced a large-scale, updatable clinical practice evaluation dataset based on real-world data, accurately reflecting the distribution of patients in real-world settings.
In our work, we provide detailed insights into Citrus, covering its model architecture, dataset, and code. We are releasing Citrus_S3, the supervised fine-tuning (SFT) data for stage 3, which comprises 20,000 samples. We hope that our contributions will foster progress within the community in advancing the capabilities of large models.
## JDH Medical Practice Dataset: Construction and Validation of a Real-World Clinical Dialogue Benchmark
1. **Data Introduction:** JMED is a novel dataset based on real-world medical data distributions. Unlike existing datasets, JMED closely mimics authentic clinical data while facilitating effective model training. Although based on real consultation data, it is not directly sourced from actual medical data, allowing us to incorporate key elements necessary for model training. We ensured compliance with ethical and legal standards throughout the data collection process, safeguarding privacy and meeting ethical guidelines. Due to the open-ended nature of medical consultations, where definitive answers are often elusive, the evaluation process is more challenging. To address this, each question includes 21 response options, with a "None of the above" choice. This design significantly increases the complexity and difficulty of distinguishing the correct answers, thereby providing a more rigorous assessment framework. We are initially releasing 1,000 MCQs and will update them periodically in the future.
2. **Data Format:** We constructed JMED as a set of multiple-choice questions (MCQs) based on the preprocessed data. Each entry follows a general MCQ template: `<id, question, options, answer>`.
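For illustration, a record under this template might look like the sketch below; the field names and values are invented, and the released files define the exact schema:
```python
# Hypothetical JMED record -- names/values are illustrative only.
sample = {
    "id": "jmed-000001",
    "question": "A 45-year-old patient presents with intermittent chest pain ...",
    # 20 candidate answers (A-T) plus the mandatory "None of the above" option.
    "options": [f"{chr(ord('A') + i)}. <candidate answer {i + 1}>" for i in range(20)]
               + ["U. None of the above"],
    "answer": "C",
}
print(len(sample["options"]))  # 21 response options per question
```
|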
AbdallahhSaleh/Hindawi_tokenized | AbdallahhSaleh | "2025-02-25T13:22:00Z" | 2 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T13:08:24Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 688202580.0
num_examples: 882311
download_size: 278261356
dataset_size: 688202580.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rajmohan1122/finetuning_demo | rajmohan1122 | "2025-02-25T13:11:58Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T13:11:57Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 337422
num_examples: 1000
download_size: 15348
dataset_size: 337422
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bbunzeck/lexical-decision | bbunzeck | "2025-02-25T13:13:31Z" | 2 | 0 | [
"language:en",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2502.12835",
"region:us"
] | null | "2025-02-25T13:11:59Z" | ---
language:
- en
pretty_name: Lexical decision dataset (English)
---
This dataset contains words/sentences for lexical decision tests.
If you use this dataset, please cite the following preprint:
```
@misc{bunzeck2025subwordmodelsstruggleword,
title={Subword models struggle with word learning, but surprisal hides it},
author={Bastian Bunzeck and Sina Zarrieß},
year={2025},
eprint={2502.12835},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.12835},
}
``` |
polygraf-ai/multi-model-contextual-human-AI-v1-10K-with-title-formatted | polygraf-ai | "2025-02-25T13:13:53Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T13:13:52Z" | ---
dataset_info:
features:
- name: sub_source
dtype: string
- name: source
dtype: string
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 13641040
num_examples: 16200
- name: dev
num_bytes: 1532849
num_examples: 1800
download_size: 9274323
dataset_size: 15173889
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
|
Neph0s/CoSER | Neph0s | "2025-02-26T08:08:38Z" | 2 | 0 | [
"language:en",
"license:mit",
"arxiv:2502.09082",
"region:us"
] | null | "2025-02-25T13:20:09Z" | ---
license: mit
language:
- en
size_categories:
- 100M<n<1000M
---
# CoSER Dataset
## Overview
CoSER is a high-quality dataset for role-playing LLMs, sourced from 771 renowned novels. The dataset contains authentic multi-turn, multi-character dialogues extracted from acclaimed literary works.
## Key Features
- **Authentic Content**: Unlike synthetic datasets, CoSER extracts real dialogues from literature, maintaining high fidelity to the original works. The dialogues are inherently multi-turn and multi-character, exhibiting natural complexity and diversity.
- **Comprehensive Data Types**: Includes character profiles, dialogues, plot summaries, character experiences, and conversation backgrounds
- **Thoughts and Actions in Messages**: Captures characters' internal thoughts and physical actions beyond surface-level speech
- **Comprehensive Contextual Information for Simulation**: Provides rich contextual information of conversations, enabling role-playing LLMs to perform reasonable simulations in these scenarios. We refer to these simulations as *Given-Circumstance Acting* (GCA), which can be used to both train and evaluate role-playing LLMs.
## Dataset Structure
```
CoSER/
├── sft_sharegpt.json # Data formatted for SFT training
├── test_set.json # 200 test samples used in our paper
└── full/ # Complete extracted data from all books
├── A Game of Thrones (A Song of Ice and Fire, #1).json
├── A Tale of Two Cities.json
└── ...
```
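A minimal download sketch (assuming `sft_sharegpt.json` is a single JSON file at the repo root, as the tree above suggests):
```python
import json

from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Neph0s/CoSER", filename="sft_sharegpt.json", repo_type="dataset")
with open(path) as f:
    sft_data = json.load(f)
print(len(sft_data))
```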
## Safety Considerations
We have conducted safety checks on the dataset and removed potentially problematic content. Specifically, we truncated 110 sensitive conversations and removed a total of 602 messages. These conversations are marked with `truncated_for_safety_concerns=True` in the dataset.
## Citation
If you use this dataset in your research, please cite our paper:
```
@misc{wang2025cosercoordinatingllmbasedpersona,
title={CoSER: Coordinating LLM-Based Persona Simulation of Established Roles},
author={Xintao Wang and Heng Wang and Yifei Zhang and Xinfeng Yuan and Rui Xu and Jen-tse Huang and Siyu Yuan and Haoran Guo and Jiangjie Chen and Wei Wang and Yanghua Xiao and Shuchang Zhou},
year={2025},
eprint={2502.09082},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.09082},
}
```
|
Ian824/High-Resolution-Rainy-Image | Ian824 | "2025-02-26T11:23:08Z" | 2 | 0 | [
"task_categories:image-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"arxiv:2502.16421",
"region:us",
"rain"
] | [
"image-to-image"
] | "2025-02-25T13:25:58Z" | ---
license: cc-by-sa-4.0
task_categories:
- image-to-image
language:
- en
tags:
- rain
pretty_name: ' High-resolution Rainy Image'
size_categories:
- 1K<n<10K
---
# High-resolution Rainy Image Synthesis: Learning from Rendering
This is the dataset from the paper "High-resolution Rainy Image Synthesis: Learning from Rendering".
* Project Page: https://kb824999404.github.io/HRIG/
* Paper: https://arxiv.org/abs/2502.16421
* Code: https://github.com/kb824999404/HRIG
<table>
<tr>
<td style="padding: 0;width=30%;"><img src="Imgs/lane/lane (1).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/lane/lane (2).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/lane/lane (3).jpg" /></td>
</tr>
<tr>
<td style="padding: 0;width=30%;"><img src="Imgs/lane/lane (4).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/lane/lane (5).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/lane/lane (6).jpg" /></td>
</tr>
<tr>
<td style="padding: 0;width=30%;"><img src="Imgs/citystreet/citystreet (1).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/citystreet/citystreet (2).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/citystreet/citystreet (3).jpg" /></td>
</tr>
<tr>
<td style="padding: 0;width=30%;"><img src="Imgs/citystreet/citystreet (4).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/citystreet/citystreet (5).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/citystreet/citystreet (6).jpg" /></td>
</tr>
<tr>
<td style="padding: 0;width=30%;"><img src="Imgs/japanesestreet/japanese (1).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/japanesestreet/japanese (2).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/japanesestreet/japanese (3).jpg" /></td>
</tr>
<tr>
<td style="padding: 0;width=30%;"><img src="Imgs/japanesestreet/japanese (4).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/japanesestreet/japanese (5).jpg" /></td>
<td style="padding: 0;width=30%;"><img src="Imgs/japanesestreet/japanese (6).jpg" /></td>
</tr>
</table>
## HRI Dataset
The High-resolution Rainy Image (HRI) dataset, generated in the rendering stage.
<table style="text-align: center;">
<tr>
<th>scene</th>
<th>dataset type</th>
<th>resolution</th>
<th>viewpoints</th>
<th>moments</th>
<th>intensities</th>
<th>image pairs</th>
</tr>
<tr>
<td style="vertical-align: middle;" rowspan="2">lane</td>
<td>training set</td>
<td style="vertical-align: middle;" rowspan="2">2048×1024</td>
<td>3</td>
<td style="vertical-align: middle;" rowspan="2">100</td>
<td style="vertical-align: middle;" rowspan="2">4</td>
<td>1200</td>
</tr>
<tr>
<td>test set</td>
<td>1</td>
<td>400</td>
</tr>
<tr>
<td style="vertical-align: middle;" rowspan="2">citystreet</td>
<td>training set</td>
<td style="vertical-align: middle;" rowspan="2">2048×1024</td>
<td>5</td>
<td style="vertical-align: middle;" rowspan="2">25</td>
<td style="vertical-align: middle;" rowspan="2">4</td>
<td>500</td>
</tr>
<tr>
<td>test set</td>
<td>1</td>
<td>100</td>
</tr>
<tr>
<td style="vertical-align: middle;" rowspan="2">japanesestreet</td>
<td>training set</td>
<td style="vertical-align: middle;" rowspan="2">2048×1024</td>
<td>8</td>
<td style="vertical-align: middle;" rowspan="2">25</td>
<td style="vertical-align: middle;" rowspan="2">4</td>
<td>800</td>
</tr>
<tr>
<td>test set</td>
<td>2</td>
<td>200</td>
</tr>
</table>
* `clean`: background RGB images and depth images of all scenes.
* `rainy`: rain layer images, RGB rainy images and depth rainy images of all scenes.
* `trainset.json`: the sample lists of the training set.
* `testset.json`: the sample lists of the test set.
* For each sample in the training set and the test set:
* `scene`: the scene name
* `sequence`: the viewpoint name
* `intensity`: the rain intensity
* `wind`: the wind direction (all zero for the HRI dataset)
* `background`: the path of the background RGB image
* `depth`: the path of the background depth image
* `rain_layer`: the path of the rain layer image
* `rainy_depth`: the path of the rainy depth image
* `rainy_image`: the path of the rainy RGB image
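A minimal loading sketch (field names follow the list above; the dataset root and the assumption that `trainset.json` is a JSON list are ours):
```python
import json
import os

root = "HRI"  # wherever the dataset was extracted
with open(os.path.join(root, "trainset.json")) as f:
    trainset = json.load(f)  # assumed: a JSON list of samples

s = trainset[0]
print(s["scene"], s["sequence"], s["intensity"])
print(os.path.join(root, s["background"]), os.path.join(root, s["rainy_image"]))
```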
## BlenderFiles
The Blender files for rendering RGB and depth images of all viewpoints are included in the directory of each scene.
## Rain streak database
The Rain streak database from the paper [Rain Rendering for Evaluating and Improving Robustness to Bad Weather](https://github.com/astra-vision/rain-rendering). |
AbdallahhSaleh/Wiki_tokenized | AbdallahhSaleh | "2025-02-25T14:30:09Z" | 2 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T13:43:57Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 2491689720.0
num_examples: 3194474
download_size: 803543391
dataset_size: 2491689720.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jayan12k/Finecode | jayan12k | "2025-02-25T17:28:50Z" | 2 | 1 | [
"task_categories:text-generation",
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | "2025-02-25T13:54:40Z" | ---
license: mit
task_categories:
- text-generation
---
# FineCode: A High-Quality Code Dataset
Disclaimer: No big files uploaded...yet
The one upload is simply an example format and doesn't contain all the highest quality code or the final version.
## Overview
FineCode is a meticulously curated dataset aimed at providing high-quality code for training and benchmarking code generation models. While many code datasets exist on Hugging Face, the quality of code varies significantly. FineCode seeks to address this by rigorously filtering and scoring code to ensure only well-structured, optimized, and readable examples are included.
## Dataset Details
- **Total Size:** 100B tokens
- **Languages Covered:** Top 15 programming languages, with an emphasis on the top 5
- **Source:** Scraped from top-ranked GitHub repositories of well-known organizations and highly rated open-source projects
- **Filtering Criteria:**
- Code is evaluated using **Llama3.2-3B**, which assigns a quality score (0-100) based on factors like readability, optimization, and best practices
- Only code with a **score of 75 or higher** is included in the dataset
- Additional filtering techniques are applied to remove low-quality or redundant content
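A hedged sketch of the scoring/filtering step described above; `score_with_llm` is a placeholder for the Llama3.2-3B call, and the actual pipeline code will be released with the dataset:
```python
def score_with_llm(code: str) -> int:
    # Placeholder: prompt Llama3.2-3B to rate the snippet 0-100 on
    # readability, optimization, and best practices.
    raise NotImplementedError

def filter_snippets(snippets, threshold=75):
    kept = []
    for code in snippets:
        score = score_with_llm(code)
        if score >= threshold:  # keep only code scoring 75 or higher
            kept.append({"code": code, "score": score})
    return kept
```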
## Availability
The dataset will be released soon on Hugging Face, along with the code used for data collection and filtering, allowing full transparency and reproducibility.
Stay tuned for updates!
|
zoujunyi/huatuo | zoujunyi | "2025-02-26T00:38:07Z" | 2 | 0 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T14:18:45Z" | ---
license: apache-2.0
---
|
tttx/8k-forcing-clipped-022225-step3-collated | tttx | "2025-02-25T14:22:53Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T14:22:50Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 2504988.0
num_examples: 74
- name: test
num_bytes: 38302
num_examples: 1
download_size: 658975
dataset_size: 2543290.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
quasara-io/CoCo | quasara-io | "2025-02-25T14:33:48Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T14:33:45Z" | ---
dataset_info:
features:
- name: Vector_ID
dtype: string
- name: File_Path
dtype: string
- name: Coordinate
sequence: float64
- name: Vector
sequence: float64
splits:
- name: Main_1
num_bytes: 16357929
num_examples: 1761
download_size: 16788816
dataset_size: 16357929
configs:
- config_name: default
data_files:
- split: Main_1
path: data/Main_1-*
---
|
ricdomolm/Big-Math-RL-Verified-Solve-Rate-0.5 | ricdomolm | "2025-02-25T14:41:28Z" | 2 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T14:41:22Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: domain
sequence: string
- name: llama8b_solve_rate
dtype: float64
splits:
- name: train
num_bytes: 41366862.25380492
num_examples: 134965
download_size: 19050861
dataset_size: 41366862.25380492
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
marr-peng-lab/phoenix-dataset | marr-peng-lab | "2025-02-25T19:08:58Z" | 2 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2025-02-25T15:13:06Z" | ---
license: apache-2.0
---
|
PJMixers-Dev/Fundus-105K-Formatted-Qwen2.5-Coder-7B-Instruct-Classified | PJMixers-Dev | "2025-02-25T18:50:11Z" | 2 | 0 | [
"language:en",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T15:13:32Z" | ---
language:
- en
---
# System Prompt
```
Your primary purpose is classifying text. There are multiple options to choose from: "News", "Review", "Opinion", "Advertising", and "Other".
"News" means the text is reporting on current events in a factual and objective manner.
"Review" means the text evaluates or critiques a product, service, or creative work.
"Opinion" means the text presents a personal viewpoint or argument on a topic.
"Advertising" means the text is designed to promote or sell a product, service, or idea.
"Other" means the text is random, irrelevant, nonsensical, or spam-like information that does not fit into the other categories.
You must reply with a few sentences analyzing the input text on the first line, followed by the classification on the second line (without quotes). Do not provide any other text.
```
# Python Script
```py
import requests
import json
from tqdm import tqdm
from datasets import load_dataset
import pandas as pd
def get_token_count(string):
tokencount_response = (
requests.post(
f"http://localhost:5001/api/extra/tokencount",
json={"prompt": string},
).json()["value"]
)
return tokencount_response
def verify_response_text(text):
lines = [line.strip() for line in text.strip().splitlines() if line.strip()]
if len(lines) != 2:
return False
_, second_line = lines
return second_line.lower() in {"news", "review", "opinion", "advertising", "other"}
def create_response(message_log, generation_settings):
while True:
generation_response = requests.post(
"http://localhost:5001/api/v1/generate",
json={
"prompt": message_log,
**generation_settings
}
).json()["results"][0]["text"].strip()
stop_reason = requests.get("http://127.0.0.1:5001/api/extra/perf").json()["stop_reason"]
if stop_reason in (1, 2) and verify_response_text(generation_response):
break
else:
print("No valid stop token or proper format generated. Retrying.")
lines = [line.strip() for line in generation_response.splitlines() if line.strip()]
first_line = lines[0]
second_line = lines[1].lower()
return first_line, second_line
system_prompt = (
"Your primary purpose is classifying text. There are multiple options to choose from: \"News\", \"Review\", \"Opinion\", \"Advertising\", and \"Other\".\n\n"
"\"News\" means the text is reporting on current events in a factual and objective manner.\n"
"\"Review\" means the text evaluates or critiques a product, service, or creative work.\n"
"\"Opinion\" means the text presents a personal viewpoint or argument on a topic.\n"
"\"Advertising\" means the text is designed to promote or sell a product, service, or idea.\n"
"\"Other\" means the text is random, irrelevant, nonsensical, or spam-like information that does not fit into the other categories.\n\n"
"You must reply with a few sentences analyzing the input text on the first line, followed by the classification on the second line (without quotes). Do not provide any other text."
)
model_name = "bartowski/Qwen2.5-Coder-7B-Instruct-GGUF/Qwen2.5-Coder-7B-Instruct-Q6_K.gguf"
original_dataset_name = "PJMixers-Dev/Fundus-105K-Formatted"
generation_settings = {
"max_context_length": 16384,
"max_length": 512,
"temperature": 0.3,
"rep_pen": 1.03,
"top_p": 1,
"top_k": 50,
"top_a": 0,
"typical": 1,
"tfs": 1,
"min_p": 0.1,
"rep_pen_range": 512,
"rep_pen_slope": 0.7,
"sampler_order": [6, 5, 0, 1, 3, 4, 2],
"stop_sequence": [
"<|im_start|>",
"<|im_end|>"
]
}
output_file = (
"./downloaded_datasets/converted/PJMixers-Dev_Fundus-105K-Formatted-Qwen2.5-Coder-7B-Instruct-Classified.parquet"
)
output_data_list = []
dataset = load_dataset(original_dataset_name)["train"]
for sample in tqdm(dataset):
    # Hoist the optional topics prefix: a backslash inside an f-string
    # expression is a SyntaxError on Python versions before 3.12.
    topics_prefix = f"Text Tags: {sample['topics']}\n\n" if sample["topics"] else ""
    prompt = (
        f"<|im_start|>system\n"
        f"{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n"
        f"{topics_prefix}"
        f"```md\n"
        f"{sample['text'].strip()}\n"
        f"```<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
if get_token_count(prompt) > generation_settings["max_context_length"] - generation_settings["max_length"]:
print("Too long. Skipping")
continue
analysis, classification = create_response(
prompt,
generation_settings
)
output_data_list.append(
{
"original_dataset_name": original_dataset_name,
"model_name": model_name,
"generation_settings": generation_settings,
"analysis": analysis,
"classification": classification,
"topics": sample["topics"],
"text": sample["text"]
}
)
if len(output_data_list) != 0 and len(output_data_list) % 10 == 0:
df = pd.DataFrame(output_data_list)
df.to_parquet(
output_file,
index=False,
compression="brotli"
)
        # Drop the reference so the checkpoint frame can be garbage-collected.
        del df
df = pd.DataFrame(output_data_list)
df.to_parquet(
output_file,
index=False,
compression="brotli"
)
```
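# Inspecting the Output
Once a run finishes (or at any 10-row checkpoint), the classified rows can be inspected straight from the parquet file. A minimal sketch, assuming the `output_file` path and the `classification` column written by the script above:
```py
import pandas as pd

# Load the checkpointed classifications and show the label distribution.
df = pd.read_parquet(
    "./downloaded_datasets/converted/PJMixers-Dev_Fundus-105K-Formatted-Qwen2.5-Coder-7B-Instruct-Classified.parquet"
)
print(df["classification"].value_counts())
```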
|
tttx/3k-forcing-clipped-022225-step6-collated | tttx | "2025-02-25T15:46:40Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T15:46:37Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 7188212.0
num_examples: 333
- name: test
num_bytes: 19821
num_examples: 1
download_size: 1931724
dataset_size: 7208033.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
tttx/8k-forcing-clipped-022225-step4-collated | tttx | "2025-02-25T15:56:45Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T15:56:43Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 1121943.0
num_examples: 34
- name: test
num_bytes: 27182
num_examples: 1
download_size: 321243
dataset_size: 1149125.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
SamsungSAILMontreal/Conjugated-xTB_2M_molecules | SamsungSAILMontreal | "2025-02-25T16:18:58Z" | 2 | 2 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2502.14842",
"region:us"
] | null | "2025-02-25T16:03:19Z" | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: reward
dtype: float64
- name: wavelength
dtype: float64
- name: f_osc
dtype: float64
- name: molecule
dtype: string
- name: top_score
dtype: float64
splits:
- name: train
num_bytes: 513283807
num_examples: 2900000
download_size: 295719034
dataset_size: 513283807
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Conjugated-xTB dataset of 2M OLED molecules from the paper arxiv.org/abs/2502.14842.
'f_osc' is the oscillator strength (correlated with brightness) and should be maximized to obtain bright OLEDs.
'wavelength' is the absorption wavelength. Wavelengths >= 1000 nm fall in the short-wave infrared range, which is crucial for biomedical imaging: tissues exhibit relatively low absorption and scattering in the near-infrared, allowing light to penetrate deeper.
This is a good dataset for training a generative model or RL agent to maximize the oscillator strength.
We also provide code in https://github.com/SamsungSAILMontreal/STGG-AL to evaluate the oscillator strength and wavelength of new molecules.
<img src="https://raw.githubusercontent.com/SamsungSAILMontreal/STGG-AL/master/resource/ir_fosc.png" width="800">
Loading the dataset:
```python
from datasets import load_dataset
dataset = load_dataset('SamsungSAILMontreal/Conjugated-xTB_2M_molecules')
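# Illustrative follow-up (not from the card): filter for bright molecules that
# absorb in the short-wave infrared, using the 'wavelength' and 'f_osc'
# columns; the f_osc >= 1.0 cutoff is an assumed threshold, not from the paper.
df = dataset["train"].to_pandas()
swir_bright = df[(df["wavelength"] >= 1000) & (df["f_osc"] >= 1.0)]
print(f"{len(swir_bright)} candidate bright SWIR molecules")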
``` |
tyrealqian/TGL_content_classification | tyrealqian | "2025-02-25T18:13:38Z" | 2 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T16:06:08Z" | ---
license: mit
---
|
J1mb0o/dimitris-test-dataset | J1mb0o | "2025-02-25T16:27:45Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T16:13:45Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: flickr_id
dtype: string
- name: hypothesis
dtype: string
- name: gold_label
dtype:
class_label:
names:
'0': contradiction
'1': neutral
'2': entailment
splits:
- name: train
num_bytes: 478791.0
num_examples: 6
- name: dev
num_bytes: 478791.0
num_examples: 6
download_size: 164330
dataset_size: 957582.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
|
matteanedda/milan_cultural_knowledge | matteanedda | "2025-02-25T16:15:05Z" | 2 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T16:14:54Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 12439321
num_examples: 20000
download_size: 3686601
dataset_size: 12439321
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sunamdham/jenny-tts-tags-6h-v1 | sunamdham | "2025-02-25T16:16:14Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T16:16:10Z" | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: text
dtype: string
- name: transcription_normalised
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
splits:
- name: train
num_bytes: 1640896
num_examples: 4000
download_size: 1041762
dataset_size: 1640896
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sunamdham/jenny-tts-text-tags-6h-v1 | sunamdham | "2025-02-25T16:16:58Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T16:16:54Z" | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: text
dtype: string
- name: transcription_normalised
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
splits:
- name: train
num_bytes: 2063542
num_examples: 4000
download_size: 1025292
dataset_size: 2063542
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
infinite-dataset-hub/EmployeeFeedbackMatrix | infinite-dataset-hub | "2025-02-25T17:13:38Z" | 2 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | null | "2025-02-25T17:13:37Z" | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# EmployeeFeedbackMatrix
tags: sentiment analysis, workplace satisfaction, performance metrics
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'EmployeeFeedbackMatrix' dataset comprises textual feedback given by employees about their workplace experiences. It is designed to support sentiment analysis and workplace-satisfaction studies. The 'Label' column assigns a sentiment class to each piece of feedback: Positive, Neutral, or Negative.
**CSV Content Preview:**
```csv
EmployeeID,Feedback,Label
001,"I really enjoy the collaborative environment here. Great team spirit.",Positive
002,"The workload is quite overwhelming and often extends beyond regular hours.",Neutral
003,"Management needs to improve their communication. There's a lack of transparency.",Negative
004,"The new project management tool has significantly streamlined our workflow. Very happy with it.",Positive
005,"Our team could benefit from more regular training sessions.",Neutral
```
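A quick sanity check of the label balance once the CSV is downloaded (the filename `EmployeeFeedbackMatrix.csv` is assumed):
```python
import pandas as pd

# Load the generated CSV and count how often each sentiment label occurs.
df = pd.read_csv("EmployeeFeedbackMatrix.csv")
print(df["Label"].value_counts())
```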
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'employee reviews':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=employee+reviews&dataset=EmployeeFeedbackMatrix&tags=sentiment+analysis,+workplace+satisfaction,+performance+metrics
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
tttx/8k-forcing-clipped-022225-step5-collated | tttx | "2025-02-25T17:17:50Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T17:17:47Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 672104.0
num_examples: 22
- name: test
num_bytes: 16587
num_examples: 1
download_size: 203415
dataset_size: 688691.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
nivektk/math-augmented-dataset | nivektk | "2025-02-25T17:32:49Z" | 2 | 0 | [
"task_categories:question-answering",
"task_categories:text-generation",
"annotations_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:MATH (Dan Hendrycks)",
"language:en",
"license:gpl-2.0",
"size_categories:n<1K",
"modality:text",
"arxiv:2103.03874",
"region:us"
] | [
"question-answering",
"text-generation"
] | "2025-02-25T17:23:10Z" | ---
license: gpl-2.0
task_categories:
- question-answering
- text-generation
language:
- en
pretty_name: Math-Augmented-Dataset
size_categories:
- 1K<n<10K
source_datasets:
- MATH (Dan Hendrycks)
annotations_creators:
- machine-generated
multilinguality: monolingual
paperswithcode_id: math
homepage: "https://www.kaggle.com/datasets/kevinfabioramoslopez/math-augmented-dataset"
---
# Math-Augmented-Dataset
## Dataset Description
The **Math-Augmented-Dataset** extends the MATH dataset by Dan Hendrycks, focusing on algebra problems. It comprises **1,006 validated examples** from the algebra subset, structured in JSON format with detailed step-by-step solutions generated using Large Language Models (LLMs) with chain-of-thought reasoning.
### Dataset Structure
Each JSON file contains the following fields (a short loading sketch follows the list):
- **problem**: The math problem statement, including LaTeX expressions.
- **level**: Difficulty level (1-5, with 5 being the hardest).
- **type**: Mathematical domain (e.g., algebra, geometry).
- **solution**: Step-by-step solution in English.
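A minimal sketch of reading one example, assuming the field layout above (the filename `algebra_0001.json` is hypothetical):
```python
import json

# Read a single problem file and print its fields.
with open("algebra_0001.json") as f:  # hypothetical filename
    example = json.load(f)

print(example["level"], example["type"])
print(example["problem"])
print(example["solution"])
```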
### Dataset Creation
The dataset was augmented using **LLMs** to generate structured explanations. A validation process ensured that the solutions were logically consistent and mathematically correct.
## Citation
If you use this dataset, please cite:
```
@article{hendrycks2021measuring,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Hendrycks, Dan and Burns, Collin and Kadavath, Saurav and Arora, Akul and Basart, Steven and Tang, Eric and Song, Dawn and Steinhardt, Jacob},
journal={arXiv preprint arXiv:2103.03874},
year={2021}
}
```
|
tttx/3k-forcing-clipped-022225-step7-collated | tttx | "2025-02-25T17:25:34Z" | 2 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T17:25:31Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 5817679.0
num_examples: 271
- name: test
num_bytes: 22606
num_examples: 1
download_size: 1559379
dataset_size: 5840285.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
amuvarma/tagged_qa_pair_va | amuvarma | "2025-02-25T17:33:50Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T17:33:06Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1538418.6570527265
num_examples: 3000
download_size: 971800
dataset_size: 1538418.6570527265
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amuvarma/all-tagged-qa-6k | amuvarma | "2025-02-25T17:37:32Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T17:37:31Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2743300.823802716
num_examples: 5895
download_size: 1505779
dataset_size: 2743300.823802716
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amuvarma/all-tagged-qa-6k-proc | amuvarma | "2025-02-25T17:46:26Z" | 2 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T17:46:25Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6887407
num_examples: 5895
download_size: 2241047
dataset_size: 6887407
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MHTrXz/MedcalRagSmall | MHTrXz | "2025-02-25T17:49:40Z" | 2 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-25T17:48:49Z" | ---
dataset_info:
features:
- name: title
dtype: string
- name: content
dtype: string
- name: source
dtype: string
- name: author
dtype: string
- name: references
dtype: string
splits:
- name: train
num_bytes: 807678007
num_examples: 187455
download_size: 192511442
dataset_size: 807678007
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|