nielsr HF Staff committed on
Commit a66ba9e · verified · 1 Parent(s): 4a5e815

Enhance dataset card: update metadata and add sample usage


This PR enhances the GitTaskBench dataset card by:

- **Updating `task_categories`**: The `task_categories` in the metadata have been broadened from `question-answering` to accurately reflect the multi-modal nature of the benchmark, including `text-generation`, `image-to-image`, `image-to-video`, `automatic-speech-recognition`, and `text-retrieval`.
- **Adding `tags`**: New tags `benchmark`, `code-agent`, `software-engineering`, and `multimodal` have been added to improve discoverability and better describe the dataset.
- **Adding Hugging Face Paper Link**: A direct link to the Hugging Face paper page (`https://huggingface.co/papers/2508.18993`) has been added at the top for improved visibility on the Hub.
- **Replacing License Placeholder**: The placeholder license text in the markdown content ("[Specify license chosen, e.g., cc-by-nc-sa-4.0]") has been replaced with the explicit `cc-by-nc-sa-4.0` license for clarity.
- **Enhancing "Usage Example"**: The dataset card now includes detailed command-line snippets from the GitHub repository, showing how to set up the environment, run single or all tasks, and analyze results, providing immediate practical guidance for users.

Files changed (1)
  1. README.md +73 -8
README.md CHANGED
@@ -1,18 +1,28 @@
  ---
- license: cc-by-nc-sa-4.0
- task_categories:
- - question-answering
  language:
  - zh
  - en
- tags:
- - agent
+ license: cc-by-nc-sa-4.0
  size_categories:
  - n<1K
+ task_categories:
+ - text-generation
+ - image-to-image
+ - image-to-video
+ - automatic-speech-recognition
+ - text-retrieval
+ tags:
+ - agent
+ - benchmark
+ - code-agent
+ - software-engineering
+ - multimodal
  ---

  # Dataset Card for **GitTaskBench**

+ The dataset was presented in the paper [GitTaskBench: A Benchmark for Code Agents Solving Real-World Tasks Through Code Repository Leveraging](https://huggingface.co/papers/2508.18993).
+
  ## Dataset Details

  ### Dataset Description

@@ -23,7 +33,7 @@ It contains **54 representative tasks** across **7 domains**, carefully curated
  - **Funded by [optional]:** Not specified
  - **Shared by [optional]:** GitTaskBench Team
  - **Language(s):** Primarily English (task descriptions, documentation)
- - **License:** [Specify license chosen, e.g., `cc-by-nc-sa-4.0`]
+ - **License:** `cc-by-nc-sa-4.0`

  ### Dataset Sources
  - **Repository:** [GitTaskBench GitHub](https://github.com/QuantaAlpha/GitTaskBench)
 
@@ -67,8 +77,63 @@ Each task specifies:

  ## Usage Example

- See GitHub Repository:** [GitTaskBench GitHub](https://github.com/QuantaAlpha/GitTaskBench)
+ To get started with GitTaskBench, follow these steps for environment setup and evaluation.
+
+ ### 1. Set Up ⚙️
+ First, create a new conda environment:
+ ```console
+ conda create -n gittaskbench python=3.10 -y
+ conda activate gittaskbench
+
+ pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 \
+ --extra-index-url https://download.pytorch.org/whl/cu113
+ ```
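+
+ Before continuing, it can be worth sanity-checking that the pinned CUDA 11.3 wheels imported correctly; this is an illustrative one-liner, not a command from the repository's docs:
+ ```console
+ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
+ ```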
+
+ Then, you can install `gittaskbench` with pip:
+ ```console
+ git clone https://github.com/QuantaAlpha/GitTaskBench.git
+ cd GitTaskBench
+ # config
+ pip install -e .
+ ```
+ Alternatively:
+ ```console
+ # config
+ pip install -r requirements.txt
+ ```
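+
+ If the editable install succeeded, the `gittaskbench` command should be on your PATH; assuming the CLI follows the usual `--help` convention, a quick smoke test is:
+ ```console
+ gittaskbench --help
+ ```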
+
+ ### 2. Quick Start 💡
+
+ * #### **Single Task Evaluation:**
+
+ If you need to evaluate a single, specific task, you can use the following command. The example below shows how to evaluate the `Trafilatura_01` task:
+
+ ```console
+ cd GitTaskBench
+ # The outputs are saved in the DEFAULT "./output" directory, for example: "./output/Trafilatura_01/output.txt"
+ gittaskbench grade --taskid Trafilatura_01
+ ```
+
+ Running the command will produce an analysis report (.jsonl) at the DEFAULT path (`./test_results/Trafilatura_01`). See `test_results_for_show/` for a sample.
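+
+ Since the report is JSON-lines, each line is a standalone JSON object; a quick way to pretty-print the first record (the file name pattern below is illustrative) is:
+ ```console
+ head -n 1 ./test_results/Trafilatura_01/*.jsonl | python -m json.tool
+ ```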
+
+ The complete commands can be found in the [🤖 Automation Evaluation](#automation-evaluation) section.
+
+ * #### **All Tasks Evaluation**
+ When you need to evaluate all tasks, you can use the `--all` flag. This command automatically iterates through and runs the evaluation for every task:
+ ```console
+ gittaskbench grade --all
+ ```
+
+ * #### **Test Results Analysis**
+ After completing the evaluation, if you want to analyze and summarize the test results, you can use the statistics command. It analyzes and summarizes the evaluation results in the specified directory and outputs an analysis report (.txt):
+
+ ```console
+ gittaskbench eval
+ ```
+ See `test_reports/` for a sample.
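+
+ The summary report is plain text, so (assuming the default location above) it can be viewed directly; the exact file name is illustrative:
+ ```console
+ cat test_reports/*.txt
+ ```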

  Each task entry contains:
  - **task_id**: Unique task identifier (e.g., `Trafilatura_01`)
 
@@ -145,4 +210,4 @@ If you use GitTaskBench, please cite the paper:
  - Multi-modal tasks (vision, speech, text, signals).
  - Repository-level evaluation.
  - Real-world relevance (PDF extraction, video coloring, speech analysis, etc.).
- - Extensible design for new tasks.
+ - Extensible design for new tasks.