ibragim-bad committed · verified
Commit: f2754cb · Parent: d4ff385

Update README.md

Files changed (1): README.md (+8 -10)
README.md CHANGED
@@ -94,23 +94,21 @@ configs:
       path: data/test-*
 ---
 # Dataset Summary
-SWE-bench Extra V2 is a dataset that can be used to train or evaluate agentic systems specializing in resolving GitHub issues. It is based on the methodology used to build the SWE-bench benchmark and our previous dataset SWE-bench Extra, and includes 21,000 issue–pull request pairs sourced from 6,000 Python repositories.
+SWE-rebench is a large-scale dataset designed to support the training and evaluation of LLM-based software engineering (SWE) agents. It is built using a fully automated pipeline that continuously extracts real-world GitHub tasks at scale. The dataset includes over 21,000 issue–pull request pairs from 6,000+ Python repositories, each validated for correctness through environment setup and test execution.
 
-# Dataset Description
-The SWE-bench Extra v2 dataset supports the development of software engineering agents capable of autonomously solving GitHub issues. The data collection process, based on the SWE-bench methodology, involves the following steps:
+SWE-rebench expands on the methodology introduced in SWE-bench by adding:
+
 
-1. **Issue and Pull Request Collection**: Issues are gathered and linked with pull requests that successfully resolve them.
-2. **Filtering**: Instances are filtered based on attributes such as issue descriptions, relevant code paths, and test patches.
-3. **Automated extraction of project dependencies**: Project dependencies are extracted using an LLM.
-4. **Execution-based Validation**: Project environments are set up and tests are run to verify that they execute correctly.
+* Continuous task collection to prevent benchmark staleness
+* Decontamination mechanisms to mitigate data leakage into pretrained LLMs
+* Automatic environment extraction and validation to ensure high-quality execution
 
-For a more detailed description of the data collection process, please refer to our blog post (TBD) [Scaling data collection for training software engineering agents](https://nebius.com/blog/posts/scaling-data-collection-for-training-swe-agents).
 
 # How to Use
 
 ```python
 from datasets import load_dataset
-ds = load_dataset('nebius/SWE-bench-extra-v2')
+ds = load_dataset('nebius/SWE-rebench')
 ```
 
 # Dataset Statistics
@@ -149,7 +147,7 @@ The dataset contains the following fields. It includes all fields from SWE-bench
 | `requirements` | str | Frozen requirements for the repository. |
 | `environment` | str | Environment configuration for the repository. |
 
-To execute instances within SWE-bench, you need to provide a default recipe for dependency installation. The constants required for running these instances are described in this [constants.py](https://huggingface.co/datasets/nebius/SWE-bench-extra/blob/main/constants.py).
+To execute instances within SWE-rebench, use this fork of SWE-bench: [SWE-rebench/SWE-bench-fork](https://github.com/SWE-rebench/SWE-bench-fork).
 
 # License
 The dataset is licensed under the Creative Commons Attribution 4.0 license. However, please respect the license of each specific repository on which a particular instance is based. To facilitate this, the license of each repository at the time of the commit is provided for every instance.
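The README describes a `requirements` field holding frozen dependencies for each instance's repository. A minimal sketch of how a consumer might turn that field into pinned name/version pairs, assuming the field is pip-freeze-style `name==version` text (the helper name and sample string below are hypothetical, not part of the dataset):

```python
def parse_requirements(text):
    """Parse pip-freeze-style 'name==version' lines into a dict.

    Comment lines and lines without '==' (e.g. VCS or editable
    installs) are skipped in this simplified sketch.
    """
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        pins[name.strip()] = version.strip()
    return pins

# Hypothetical requirements string for illustration:
sample = "numpy==1.24.0\npytest==7.4.0\n# a comment\n"
print(parse_requirements(sample))  # {'numpy': '1.24.0', 'pytest': '7.4.0'}
```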
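Since the license section notes that each instance carries the license of its source repository, a downstream user who wants only permissively licensed instances might filter on that per-instance field. A sketch with in-memory stand-ins (the field name `license`, the sample records, and the allow-list are assumptions for illustration, not taken from the dataset schema):

```python
# Hypothetical instances mimicking the per-instance license field.
instances = [
    {"instance_id": "repo-a__1", "license": "MIT"},
    {"instance_id": "repo-b__2", "license": "GPL-3.0"},
    {"instance_id": "repo-c__3", "license": "Apache-2.0"},
]

# Example allow-list of permissive licenses; adjust to your policy.
ALLOWED = {"MIT", "Apache-2.0", "BSD-3-Clause"}

permissive = [inst for inst in instances if inst["license"] in ALLOWED]
print([inst["instance_id"] for inst in permissive])  # ['repo-a__1', 'repo-c__3']
```

With a real `datasets.Dataset` object, the same predicate could be passed to `.filter()` instead of a list comprehension.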