path: data/test-*
---

# Dataset Summary

SWE-rebench is a large-scale dataset designed to support the training and evaluation of LLM-based software engineering (SWE) agents. It is built using a fully automated pipeline that continuously extracts real-world GitHub tasks at scale. The dataset includes over 21,000 issue–pull request pairs from 6,000+ Python repositories, each validated for correctness through environment setup and test execution.
SWE-rebench expands on the methodology introduced in SWE-bench by adding:
* Continuous task collection to prevent benchmark staleness
* Decontamination mechanisms to mitigate data leakage into pretrained LLMs
* Automatic environment extraction and validation to ensure high-quality execution
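
To make the decontamination idea concrete, here is a minimal sketch of one possible heuristic: keeping only instances created after a given model's training-data cutoff. The field name `created_at`, the cutoff date, and the approach itself are illustrative assumptions, not the pipeline's actual mechanism.

```python
from datetime import datetime

def filter_after_cutoff(instances, cutoff_iso):
    """Keep instances created strictly after a training-data cutoff.

    `instances` is a list of dicts with a hypothetical `created_at`
    ISO-8601 field; a real decontamination pipeline would be more involved.
    """
    cutoff = datetime.fromisoformat(cutoff_iso)
    return [
        inst for inst in instances
        if datetime.fromisoformat(inst["created_at"]) > cutoff
    ]

# Made-up rows for illustration only.
instances = [
    {"instance_id": "repo-a__1", "created_at": "2023-05-01T00:00:00+00:00"},
    {"instance_id": "repo-b__2", "created_at": "2024-09-15T00:00:00+00:00"},
]
fresh = filter_after_cutoff(instances, "2024-01-01T00:00:00+00:00")
```

A date filter like this only mitigates leakage for models whose cutoff is known; it does not detect issues quoted verbatim in pretraining corpora.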
# How to Use
```python
from datasets import load_dataset
ds = load_dataset('nebius/SWE-rebench')
```
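
Each row of a loaded split behaves like a plain Python dict, so ordinary tooling applies. The sketch below counts instances per repository over stand-in rows; the field names `repo` and `instance_id` are assumed to follow SWE-bench conventions, and the rows are invented for illustration.

```python
from collections import Counter

# Stand-in rows; a loaded split yields dicts with (assumed)
# SWE-bench-style fields such as `repo` and `instance_id`.
rows = [
    {"instance_id": "pandas-dev__pandas-101", "repo": "pandas-dev/pandas"},
    {"instance_id": "pandas-dev__pandas-202", "repo": "pandas-dev/pandas"},
    {"instance_id": "pytest-dev__pytest-42", "repo": "pytest-dev/pytest"},
]

per_repo = Counter(r["repo"] for r in rows)
```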
# Dataset Statistics
| `requirements` | str | Frozen requirements for the repository. |
| `environment` | str | Environment configuration for the repository. |
To execute instances within SWE-rebench, use this fork of SWE-bench: [SWE-rebench/SWE-bench-fork](https://github.com/SWE-rebench/SWE-bench-fork).
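
As a rough sketch of how an instance's `requirements` field might be consumed, the snippet below writes its frozen-requirements string to a `requirements.txt` suitable for `pip install -r`. The requirement strings are invented examples; actual execution should go through the SWE-bench fork linked above.

```python
from pathlib import Path
import tempfile

# Illustrative content only; real values come from the dataset's
# `requirements` field (a single frozen-requirements string).
requirements = "numpy==1.24.4\npytest==7.4.0\n"

workdir = Path(tempfile.mkdtemp())
req_file = workdir / "requirements.txt"
req_file.write_text(requirements)

# Package names pinned in the file; installation itself is omitted here.
pinned = [line.split("==")[0] for line in requirements.splitlines() if line]
```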
# License
The dataset is licensed under the Creative Commons Attribution 4.0 license. However, please respect the license of each specific repository on which a particular instance is based. To facilitate this, the license of each repository at the time of the commit is provided for every instance.