overview: | Limitation_dataset_BAGELS is a structured corpus of JSON files drawn from ACL 2023 (3,013 papers), ACL 2024 (2,727 papers), and NeurIPS 2021–2022 (7,069 papers). Each record includes the title, abstract, and sectionized full text (e.g., Introduction, Related Work, Methodology, Results/Experiments). As ground truth, ACL 2023 and ACL 2024 contain only author-mentioned limitations, while NeurIPS 2021–2022 contains both author-mentioned limitations and OpenReview-derived reviewer signals. Counts by label: ACL 2023 (2,558 with an author limitation, 455 without), ACL 2024 (2,440 with an author limitation, 287 without), NeurIPS 2021–2022 (2,830 with an author limitation and/or OpenReview signals, 4,239 without). The dataset supports limitation detection, span extraction/summarization, retrieval & QA over scholarly articles, and alignment analyses between author-stated limitations and reviewer feedback.

## Dataset at a glance

| Subset              | # Papers | With ground truth | Without ground truth | Ground-truth definition                                                       |
|---------------------|---------:|------------------:|---------------------:|-------------------------------------------------------------------------------|
| ACL 2023            | 3,013    | 2,558             | 455                  | Author-mentioned limitation (Limitation) present.                                          |
| ACL 2024            | 2,727    | 2,440             | 287                  | Author-mentioned limitation (Limitation) present.                                          |
| NeurIPS 2021–2022   | 7,069    | 2,830             | 4,239                | Author-mentioned limitation (Limitation Refined) and/or OpenReview-derived reviewer comment (Reviewer Comment). |
| **Total**           | **12,809** | **7,828**       | **4,981**           |                                                                               |

## Schema

### ACL 2023 & ACL 2024

  • File Number (string)
  • Title (string)
  • Limitation (string): author-mentioned limitation; used as ground truth
  • abstractText (string)
  • Section keys (strings), e.g.: "1 Introduction", "2 Related Work", "3 Methodology", "Results and Experiments", "Data", "Other sections"

### NeurIPS 2021–2022

  • File Number (string)
  • Title (string)
  • Limitation (string): author-mentioned limitation
  • Limitation Refined (string): author-mentioned limitation after removing noisy sentences from other sections; used as ground truth
  • Reviewer Comment (string): concatenation of reviewer limitation excerpts, formatted per reviewer; used as ground truth
  • Reviewer Summary (string): concatenation of reviewer summaries, formatted per reviewer
  • abstractText (string)
  • Section keys (strings), e.g.: "1 Introduction", "2 Related Work", "3 Methodology", "Results and Experiments", "Data", "Other sections"
  • Author mentioned Limitation (string): extracted limitation span(s)
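
Because section keys differ from paper to paper, the JSON files do not all share a single fixed schema, so it is usually easiest to read them with plain `json` rather than a fixed-schema loader. A minimal loading sketch (the local folder layout and file names below are assumptions, not part of the dataset spec):

```python
import json
from pathlib import Path

# Hypothetical local layout: one JSON file per paper inside a subset folder.
DATA_DIR = Path("Limitation_dataset_BAGELS/ACL_2023")

records = []
for fp in sorted(DATA_DIR.glob("*.json")):
    with fp.open(encoding="utf-8") as f:
        records.append(json.load(f))  # dict: Title, abstractText, section keys, ...

# Section keys vary per paper, so separate them from the fixed fields.
FIXED = {"File Number", "Title", "Limitation", "abstractText"}
first = records[0]
sections = {k: v for k, v in first.items() if k not in FIXED}
print(first["Title"])
print(list(sections))  # e.g. ['1 Introduction', '2 Related Work', ...]
```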

pipeline:

Step 1: "Ground Truth Extraction Pipeline"

description: | We parse each paper with ScienceParse to recover structured sections (title, abstract, and all headings/body text), and we collect peer-review content from OpenReview using a Selenium scraper. For Limitations extraction, we first look for a dedicated section whose heading contains “Limitation” or “Limitations” and take that section verbatim. If no such section exists, we scan the paper (except the Abstract, Introduction, and Related Work sections) for the first sentence containing “limitation”/“limitations” (case-insensitive) and extract text from that sentence onward, stopping as soon as we encounter a boundary keyword to avoid unrelated material. The boundary keywords we use are: ethics, ethical statement, discussion/discussions, conclusion, grant, and appendix. This simple heuristic keeps the extracted spans focused on genuine limitations while minimizing boilerplate.
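
A minimal sketch of the heuristic described above, assuming each paper is a dict mapping section headings to text as in the schema (a simplification, not the exact pipeline code):

```python
import re

BOUNDARY_KEYWORDS = ("ethics", "ethical statement", "discussion",
                     "conclusion", "grant", "appendix")
SKIP_SECTIONS = ("abstract", "introduction", "related work")

def extract_limitation(paper: dict) -> str:
    """Heuristic limitation extraction (simplified sketch of Step 1)."""
    # 1) Prefer a dedicated section whose heading contains "Limitation(s)".
    for heading, body in paper.items():
        if isinstance(body, str) and "limitation" in heading.lower():
            return body

    # 2) Otherwise scan the remaining sections for the first sentence that
    #    mentions "limitation(s)" and keep text until a boundary keyword.
    for heading, body in paper.items():
        if not isinstance(body, str) or any(s in heading.lower() for s in SKIP_SECTIONS):
            continue
        sentences = re.split(r"(?<=[.!?])\s+", body)
        for i, sent in enumerate(sentences):
            if re.search(r"\blimitations?\b", sent, flags=re.IGNORECASE):
                kept = []
                for later in sentences[i:]:
                    if any(kw in later.lower() for kw in BOUNDARY_KEYWORDS):
                        return " ".join(kept)
                    kept.append(later)
                return " ".join(kept)
    return ""
```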

Step 2: "Ground Truth Re-Extraction Pipeline (GPT-4o mini)" description: | We standardize limitation signals by running each paper through an extract-only pipeline. First, we take the author-mentioned Limitation text and the Reviewer Comment fields from the JSON. Each source is sent to GPT-4o mini with a strict “no paraphrasing” prompt to return verbatim limitation spans (author → limitations_author_extracted, reviewer → limitations_reviewer_extracted). We then pass both lists to a master GPT-4o mini step that deduplicates near-identical spans. This step also preserves provenance, marking whether a consolidated span came from the author, reviewers, or both. The final merged list is saved as limitations_consolidated. steps: - "Inputs: Author 'Limitation'; Reviewer Comment." - "Author extractor: GPT-4o mini returns verbatim limitation spans with source='author'." - "Reviewer extractor: GPT-4o mini returns verbatim limitation spans with source='reviewer'." - "Master consolidation (no generation): deduplicate/merge near-duplicates; pick an existing span; keep provenance." - "Outputs: limitations_author_extracted, limitations_reviewer_extracted, limitations_consolidated."

  • ACL: 'Limitation' ──> GPT-4o mini Extractor ──> limitations_author_extracted (ground-truth limitation)
  • NeurIPS: 'Limitation' ──> GPT-4o mini Extractor ──> limitations_author_extracted
    • 'Reviewer Comment' ──> GPT-4o mini Extractor ──> limitations_reviewer_extracted
    • limitations_author_extracted + limitations_reviewer_extracted ──> GPT-4o mini Merger ──> limitations_consolidated (ground-truth limitation)
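
A hedged sketch of this extract-and-merge flow using the OpenAI Python client; the prompts, helper names, and single-call merging step are illustrative assumptions, not the exact prompts used to build the dataset:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXTRACT_PROMPT = (  # illustrative "no paraphrasing" prompt
    "List every limitation mentioned in the text below as verbatim spans, "
    "one per line. Do not paraphrase or add anything.\n\n{text}"
)
MERGE_PROMPT = (
    "Merge these two lists of limitation spans, removing near-duplicates. "
    "Keep existing wording and mark each kept span as author, reviewer, or both.\n\n"
    "Author spans:\n{author}\n\nReviewer spans:\n{reviewer}"
)

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

def consolidate(paper: dict) -> dict:
    author = ask(EXTRACT_PROMPT.format(text=paper.get("Limitation", "")))
    reviewer = ask(EXTRACT_PROMPT.format(text=paper.get("Reviewer Comment", "")))
    merged = ask(MERGE_PROMPT.format(author=author, reviewer=reviewer))
    return {
        "limitations_author_extracted": author,
        "limitations_reviewer_extracted": reviewer,
        "limitations_consolidated": merged,
    }
```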

Intended uses:

  • "This dataset is useful for text generation, such as generating limitations (or other sections) to evaluate the model-generated text with ground truth." Also, this dataset can be used for
  • "Binary classification: detect whether a paper includes an explicit limitation (author/reviewer)."
  • "Retrieval & QA: retrieve limitation passages given a query (paper, section, topic)."
  • "Author–reviewer alignment: compare author-stated limitations vs reviewer-raised shortcomings."

Suggested metrics:

  • "We highly suggest to use our PointWise Evaluation approach to measure the performance between Ground truth and model generated text. (see the Citation section for paper)"

Other suggested metrics:

  • "ROUGE 1,2,L, BERTScore, BLEU, Cosine Similarity, Jaccard Simlarity"
  • LLM as a Judge (for Coherence, Faithfulness, Readability, Grammar, Overall Performance)
  • "F1 / macro-F1 (classification)"
  • "ROUGE / BERTScore (generation)"
  • "nDCG / MRR (retrieval)"

curation processing notes:

  • "PDFs were parsed and sectionized; headings preserved verbatim (e.g., '1 Introduction')."
  • "Author-side limitation spans prioritized; reviewer-side text aggregates multi-reviewer fields (Reviewer_1, Reviewer_2, …)."
  • "Heuristics avoid false positives (e.g., ignoring sentences that start with prompts like 'Did you …')."

## Examples

### ACL 2023 / ACL 2024

```json
{
  "File Number": "123",
  "Title": "Example Paper Title",
  "Limitation": "Our study is limited by dataset size and domain coverage ...",
  "abstractText": "We study ...",
  "1 Introduction": " ... ",
  "2 Related Work": " ... ",
  "3 Methodology": " ... "
}
```

(GPT-4o mini is used to extract the ground-truth spans from the Limitation field.)
### NeurIPS 2021–2022
```json
{
  "File Number": "123",
  "Title": "Example Paper Title",
  "Limitation": "Due to the lack of access, a major limitation of our study ...",
  "Limitation Refined": "Due to the lack of access ...",
  "Reviewer Comment": "Reviewer_2: I totally agree ..., Reviewer_3: The work provides ...",
  "abstractText": "We study ...",
  "1 Introduction": " ... ",
  "2 Related Work": " ... ",
  "3 Methodology": " ... "
}
```

(GPT-4o mini is used to extract the ground-truth spans from the Limitation Refined and Reviewer Comment fields.)

## Citation

This dataset accompanies the following work: Azher, Ibrahim Al; Mokarrama, Miftahul Jannat; Guo, Zhishuai; Choudhury, Sagnik Ray; Alhoori, Hamed (2025). BAGELS: Benchmarking the Automated Generation and Extraction of Limitations from Scholarly Text. arXiv preprint arXiv:2505.18207.

This work has been accepted at EMNLP 2025 (Findings).

If you use this dataset, please cite:

```bibtex
@article{azher2025bagels,
  title={BAGELS: Benchmarking the Automated Generation and Extraction of Limitations from Scholarly Text},
  author={Azher, Ibrahim Al and Mokarrama, Miftahul Jannat and Guo, Zhishuai and Choudhury, Sagnik Ray and Alhoori, Hamed},
  journal={arXiv preprint arXiv:2505.18207},
  year={2025}
}
```