---
license: apache-2.0
task_categories:
  - reinforcement-learning
language:
  - en
tags:
  - offlinerl
pretty_name: neorl
size_categories:
  - 100M<n<1B
configs:
  - config_name: DMSD
    data_files:
      - split: train
        path: DMSD/train/*.parquet
      - split: val
        path: DMSD/val/*.parquet
  - config_name: Fusion
    data_files:
      - split: train
        path: Fusion/train/*.parquet
      - split: val
        path: Fusion/val/*.parquet
  - config_name: Pipeline
    data_files:
      - split: train
        path: Pipeline/train/*.parquet
      - split: val
        path: Pipeline/val/*.parquet
  - config_name: RandomFrictionHopper
    data_files:
      - split: train
        path: RandomFrictionHopper/train/*.parquet
      - split: val
        path: RandomFrictionHopper/val/*.parquet
  - config_name: RocketRecovery
    data_files:
      - split: train
        path: RocketRecovery/train/*.parquet
      - split: val
        path: RocketRecovery/val/*.parquet
  - config_name: SafetyHalfCheetah
    data_files:
      - split: train
        path: SafetyHalfCheetah/train/*.parquet
      - split: val
        path: SafetyHalfCheetah/val/*.parquet
  - config_name: Salespromotion
    data_files:
      - split: train
        path: Salespromotion/train/*.parquet
      - split: val
        path: Salespromotion/val/*.parquet
  - config_name: Simglucose
    data_files:
      - split: train
        path: Simglucose/train/*.parquet
      - split: val
        path: Simglucose/val/*.parquet
  - config_name: Simglucose-high
    data_files:
      - split: train
        path: Simglucose-high/train/*.parquet
      - split: val
        path: Simglucose-high/val/*.parquet
---

# Dataset Card for NeoRL-2: Near Real-World Benchmarks for Offline Reinforcement Learning

## Dataset Summary

NeoRL-2 is a collection of seven near-real-world offline-RL datasets plus their evaluation simulators. This repo provides the offline-RL datasets, while the simulators are available at https://github.com/polixir/NeoRL2.
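
For completeness, here is a minimal sketch of rolling out a policy in the companion simulators. It assumes the `neorl2` package from that repository exposes a gymnasium-style `make()`/`reset()`/`step()` interface; check the GitHub README for the exact installation steps and API.

```python
# Sketch only: assumes the `neorl2` package (github.com/polixir/NeoRL2)
# provides a gymnasium-style environment factory for the benchmark tasks.
import neorl2

env = neorl2.make("Pipeline")  # task names match the dataset configs
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
```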

Each task injects one or more realistic challenges—delays, exogenous disturbances, global safety constraints, traditional rule-based data, and/or severe data scarcity—into a lightweight control environment.


## Dataset Details

| Challenge | Brief description | Appears in |
|---|---|---|
| Delay | Long and variable observation-to-effect latency | Pipeline, Simglucose |
| External factors | State variables the agent cannot influence (e.g. wind, ground friction) | RocketRecovery, RandomFrictionHopper, Simglucose |
| Global safety constraints | Hard limits that must never be violated | SafetyHalfCheetah |
| Rule-based behaviour policy | Trajectories from a PID or other deterministic controller | DMSD |
| Severely limited data | Tiny datasets reflecting expensive experimentation | Fusion, RocketRecovery, SafetyHalfCheetah |
- Curated by: Polixir Technologies
- Paper: Gao et al., "NeoRL-2: Near Real-World Benchmarks for Offline Reinforcement Learning with Extended Realistic Scenarios", arXiv:2503.19267 (2025)
- Repository (hosts the simulation environments for these datasets): https://github.com/polixir/NeoRL2
- Task: offline / batch reinforcement learning

## Uses

### Direct Use

- Benchmarking offline-RL algorithms under near-deployment conditions
- Studying robustness to delays, safety limits, exogenous disturbances, and data scarcity
- Developing data-efficient model-based or model-free methods able to outperform conservative behaviour policies

#### Loading example

```python
from datasets import load_dataset

dmsd = load_dataset("polixir/neorl2", "DMSD", split="train")
sample = dmsd[0]  # access fields by name rather than relying on dict ordering
state, action, reward = sample["observations"], sample["actions"], sample["rewards"]
next_state, done = sample["next_observations"], sample["terminals"]
```
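
Each config also defines a `val` split (see the YAML header above); loading a config without `split=` yields both splits at once. A short usage sketch:

```python
from datasets import load_dataset

# Loading a config without `split=` returns a DatasetDict with both splits
fusion = load_dataset("polixir/neorl2", "Fusion")
train_data, val_data = fusion["train"], fusion["val"]
```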

### Out-of-Scope Use

- Online RL with unlimited interaction
- Safety-critical decision-making without extensive validation on the real system

## Dataset Structure

Each Parquet row contains the following fields:

| Key | Type | Description |
|---|---|---|
| observations | float32[] | Raw observation vector (dim varies per task) |
| actions | float32[] | Continuous action taken by the behaviour policy |
| rewards | float32 | Scalar reward |
| next_observations | float32[] | Observation at the next timestep |
| terminals | bool | True if the episode ended (termination or safety) |

Typical dataset sizes are ≈100k transitions; Fusion, RocketRecovery, and SafetyHalfCheetah are smaller by design.
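
Many offline-RL codebases expect flat NumPy arrays rather than a `datasets` object. Below is a minimal conversion sketch; the helper name `to_transition_arrays` is hypothetical, and it assumes a fixed observation/action dimension within each task.

```python
import numpy as np
from datasets import load_dataset

def to_transition_arrays(task: str, split: str = "train") -> dict:
    """Stack one task's transitions into flat arrays for a replay buffer (sketch)."""
    ds = load_dataset("polixir/neorl2", task, split=split).with_format("numpy")
    return {
        "observations": np.stack(ds["observations"]),
        "actions": np.stack(ds["actions"]),
        "rewards": np.asarray(ds["rewards"], dtype=np.float32),
        "next_observations": np.stack(ds["next_observations"]),
        "terminals": np.asarray(ds["terminals"], dtype=bool),
    }

buffer = to_transition_arrays("Pipeline")
```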


## Baseline Benchmark

### Normalised return (0–100)

| Task | Data | BC | CQL | EDAC | MCQ | TD3BC | MOPO | COMBO | RAMBO | MOBILE |
|---|---|---|---|---|---|---|---|---|---|---|
| Pipeline | 69.25 | 68.6 ± 13.4 | 81.1 ± 8.3 | 72.9 ± 4.6 | 49.7 ± 7.4 | 81.9 ± 7.5 | −26.3 ± 92.7 | 55.5 ± 4.3 | 24.1 ± 74.4 | 65.5 ± 4.1 |
| Simglucose | 73.9 | 75.1 ± 0.7 | 11.0 ± 3.4 | 8.1 ± 0.3 | 29.6 ± 5.7 | 74.2 ± 0.4 | 34.6 ± 28.1 | 23.2 ± 2.5 | 10.8 ± 0.9 | 9.3 ± 0.2 |
| RocketRecovery | 75.3 | 72.8 ± 2.5 | 74.3 ± 1.4 | 65.7 ± 9.8 | 76.5 ± 0.8 | 79.7 ± 0.9 | −27.7 ± 105.6 | 74.7 ± 0.7 | −44.2 ± 263.0 | 43.7 ± 17.5 |
| RandomFrictionHopper | 28.7 | 28.0 ± 0.3 | 33.0 ± 1.2 | 34.7 ± 1.3 | 31.7 ± 1.3 | 29.5 ± 0.7 | 32.5 ± 5.8 | 34.1 ± 4.7 | 29.6 ± 7.2 | 35.1 ± 0.5 |
| DMSD | 56.6 | 65.1 ± 1.6 | 70.2 ± 1.1 | 78.7 ± 2.3 | 77.8 ± 1.2 | 60.0 ± 0.8 | 68.2 ± 0.7 | 68.3 ± 0.4 | 76.2 ± 1.9 | 64.4 ± 0.8 |
| Fusion | 48.8 | 55.2 ± 0.3 | 55.9 ± 1.9 | 58.0 ± 0.7 | 49.7 ± 1.1 | 54.6 ± 0.8 | −11.6 ± 22.2 | 55.5 ± 0.3 | 59.6 ± 5.0 | 5.0 ± 7.1 |
| SafetyHalfCheetah | 73.6 | 70.2 ± 0.4 | 71.2 ± 0.6 | 53.1 ± 11.1 | 54.7 ± 4.3 | 68.6 ± 0.4 | 23.7 ± 24.3 | 57.8 ± 13.3 | −422.4 ± 307.5 | 8.7 ± 3.9 |
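
The normalisation constants are defined by the benchmark itself; as a rough sketch, the usual convention maps a per-task low reference return to 0 and a high reference return to 100. The convention and reference values below are assumptions for illustration only, see the NeoRL-2 paper for the exact definition.

```python
def normalised_return(raw_return: float, low: float, high: float) -> float:
    # Assumed convention: the task's low reference return maps to 0,
    # its high reference return maps to 100.
    return 100.0 * (raw_return - low) / (high - low)
```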

### How often do algorithms beat the behaviour policy?

| Margin | BC | CQL | EDAC | MCQ | TD3BC | MOPO | COMBO | RAMBO | MOBILE |
|---|---|---|---|---|---|---|---|---|---|
| ≥ 0 | 3 | 4 | 4 | 4 | 6 | 2 | 3 | 3 | 2 |
| ≥ +3 | 2 | 4 | 4 | 2 | 4 | 2 | 3 | 2 | 2 |
| ≥ +5 | 2 | 3 | 3 | 1 | 2 | 1 | 3 | 2 | 2 |
| ≥ +10 | 0 | 2 | 1 | 1 | 1 | 1 | 1 | 2 | 0 |
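
These counts can be derived mechanically from the score table above. A small sketch, where `scores` and `data` are hypothetical dicts holding mean normalised returns and behaviour-policy scores per task:

```python
# scores: {task: {algo: mean_normalised_return}}, data: {task: behaviour_policy_score}
def count_beats(scores: dict, data: dict, algo: str, margin: float) -> int:
    """Number of tasks where `algo` beats the behaviour policy by at least `margin`."""
    return sum(scores[task][algo] >= data[task] + margin for task in scores)
```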

### Key conclusions

- No baseline "solves" any task (score ≥ 95); the best result is TD3BC's 81.9 on Pipeline.
- TD3BC is the most reliable algorithm, surpassing the behaviour data in 6/7 tasks, more than any other baseline.
- Model-based methods (MOPO, RAMBO, MOBILE) are brittle, with large variance and occasional catastrophic divergence.
- DMSD is the easiest task: several algorithms exceed the behaviour policy by 20+ points thanks to the simple PID behaviour data.
- SafetyHalfCheetah is the hardest: every method trails the data due to strict safety penalties and limited samples.
- In general, model-free approaches show smaller error bars than model-based ones, underlining the difficulty of learning accurate dynamics under delay, disturbance, and data scarcity.

## Citation

```bibtex
@misc{gao2025neorl2,
  title         = {NeoRL-2: Near Real-World Benchmarks for Offline Reinforcement Learning with Extended Realistic Scenarios},
  author        = {Songyi Gao and Zuolin Tu and Rong-Jun Qin and Yi-Hao Sun and Xiong-Hui Chen and Yang Yu},
  year          = {2025},
  eprint        = {2503.19267},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}
```

## Contact

Questions or bug reports? Please open an issue on the [NeoRL-2 GitHub repo](https://github.com/polixir/NeoRL2).