CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning

This repository contains the CODA-PLANNER-TARS-32B model, presented in the paper CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning.

Check out our GitHub repository for more implementation details! You can also find the paper on arXiv.

Abstract

Autonomous agents for Graphical User Interfaces (GUIs) face significant challenges in specialized domains such as scientific computing, where both long-horizon planning and precise execution are required. Existing approaches suffer from a trade-off: generalist agents excel at planning but perform poorly in execution, while specialized agents demonstrate the opposite weakness. Recent compositional frameworks attempt to bridge this gap by combining a planner and an actor, but they are typically static and non-trainable, which prevents adaptation from experience. This is a critical limitation given the scarcity of high-quality data in scientific domains. To address these limitations, we introduce CODA, a novel and trainable compositional framework that integrates a generalist planner (Cerebrum) with a specialist executor (Cerebellum), trained via a dedicated two-stage pipeline. In the first stage, Specialization, we apply a decoupled GRPO approach to train an expert planner for each scientific application individually, bootstrapping from a small set of task trajectories. In the second stage, Generalization, we aggregate all successful trajectories from the specialized experts to build a consolidated dataset, which is then used for supervised fine-tuning of the final planner. This equips CODA with both robust execution and cross-domain generalization. Evaluated on four challenging applications from the ScienceBoard benchmark, CODA significantly outperforms baselines and establishes a new state of the art among open-source models.

Features

CODA introduces a novel and trainable compositional framework for GUI agents, designed with the following key features:

  • Dual-Brain Architecture: Integrates a generalist planner (Cerebrum) with a specialist executor (Cerebellum); a minimal handoff sketch follows this list.
  • Decoupled Reinforcement Learning: Employs a dedicated two-stage pipeline (Specialization and Generalization) for training.
  • Robust Execution: Achieves precise execution in specialized scientific computing domains.
  • Cross-Domain Generalization: Demonstrates strong generalization capabilities across various scientific applications.
  • State-of-the-Art Performance: Significantly outperforms baselines on the ScienceBoard benchmark.
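
The planner/executor handoff can be pictured as two chained calls against the OpenAI-compatible endpoints deployed in the Inference section below. The sketch here is purely illustrative: the URLs, prompts, and response parsing are placeholders we chose for the example, not the project's actual message formats (those live in qwenvl_test.py and the sci framework).

# Illustrative only: URLs, prompts, and parsing are placeholders, not CODA's real protocol.
PLANNER_URL="http://localhost:8000/v1/chat/completions"   # CODA-PLANNER-TARS-32B (see Inference below)
EXECUTOR_URL="http://localhost:8001/v1/chat/completions"  # UI-TARS-1.5-7B (see Inference below)

# 1) Cerebrum: ask the planner for the next high-level step of a task.
curl -s "$PLANNER_URL" -H "Content-Type: application/json" -d '{
  "model": "qwen32b",
  "messages": [{"role": "user", "content": "Task: open the star browser in Celestia. What is the next GUI step?"}]
}' | python -c "import sys, json; print(json.load(sys.stdin)['choices'][0]['message']['content'])"

# 2) Cerebellum: ask the executor to ground a planner step into a concrete GUI action.
curl -s "$EXECUTOR_URL" -H "Content-Type: application/json" -d '{
  "model": "tars1.5-grounding",
  "messages": [{"role": "user", "content": "Ground this step into a click: open the Navigation menu"}]
}'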

Usage

For detailed installation instructions and inference examples, please refer to the official GitHub repository.

Installation

conda create -n coda python=3.11 
conda activate coda
pip install vllm==0.8.5.post1
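
As an optional sanity check (not part of the original instructions), you can confirm that the pinned vLLM version is importable in the new environment:

python -c "import vllm; print(vllm.__version__)"   # expect 0.8.5.post1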

Inference

Prepare the ScienceBoard environment: replace the sci folder in ScienceBoard with our ScienceBoard_CODA/sci, and put qwenvl_test.py under the ScienceBoard base folder.
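
This preparation step amounts to two copies; the layout below is an assumption (it presumes ScienceBoard and this repo's ScienceBoard_CODA folder sit side by side, and that qwenvl_test.py ships inside ScienceBoard_CODA), so adjust the paths to your checkout:

# assumes ScienceBoard and ScienceBoard_CODA are cloned side by side in the current directory
rm -rf ScienceBoard/sci
cp -r ScienceBoard_CODA/sci ScienceBoard/sci
cp ScienceBoard_CODA/qwenvl_test.py ScienceBoard/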

# Use the conda environment above (vllm==0.8.5.post1) to deploy the models and reproduce our results.
# Deploy the planner CODA-PLANNER-TARS-32B
vllm serve OpenIXCLab/CODA-PLANNER-TARS-32B \
    --served-model-name "qwen32b" \
    --host 0.0.0.0 \
    --port "${PORT_1}" \
    --tensor-parallel-size 4 &

# Deploy the executor UI-TARS-1.5-7B
CUDA_VISIBLE_DEVICES=4,5 vllm serve ByteDance-Seed/UI-TARS-1.5-7B \
    --served-model-name "tars1.5-grounding" \
    --host 0.0.0.0 \
    --port "${PORT_2}" \
    --tensor-parallel-size 2 &
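
Once both servers are up, a quick check against vLLM's OpenAI-compatible /v1/models endpoint (run on the serving host; localhost is an assumption here) should list the two served model names:

curl -s "http://localhost:${PORT_1}/v1/models"   # should list "qwen32b"
curl -s "http://localhost:${PORT_2}/v1/models"   # should list "tars1.5-grounding"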

# In the ScienceBoard environment, run the agent evaluation.
export SOFTWARE='Celestia'
export SUBFOLDER="planner_ans"
export DEBUG_LOG=0
export SERVER_URL="http://YOUR.PLANNER.ADDR:PORT_1/v1/chat/completions" # qwen32b for the baseline, coda-1.0-32b for our planner
export EXECUTOR_URL="http://YOUR.EXECUTOR.ADDR:PORT_2" # UI-TARS-1.5 executor address
export MODEL_NAME="qwen32b"
export NO_CONTEXT_IMAGE=0
export SPLITE=8
export QWEN_PLANNER=1
export PLANNER_ANS=1

mkdir -p logs # ensure the log directory for per-VM output exists
for i in {0..7}; do # run 8 VMs in parallel
    export VM_PATH="vmware_vm_data/Ubuntu${i}/Ubuntu${i}.vmx"
    # Set the VM index
    export INDEX=$i
    if [ $i -eq 0 ]; then
        # Process i=0: show output in terminal
        timeout 90m python qwenvl_test.py &
    else
        # Process i>0: redirect output to log file
        timeout 90m python qwenvl_test.py > "logs/vm${i}_output.log" 2>&1 &
    fi

    sleep 10s
done
wait
sleep 10s
echo "All tasks completed."

Citation

If you find our work helpful, please consider citing:

@misc{sun2025codacoordinatingcerebrumcerebellum,
      title={CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning}, 
      author={Zeyi Sun and Yuhang Cao and Jianze Liang and Qiushi Sun and Ziyu Liu and Zhixiong Zhang and Yuhang Zang and Xiaoyi Dong and Kai Chen and Dahua Lin and Jiaqi Wang},
      year={2025},
      eprint={2508.20096},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.20096}, 
}


@misc{sun2025seagentselfevolvingcomputeruse,
      title={SEAgent: Self-Evolving Computer Use Agent with Autonomous Learning from Experience}, 
      author={Zeyi Sun and Ziyu Liu and Yuhang Zang and Yuhang Cao and Xiaoyi Dong and Tong Wu and Dahua Lin and Jiaqi Wang},
      year={2025},
      eprint={2508.04700},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.04700}, 
}

License

Usage and License Notices: The code is licensed under the Apache License 2.0. The data is licensed for research use only under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License. Use should also abide by OpenAI's terms of use: https://openai.com/policies/terms-of-use

Acknowledgement

We sincerely thank the UI-TARS, ScienceBoard, and R1-V projects for providing their open-source resources.
