Papers
arxiv:2508.14704

MCP-Universe: Benchmarking Large Language Models with Real-World Model Context Protocol Servers

Published on Aug 20
· Submitted by Ziyang on Aug 21
Abstract

MCP-Universe is a comprehensive benchmark designed to evaluate large language models in realistic tasks through interaction with real-world MCP servers, addressing challenges like long-horizon reasoning and unfamiliar tool spaces.

AI-generated summary

The Model Context Protocol has emerged as a transformative standard for connecting large language models to external data sources and tools, rapidly gaining adoption across major AI providers and development platforms. However, existing benchmarks are overly simplistic and fail to capture real application challenges such as long-horizon reasoning and large, unfamiliar tool spaces. To address this critical gap, we introduce MCP-Universe, the first comprehensive benchmark specifically designed to evaluate LLMs in realistic and hard tasks through interaction with real-world MCP servers. Our benchmark encompasses 6 core domains spanning 11 different MCP servers: Location Navigation, Repository Management, Financial Analysis, 3D Design, Browser Automation, and Web Searching. To ensure rigorous evaluation, we implement execution-based evaluators, including format evaluators for agent format compliance, static evaluators for time-invariant content matching, and dynamic evaluators that automatically retrieve real-time ground truth for temporally sensitive tasks. Through extensive evaluation of leading LLMs, we find that even SOTA models such as GPT-5 (43.72%), Grok-4 (33.33%) and Claude-4.0-Sonnet (29.44%) exhibit significant performance limitations. In addition, our benchmark poses a significant long-context challenge for LLM agents, as the number of input tokens increases rapidly with the number of interaction steps. Moreover, it introduces an unknown-tools challenge, as LLM agents often lack familiarity with the precise usage of the MCP servers. Notably, enterprise-level agents like Cursor cannot achieve better performance than standard ReAct frameworks. Beyond evaluation, we open-source our extensible evaluation framework with UI support, enabling researchers and practitioners to seamlessly integrate new agents and MCP servers while fostering innovation in the rapidly evolving MCP ecosystem.
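
The three evaluator styles described above map onto simple check functions. Below is a minimal illustrative sketch in Python; it is not the paper's implementation, and the "price" field, the example.com quote URL, and the 1% tolerance are assumptions made purely for illustration.

```python
import json
import urllib.request

def format_evaluator(answer: str) -> bool:
    """Format compliance: the agent must reply with a JSON object containing a 'price' field."""
    try:
        return "price" in json.loads(answer)
    except json.JSONDecodeError:
        return False

def static_evaluator(answer: str, expected: str) -> bool:
    """Static check: time-invariant content matching (e.g., a fixed repository name)."""
    return expected.lower() in answer.lower()

def dynamic_evaluator(answer: str, ticker: str) -> bool:
    """Dynamic check: fetch real-time ground truth at evaluation time, because the
    correct value (e.g., a live stock price) changes between runs. The URL below is a
    placeholder for whatever live source a given task would query."""
    with urllib.request.urlopen(f"https://example.com/quote/{ticker}") as resp:
        truth = float(json.load(resp)["price"])
    predicted = float(json.loads(answer)["price"])
    return abs(predicted - truth) / truth < 0.01  # accept answers within 1% of the live value
```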

Community

Paper author · Paper submitter

🚀 MCP-Universe: Real-World AI Agent Evaluation Framework
👋 Excited to share our latest work on evaluating AI agents in real-world scenarios:

📄 Paper: https://arxiv.org/abs/2508.14704
🔗 GitHub: https://github.com/SalesforceAIResearch/MCP-Universe
🌐 Website: https://mcp-universe.github.io/
💬 Discord: https://discord.gg/7k8YMFJnjn

What makes this special?

✅ No synthetic benchmarks: actual MCP server interactions (see the client sketch below)
✅ Multi-domain coverage: 3D design (Blender), browser automation, financial analysis, location navigation, repository management, web search
✅ Complex multi-step tasks that require planning and action execution
✅ Dynamic ground truth, not static datasets
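
For a sense of what "actual MCP server interactions" involve at the protocol level, here is a small client sketch using the official MCP Python SDK (the `mcp` package). The GitHub reference server launched via npx and the token variable are only an example of the kind of real server an agent talks to; this is not MCP-Universe code.

```python
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch a real MCP server as a subprocess (here the GitHub reference server,
    # which reads a personal access token from its environment).
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-github"],
        env={"GITHUB_PERSONAL_ACCESS_TOKEN": os.environ.get("GITHUB_PERSONAL_ACCESS_TOKEN", "")},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The agent only sees the advertised tool schemas -- the
            # "unknown tools" setting the benchmark stresses.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

if __name__ == "__main__":
    asyncio.run(main())
```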

📊 Results

Even the best models struggle with real-world tasks:

  • GPT-5: 43.72% success rate
  • Grok-4: 33.33% success rate
  • Claude-4.0-Sonnet: 29.44% success rate

This shows there's still a huge gap between current capabilities and real-world agent performance!

🔧 For Researchers & Developers

The framework provides:

  • Custom benchmark creation tools
  • Agent orchestration system (illustrative sketch below)
  • Detailed evaluation reports
  • Multi-server integration support
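
As a rough sketch of what an agent orchestration loop over MCP servers can look like (and why input tokens grow quickly with the number of interaction steps, as noted in the abstract), here is a hypothetical ReAct-style loop. `call_llm`, `mcp_sessions`, and the JSON action format are placeholders; this is not the framework's actual API.

```python
import json

def run_agent(task: str, mcp_sessions: dict, call_llm, max_steps: int = 20) -> str:
    """Hypothetical ReAct-style loop. The full transcript of prior tool observations is
    re-sent to the model at every step, so the context length grows with the step count."""
    transcript = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(transcript)  # model returns JSON: either a final answer or a tool call
        transcript.append({"role": "assistant", "content": reply})
        decision = json.loads(reply)
        if decision.get("final_answer") is not None:
            return decision["final_answer"]
        # Route the tool call to whichever MCP server exposes the requested tool.
        session = mcp_sessions[decision["server"]]
        observation = session.call_tool(decision["tool"], decision["arguments"])
        transcript.append({"role": "user", "content": f"Observation: {observation}"})
    return "No final answer within the step budget."
```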

Perfect for anyone working on tool-using agents, multi-step reasoning, or real-world AI applications. Would love to hear your thoughts and see how the community uses this! 🤖✨

·

Hi, which version of GPT-5 are you using? It would be great to clarify that.


Paper author · Paper submitter

Many have asked which modes of GPT-5 and other models we test in MCP-Universe. We've now added more details to our leaderboard (https://mcp-universe.github.io/). We'll keep updating our website as more LLMs are tested, like DeepSeek-3.1. Feedback is always welcome!

·

Thank you! I was wondering which provider was used for the GPT-OSS-120B model? Because in a related paper (https://huggingface.co/papers/2508.20453), GPT-OSS scores really well on tool calling. The authors used OpenRouter, as seen here.

Summary of MCP-Universe Paper

This paper introduces MCP-Universe, a comprehensive benchmark for evaluating large language models (LLMs) in realistic tasks through interaction with real-world Model Context Protocol (MCP) servers.

Key Highlights:

  • Addresses critical gaps in existing benchmarks that are overly simplistic
  • Encompasses 6 core domains across 11 different MCP servers: Location Navigation, Repository Management, Financial Analysis, 3D Design, Browser Automation, and Web Searching
  • Uses execution-based evaluators including format, static, and dynamic evaluators
  • Even SOTA models show significant limitations: GPT-5 (43.72%), Grok-4 (33.33%), Claude-4.0-Sonnet (29.44%)
  • Introduces long-context and unknown-tools challenges for LLM agents
  • Open-sources an extensible evaluation framework with UI support

Impact: This benchmark reveals substantial performance gaps between current AI capabilities and real-world agent requirements, providing a crucial tool for advancing LLM agent development.

