EvoCUA: Evolving Computer Use Agent

πŸ₯‡ #1 Open-Source Model on OSWorld | A General-Purpose Multimodal Model Excelling at Computer Use


English | δΈ­ζ–‡

OSWorld Leaderboard

πŸ₯‡ #1 Open-Source Model on OSWorld Leaderboard (Jan 2026)


πŸ“’ Updates

  • 2026.01.13: Released EvoCUA-8B-20260105 β€” achieves 46.1% on OSWorld, competitive with 72B-level models while using fewer parameters! πŸ†•
  • 2026.01.05: Released EvoCUA-32B-20260105 with 56.7% on OSWorld, achieving #1 among open-source models πŸ₯‡

🌟 Highlights

  • πŸ₯‡ #1 Open-Source Model on OSWorld: Achieves a 56.7% task completion rate, the highest among all open-source models
  • πŸ“ˆ Significant Improvements: +11.7% over OpenCUA-72B (45.0%β†’56.7%) and +15.1% over Qwen3-VL thinking (41.6%β†’56.7%), with fewer parameters and half the step budget
  • πŸ–₯️ End-to-End Multi-Turn Automation: Operates Chrome, Excel, PowerPoint, VSCode, and more through screenshots and natural-language instructions (see the sketch after this list)
  • 🧠 Novel Training Method: Our data synthesis and training approach consistently improves Computer Use capability across multiple open-source VLMs without degrading general performance
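
At a high level, the agent runs a standard observe-act loop: it receives a screenshot, the model proposes the next GUI action as text, the action is executed in the desktop environment, and the new screenshot is fed back. The Python sketch below illustrates that loop only; every function in it is a placeholder stub, not part of the EvoCUA codebase (the real implementation lives in mm_agents/evocua and desktop_env/).

# Illustrative sketch of a screenshot-in, action-out computer-use loop.
# All functions here are placeholder stubs, NOT the EvoCUA API.

def capture_screenshot() -> bytes:
    # A real setup would grab the current frame from the desktop VM.
    return b""

def query_model(instruction: str, screenshot: bytes, history: list) -> str:
    # A real setup would call the deployed model and parse its reply
    # into an action string such as "click(0.42, 0.17)".
    return "DONE"

def execute_action(action: str) -> None:
    # A real setup would perform the click/type/scroll inside the environment.
    print(f"executing: {action}")

def run_task(instruction: str, max_steps: int = 50) -> None:
    history: list = []
    for _ in range(max_steps):
        screenshot = capture_screenshot()
        action = query_model(instruction, screenshot, history)
        if action == "DONE":  # the model signals task completion
            break
        execute_action(action)
        history.append((screenshot, action))

run_task("Open Chrome and search for OSWorld")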

πŸ“Š Performance Comparison

| Rank | Model | Open/Closed | Type | Max Steps | Score |
|------|-------|-------------|------|-----------|-------|
| 1 | Claude-sonnet-4-5 | πŸ”’ Closed | General | 100 | 62.9% |
| 2 | Seed-1.8 | πŸ”’ Closed | General | 100 | 61.9% |
| 3 | Claude-sonnet-4-5 | πŸ”’ Closed | General | 50 | 58.1% |
| 4 | EvoCUA-20260105 (Ours) | 🟒 Open | General | 50 | 56.7% πŸ₯‡ |
| 5 | DeepMiner-Mano-72B | πŸ”’ Closed | Specialized | 100 | 53.9% |
| 6 | UI-TARS-2-2509 | πŸ”’ Closed | General | 100 | 53.1% |
| 7 | EvoCUA (Previous Version) | πŸ”’ Closed | General | 50 | 50.3% |
| 8 | EvoCUA-8B-20260105 (Ours) | 🟒 Open | General | 50 | 46.1% |
| 9 | OpenCUA-72B | 🟒 Open | Specialized | 100 | 45.0% |
| ... | ... | ... | ... | ... | ... |
| 13 | Qwen3-VL-Flash | πŸ”’ Closed | General | 100 | 41.6% |

EvoCUA is #1 among all open-source models, achieving competitive results with a budget of only 50 steps. Human-level performance remains significantly higher, indicating substantial room for improvement.


πŸš€ Quick Start

Installation

Python 3.12 is recommended.

git clone https://github.com/meituan/EvoCUA.git
cd EvoCUA
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Model Download & Deployment

Download the EvoCUA model weights from Hugging Face and serve them with vLLM as an OpenAI-compatible inference server.

Recommended versions:

  • torch: 2.8.0+cu126
  • transformers: 4.57.3
  • vllm: 0.11.0
# 1) Download model weights
huggingface-cli download meituan/EvoCUA-32B-20260105 \
  --local-dir /path/to/EvoCUA-32B \
  --local-dir-use-symlinks False

# 2) Launch vLLM serving (recommend separate environment)
vllm serve /path/to/EvoCUA-32B \
  --served-model-name EvoCUA \
  --host 0.0.0.0 \
  --port 8080 \
  --tensor-parallel-size 2

# 3) Set environment variables
# Environment variables can be configured in .env file (see env.template for reference):
cp env.template .env
# Edit .env with your configurations, e.g.,
export OPENAI_API_KEY="dummy"
export OPENAI_BASE_URL="http://127.0.0.1:8080/v1"
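
Once the server is running, you can sanity-check it with any OpenAI-compatible client. The snippet below is a minimal smoke test, assuming a local screenshot.png and the base URL configured above; it is not the agent's actual prompt format (that is defined in mm_agents/evocua).

# Minimal smoke test against the vLLM OpenAI-compatible endpoint.
# Assumes a local screenshot.png; this is NOT the agent's real prompt format.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="dummy")

with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="EvoCUA",  # must match --served-model-name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is on this screen."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    temperature=0.01,
)
print(response.choices[0].message.content)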

Run Evaluation on OSWorld

python3 run_multienv_evocua.py \
  --headless \
  --provider_name aws \
  --observation_type screenshot \
  --model EvoCUA-S2 \
  --result_dir ./evocua_results \
  --test_all_meta_path evaluation_examples/test_nogdrive.json \
  --max_steps 50 \
  --num_envs 30 \
  --temperature 0.01 \
  --max_history_turns 4 \
  --coordinate_type relative \
  --resize_factor 32 \
  --prompt_style S2
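
Each run writes per-task trajectories under the result directory and aggregates outcomes into results.json (see lib_results_logger.py). The exact schema is not documented here, so the snippet below is only a sketch under the assumption that results.json maps task IDs to 0/1 scores; adjust it to the file's actual structure.

# Sketch: compute an overall success rate from the aggregated results file.
# ASSUMPTION: results.json maps task IDs to numeric scores (0 or 1);
# check the schema written by lib_results_logger.py and adapt as needed.
import json
from pathlib import Path

results = json.loads(Path("./evocua_results/results.json").read_text())
scores = [float(v) for v in results.values()]
print(f"{sum(scores) / len(scores):.1%} success over {len(scores)} tasks")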

πŸ“ Project Structure

EvoCUA/
β”œβ”€β”€ run_multienv_evocua.py      # Main entry point (multi-env parallel evaluation)
β”œβ”€β”€ lib_run_single.py           # Single task rollout logic (trajectory, screenshots, recording, scoring)
β”œβ”€β”€ lib_results_logger.py       # Real-time result aggregation to results.json
β”œβ”€β”€ desktop_env/                # OSWorld environment implementation
β”‚   β”œβ”€β”€ providers/              # VM providers (AWS/VMware/Docker/etc.)
β”‚   β”œβ”€β”€ controllers/            # Environment controllers
β”‚   └── evaluators/             # Task evaluators
β”œβ”€β”€ mm_agents/
β”‚   └── evocua/                 # EvoCUA agent (prompts, parsing, action generation)
└── evaluation_examples/        # OSWorld task configurations

πŸ“– About OSWorld

OSWorld is the most influential benchmark in the Computer Use Agent domain. It is adopted by leading AI organizations including OpenAI, Anthropic, ByteDance Seed, Moonshot AI, Zhipu AI, Step, and more. OSWorld evaluates agents' ability to complete real-world computer tasks through multi-turn interactions with actual desktop environments.


πŸ”— Resources


πŸ™ Acknowledgements

We sincerely thank the open-source community for their outstanding contributions to the Computer Use Agent field. We are grateful to Xinyuan Wang (OpenCUA) and Tianbao Xie (OSWorld) for their insightful discussions, valuable feedback on evaluation, and continuous support throughout this project. Their pioneering work has greatly inspired and advanced our research. We are committed to giving back to the community and will continue to open-source our research to advance the field.


πŸ“ Citation

If you find EvoCUA useful in your research, please consider citing:

@misc{evocua2026,
  title={EvoCUA: Evolving Computer Use Agent},
  author={Chong Peng* and Taofeng Xue*},
  year={2026},
  url={https://github.com/meituan/EvoCUA},
  note={* Equal contribution}
}

πŸ“œ License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.


Built with ❀️ by Meituan LongCat Team
