macOSWorld: A Multilingual Interactive Benchmark for GUI Agents

Pei Yang*, Hai Ci*, and Mike Zheng Shou
Show Lab, National University of Singapore

arXiv: https://arxiv.org/abs/2506.04135


⭐ Features

  • Interactive macOS environment: 30 native macOS apps, each with its macOS-exclusive user interface
  • Multilingual benchmarking: Tasks and environments available in English, Chinese, Arabic, Japanese, and Russian
  • Safety evaluation: Dedicated subset for benchmarking agents' resilience to context-deception attacks

🚀 Getting Started

macOSWorld consists of a local Python testbench script and cloud-hosted AWS macOS instances. The benchmark process involves four main steps:
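Concretely, the testbench drives the remote macOS desktop over VNC through an SSH tunnel (both vncdotool and sshtunnel appear in the dependencies below). The following is a hedged sketch of that plumbing, not the testbench's actual code; the host name, key path, username, and ports are placeholders:

from sshtunnel import SSHTunnelForwarder
from vncdotool import api

# Sketch only: forward the instance's VNC port (macOS Screen Sharing,
# 5900) to a local port, then drive the desktop through it.
with SSHTunnelForwarder(
    "ec2-xx-xx-xx-xx.ap-southeast-1.compute.amazonaws.com",  # placeholder
    ssh_username="ec2-user",                                 # assumed EC2 Mac default
    ssh_pkey="my_credential.pem",
    remote_bind_address=("127.0.0.1", 5900),
    local_bind_address=("127.0.0.1", 5901),
) as tunnel:
    client = api.connect("127.0.0.1::5901", password=None)
    client.captureScreen("screenshot.png")  # grab one frame as a test
    client.disconnect()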


Step 1: Local Environment Configuration

1.1. Base Environment Setup

conda create -n macosworld python=3.9.21
conda activate macosworld
pip install vncdotool==1.2.0 boto3==1.36.20 sshtunnel httpx[socks] openai anthropic google-auth google_cloud_aiplatform jupyter
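As a quick, optional sanity check before moving on, the core dependencies should import cleanly:

# Optional sanity check for the base environment.
import boto3
import sshtunnel
import openai
import anthropic
from vncdotool import api  # VNC client used to drive the remote macOS desktop

print("boto3", boto3.__version__)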

1.2. Model-Specific Configurations

For GPT-4o with SoM Annotations:

  1. Install additional dependencies:
conda activate macosworld
pip install timm easyocr paddlepaddle paddleocr einops==0.8.0 supervision==0.18.0 ultralytics==8.3.70
  2. Download model weights:
mkdir OmniParser
cd OmniParser
git clone https://huggingface.co/microsoft/OmniParser-v2.0
mv OmniParser-v2.0 weights
mv weights/icon_caption weights/icon_caption_florence
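To confirm the weights landed in the layout the renames above produce, a small check can help; note that the icon_detect folder name is an assumption about the OmniParser-v2.0 checkout, not something stated above:

from pathlib import Path

# Check that the renamed weight folders exist; icon_detect is assumed
# to ship with the OmniParser-v2.0 repository.
weights = Path("OmniParser/weights")
for subdir in ("icon_detect", "icon_caption_florence"):
    status = "ok" if (weights / subdir).is_dir() else "MISSING"
    print(f"{subdir}: {status}")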

For Gemini Models:

macOSWorld uses the Vertex AI API. Follow Google Cloud's service-account credentials guide to set up credentials and obtain a JSON credential file, then set the environment variable:

export GOOGLE_APPLICATION_CREDENTIALS='/path/to/gen-lang-client-xxx.json'
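To verify the credential file resolves before running the benchmark, a minimal check using google.auth (installed with the base environment above):

# Minimal credential check; google.auth.default() reads the
# GOOGLE_APPLICATION_CREDENTIALS environment variable set above.
import google.auth

credentials, project_id = google.auth.default()
print("Vertex AI will authenticate against project:", project_id)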

For ShowUI:

conda activate macosworld
pip install torch==2.6.0 qwen-vl-utils transformers==4.47.0 accelerate==0.26.0
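For reference, ShowUI-2B is a Qwen2-VL-based checkpoint, so a minimal load sketch (not necessarily the testbench's exact configuration) looks like:

import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Minimal load sketch; the testbench may configure the model differently.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "showlab/ShowUI-2B", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("showlab/ShowUI-2B")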

For UI-TARS:

  1. Install dependencies:
conda create -n uitars python=3.9.21
pip install -U transformers==4.49.0  
pip install vllm==0.6.6 --extra-index-url https://download.pytorch.org/whl/cu124
  2. Start the vLLM service on the same device running the testbench (a smoke test follows the note below):
NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1,2,3 vllm serve "bytedance-research/UI-TARS-7B-DPO" --served-model-name UI-TARS-7B-DPO --limit-mm-per-prompt image=3 -tp 4

Note: The -tp (tensor-parallel size) flag sets the number of GPUs to use and must be a common divisor of 28 and 16.
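Once the server is up, you can smoke-test it through vLLM's OpenAI-compatible endpoint; the port 8000 and dummy API key below are vLLM defaults, not macOSWorld-specific settings:

from openai import OpenAI

# Query the locally served UI-TARS model through the OpenAI-compatible API.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="empty")
response = client.chat.completions.create(
    model="UI-TARS-7B-DPO",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)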


Step 2: AWS Environment Configuration

macOSWorld requires an AWS-hosted cloud instance. Follow the detailed setup instructions in our AWS Configuration Guide.
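Before moving on, you can confirm the instance from the guide is visible to your AWS credentials with a short boto3 check; the instance ID below is the same placeholder used in Step 3:

import boto3

# Describe the benchmark instance; replace the placeholder ID with yours.
ec2 = boto3.client("ec2")
response = ec2.describe_instances(InstanceIds=["i-0d5f51a1d2bc1edb0"])
instance = response["Reservations"][0]["Instances"][0]
print(instance["State"]["Name"], instance.get("PublicDnsName", ""))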


Step 3: Running the Benchmark

3.1. Execute Benchmark

Configure your environment variables and run the testbench. Replace the 🧩 placeholders with your actual values:

# AWS Configuration
export AWS_ACCESS_KEY_ID=🧩'AKIAIOSFODNN7EXAMPLE'
export AWS_SECRET_ACCESS_KEY=🧩'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
export AWS_DEFAULT_REGION='ap-southeast-1'

# API Keys (configure as needed)
export OPENAI_API_KEY=🧩'sk-proj-...'                    # For OpenAI models
export ANTHROPIC_API_KEY=🧩'sk-ant-...'                  # For Anthropic models
export GOOGLE_APPLICATION_CREDENTIALS=🧩'/path/to/gen-lang-client-xxx.json'  # For Gemini models

# Set credential permission
chmod 400 credential.pem

# Run the benchmark
python testbench.py \
    --instance_id 🧩i-0d5f51a1d2bc1edb0 \
    --ssh_host 🧩ec2-13-250-104-211.ap-southeast-1.compute.amazonaws.com \
    --ssh_pkey 🧩my_credential.pem \
    --gui_agent_name 🧩gpt-4o-2024-08-06 \
    --paths_to_eval_tasks 🧩./tasks/sys_apps ./tasks/sys_and_interface ./tasks/productivity ./tasks/media ./tasks/file_management ./tasks/advanced ./tasks/multi_apps ./tasks/safety \
    --languages 🧩task_en_env_en task_zh_env_zh task_ar_env_ar task_ja_env_ja task_ru_env_ru \
    --base_save_dir ./results/gpt_4o \
    --max_steps 15 \
    --snapshot_recovery_timeout_seconds 1200 \
    --task_step_timeout 120
Parameter descriptions:

instance_id: EC2 instance ID
ssh_host: DNS name or IP address of the EC2 instance
ssh_pkey: Local path to your SSH credential .pem file
gui_agent_name: GUI agent identifier (see supported models below)
paths_to_eval_tasks: Directory paths containing evaluation task JSON files
languages: Language pairs in the format task_<lang>_env_<lang>
base_save_dir: Local directory for storing evaluation results
max_steps: Maximum dialogue turns per task
snapshot_recovery_timeout_seconds: Timeout in seconds for snapshot recovery (usually needs no adjustment)
task_step_timeout: Timeout in seconds for each task step

Supported GUI Agents

  1. OpenAI GPT Series:

    • gpt-4o, gpt-4o-2024-08-06
    • With SoM: gpt-4o/omniparser, gpt-4o-2024-08-06/omniparser
    • Computer Use: openai/computer-use-preview
  2. Google Gemini Series:

    • gemini-1.5-pro-002, gemini-2.5-pro-preview-03-25
  3. Anthropic Claude:

    • claude-3-7-sonnet-20250219/computer-use-2025-01-24
  4. Open Source Models:

    • UI-TARS-7B-DPO
    • showlab/ShowUI-2B

Supported Language Codes: English (en), Chinese (zh), Arabic (ar), Japanese (ja), Russian (ru)

3.2. Handle Interruptions

If the benchmark is interrupted or fails to complete all tasks:

python cleanup.py --base_save_dir /path/to/base_save_dir

Run cleanup.py on the base_save_dir before rerunning the benchmark; previously completed tasks will not be re-executed.

3.3. Aggregate Results

Use the provided Jupyter notebook to compile results:

scripts/aggregate_results.ipynb
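For a rough tally before opening the notebook, a sketch like the one below can work; note that the per-task file name "result.json" and the "success" field are assumptions about the saved format, not the repository's actual schema:

import json
from pathlib import Path

# Hypothetical tally over base_save_dir; adjust the file name and field
# to match the format the testbench actually writes.
results_dir = Path("./results/gpt_4o")
outcomes = [
    json.load(open(path)).get("success", False)
    for path in results_dir.rglob("result.json")
]
print(f"{sum(outcomes)}/{len(outcomes)} tasks succeeded")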

Step 4: Releasing AWS Resources

After completing the benchmark, follow our AWS Cleanup Guide to terminate cloud resources and avoid unnecessary charges.


🤖 Benchmarking Other Agents

To benchmark other agents, follow these two steps:

  1. Create agent/your_custom_agent.py, either by modifying an existing agent or by starting from agent/template_for_custom_agent.py (a rough skeleton follows this list).
  2. Register your agent in agent/get_gui_agent.py.
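As a rough skeleton of step 1, the class and method names below are illustrative assumptions; agent/template_for_custom_agent.py defines the actual interface:

# Illustrative skeleton only; defer to agent/template_for_custom_agent.py
# for the real method names and signatures.
class YourCustomAgent:
    def __init__(self, model_name: str):
        self.model_name = model_name

    def predict(self, screenshot: bytes, instruction: str) -> str:
        """Map an observation and task instruction to an action string.

        Replace this stub with calls to your model.
        """
        raise NotImplementedError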

⚡ Performance Optimization

Each task evaluation takes approximately 15-20 minutes, with snapshot recovery being the primary bottleneck. Here are two approaches to reduce recovery time:

Option 1: Manual Environment Recovery

  1. Bypass Snapshot Recovery: Add the --override_env_reset flag to testbench.py when evaluating similar tasks
  2. Manual Setup: Launch and connect to the environment via VNC before starting the benchmark
  3. Manual Recovery: When the testbench displays (pdb), manually restore the environment to its original state, then type c to continue

Option 2: Community Implementations

Consider using community VMware-based implementations that may offer performance improvements.


🤝 Acknowledgements

The authors thank Kevin Qinghong Lin, Zhiqiang Chen, Noorbakht Khan, Brandon Ng, Mingyu Ouyang, Siyuan Hu, Xiangwu Guo, Henry Hengyuan Zhao, Difei Gao, Christopher Rawles, and Kun Shao for their valuable discussions and feedback.


📄 Citation

@article{macosworld,
    title={macOSWorld: A Multilingual Interactive Benchmark for GUI Agents}, 
    author={Pei Yang and Hai Ci and Mike Zheng Shou},
    year={2025},
    eprint={2506.04135},
    archivePrefix={arXiv},
    primaryClass={cs.AI},
    url={https://arxiv.org/abs/2506.04135}, 
}
