OmniBench

A Scalable Multi-Dimensional Benchmark of Essential Virtual Agent Capabilities

1Zhejiang University, 2Ant Group, 3The Hong Kong University of Science and Technology, 4Nanjing University
*Equal Contribution, Corresponding Author
ICML 2025 Spotlight (top 2.6%)

Introduction

As multimodal large language models (MLLMs) advance, MLLM-based virtual agents have demonstrated remarkable performance. However, existing benchmarks face significant limitations, including uncontrollable task complexity, extensive manual annotation with limited scenarios, and a lack of multidimensional evaluation.

In response to these challenges, we introduce OmniBench, a self-generating, graph-based benchmark with an automated pipeline for synthesizing tasks of controllable complexity through subtask composition.

To evaluate the diverse capabilities of virtual agents on these graph-structured tasks, we further present OmniEval, a multidimensional evaluation framework that includes subtask-level evaluation, graph-based metrics, and comprehensive tests across 10 capabilities.

Our synthesized dataset contains 36k graph-structured tasks across 20 scenarios and achieves a 91% human acceptance rate. Training experiments show that our graph-structured data guides agents more efficiently than manually annotated data. We conduct multidimensional evaluations of various open-source and closed-source models, revealing their performance across capabilities and paving the way for future advancements.

OmniBench

Overview

To cost-effectively construct diverse task scenarios with complexity at multiple granularities for comprehensive agent evaluation, we propose a novel self-generating, graph-based benchmark, OmniBench. It dynamically synthesizes tasks with controllable complexity based on a bottom-up pipeline.

OmniBench spans five fundamental types of task complexity to construct 10 evaluation dimensions (see the main figure). Test tasks across these dimensions are categorized based on combinations of complexity types. For example, a long-range planning test task typically exhibits higher dependency complexity and hierarchical complexity.
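As a rough illustration only (not the authors' implementation), a graph-structured task can be pictured as subtask nodes with dependency edges plus a score for each complexity type, and a capability test set can be selected by the combination of complexity types a task exhibits. All class names, field names, and the threshold below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Subtask:
    """A node in the task graph: one atomic instruction plus its prerequisites."""
    instruction: str
    depends_on: list = field(default_factory=list)  # ids of prerequisite subtasks

@dataclass
class TaskGraph:
    """A composed task: subtasks keyed by id, plus a score for each complexity type."""
    subtasks: dict
    complexity: dict  # e.g. {"dependency": 0.8, "hierarchy": 0.9, ...}

def is_long_range_planning_case(task: TaskGraph, threshold: float = 0.7) -> bool:
    """Toy categorization rule mirroring the example above: a long-range planning
    test task is one with high dependency *and* hierarchical complexity."""
    return (task.complexity.get("dependency", 0.0) >= threshold
            and task.complexity.get("hierarchy", 0.0) >= threshold)
```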

OmniBench consists of 36k high-quality graph-structured tasks across 20 distinct scenarios (e.g., image editing, video editing) derived from its self-generating framework, a task scale 40x larger than that of most environment-based benchmarks, as shown in the comparison table.

Overview of OmniBench, a systematic benchmark with five-dimensional task complexity and bottom-up automatic task synthesis for generating structured task graphs. It evaluates ten virtual agent capabilities using high-quality graph-based data, ensuring scalable and realistic task assessments.

Comparison

Comparison of virtual agent benchmarks across environment, task, and evaluation dimensions. Unlike previous benchmarks, OmniBench features automatic task composition, five-dimensional task complexity, and a 10-capability evaluation framework.

Statistics

The left shows the distribution of the 49 apps and their categories in OmniBench. The right shows the distribution of steps required to complete subtasks and full tasks.

Pipeline

We designed a bottom-up automated pipeline to synthesize tasks with controllable complexity. This pipeline consists of four processes (sketched below):
(1) Subtask Discovery: First, we synthesize a series of simple subtask instructions from the explorable environment.
(2) Subtask Synthesis: Then, we iteratively synthesize subtask trajectories and their evaluation functions.
(3) Task Composition: Next, subtasks are composed bottom-up into complete tasks.
(4) Task Validation: Finally, we validate the semantics of the composed tasks.
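The glue logic of such a pipeline can be pictured as four chained stages. The sketch below is purely illustrative: the stage functions are passed in as placeholders, and none of the names correspond to OmniBench's actual code.

```python
from typing import Callable

def synthesize_tasks(
    environment,
    discover_subtasks: Callable,    # (1) environment -> simple subtask instructions
    synthesize_subtask: Callable,   # (2) (environment, instruction) -> subtask trajectory + eval fn
    compose_tasks: Callable,        # (3) subtasks -> candidate graph-structured tasks
    validate_semantics: Callable,   # (4) task -> bool, semantic check of the composed task
):
    """Hypothetical glue code chaining the four pipeline stages described above."""
    instructions = discover_subtasks(environment)                          # Subtask Discovery
    subtasks = [synthesize_subtask(environment, i) for i in instructions]  # Subtask Synthesis
    candidates = compose_tasks(subtasks)                                   # Task Composition
    return [t for t in candidates if validate_semantics(t)]                # Task Validation
```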

Bottom-up task synthesis pipeline of OmniBench.

Cases in OmniBench

OmniEval

Overview

We propose a graph-based multidimensional evaluation framework, OmniEval. In contrast to previous coarse-grained evaluation methods, we introduce a graph-based evaluator that applies the subtask-level evaluation functions in OmniBench. Specifically, we design two novel fine-grained metrics to evaluate agents' performance on graph-structured tasks and their alignment with human logic. Based on OmniBench, we comprehensively evaluate 12 virtual agents, including both open-source and proprietary models, across all 10 capability dimensions as shown in the main figure, fully revealing their capability boundaries and providing concrete directions for future improvement.
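To make subtask-level, graph-based evaluation concrete, here is a small hypothetical sketch (not OmniEval's actual metrics, and not necessarily the CR metric reported below): each subtask node carries its own evaluation function, and a completion-ratio-style score counts a subtask only if its checker passes and all of its dependencies are also completed.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class SubtaskNode:
    """One node of a graph-structured task: a subtask-level checker plus prerequisite ids."""
    check: Callable[[dict], bool]               # evaluation function over the final environment state
    depends_on: List[str] = field(default_factory=list)

def completion_ratio(task: Dict[str, SubtaskNode], final_state: dict) -> float:
    """Fraction of subtasks completed consistently with the dependency graph:
    a subtask counts only if all of its prerequisites also count."""
    memo: Dict[str, bool] = {}

    def done(node_id: str) -> bool:
        if node_id not in memo:
            node = task[node_id]
            memo[node_id] = all(done(d) for d in node.depends_on) and node.check(final_state)
        return memo[node_id]

    return sum(done(nid) for nid in task) / max(len(task), 1)
```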

Comparison of mainstream virtual agent evaluation strategies with the evaluation strategy we propose.

Main Evaluation

Performance of models on OmniBench. For each capability, we use the CR metric on test tasks for quantification. Abbreviations adopted: PP for Parallel Planning; LRP for Long Range Planning; CDDK for Cross-Domain Decision-Making; SDK for Sequential Decision-Making; SI for Subtask Identification; DI for Dependency Identification; LSR for Long Sequence Reasoning; LIF for Long Instruction Following; DSK for Domain-Specific Knowledge; CDK for Cross-Domain Knowledge. An asterisk (*) indicates that the agent uses GPT-4o as the planner.

Failure Analysis

The top illustrates the distribution of the five main failure causes; the bottom presents examples of each.