Remote Browser Benchmark

Nov 5, 2025 / San Francisco / JunHyoung Ryu

TL;DR

  • Our minimal “hello, browser” flow (create → connect → visit google.com → release) averages 1.11s on Steel with a p95 of 1.37s (AWS EC2, us-east-1).

  • Versus other providers we tested, Steel’s end-to-end time is 2.0×–6.3× faster on average, and 2.2×–7.3× faster at p95.

  • The control-plane tax (create + release) dominates total time on most platforms. On Steel it’s ~0.23s (≈21% of total); elsewhere it ranges from 0.51s to 5.63s (21% to 81% of total).

  • For AI agents (e.g. browser-use), the create/connect/release flow happens a lot, so shaving seconds here compounds across long runs and many workers.

Why this benchmark matters

Agents don’t just “do work” on pages; they start and stop browsers over and over: new identities, clean slates, anti-bot resets, concurrency fan-out.

If you save ~1s per loop and your agent runs 20 loops, you’ve just won ~20s on a single task. Multiply that across hundreds of workers and thousands of tasks/day and you either delight users or grow a queue.

So we asked a simple question: How fast is “Hello, browser?”, and how much of the time is control plane (session creation/release) vs data plane (connect + navigate)?

What we measured

Provider-agnostic, minimal lifecycle with each vendor’s SDK:

  • Create a session (cold start / scheduling / identity)

  • Connect the driver (CDP handshake)

  • Navigate to google.com (waits for domcontentloaded)

  • Release the session (cleanup, quotas, accounting)

Environment: AWS EC2 us-east-1
Runs: ≈ 1k per provider

Steel runner (TypeScript, abbreviated):

const session = await client.sessions.create();
const browser = await chromium.connectOverCDP(`${session.websocketUrl}&apiKey=${STEEL_API_KEY}`);
const page = browser.contexts()[0].pages()[0]; // the session's default page
await page.goto(url, { waitUntil: "domcontentloaded" });
await client.sessions.release(session.id);

We used Playwright over CDP for all vendors and mirrored the same four steps with each provider’s SDK. We did not supply any provider‑specific configuration; sessions ran on each vendor’s default settings.
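
To attribute time to each stage, every step was timed independently. Here is a minimal sketch of that idea in TypeScript; the timeStage helper and the stages record are illustrative, not the exact harness we ran:

async function timeStage<T>(stages: Record<string, number>, name: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await fn();
  } finally {
    stages[name] = performance.now() - start; // stage duration in ms
  }
}

const stages: Record<string, number> = {};
const session = await timeStage(stages, "create", () => client.sessions.create());
const browser = await timeStage(stages, "connect", () =>
  chromium.connectOverCDP(`${session.websocketUrl}&apiKey=${STEEL_API_KEY}`)
);
const page = browser.contexts()[0].pages()[0];
await timeStage(stages, "goto", () => page.goto(url, { waitUntil: "domcontentloaded" }));
await timeStage(stages, "release", () => client.sessions.release(session.id));
// stages now holds create/connect/goto/release durations for one run.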

Results at a glance

Total time (create → connect → goto → release)

| Provider | Avg (ms) | Median (ms) | p95 (ms) | p99 (ms) |
| --- | --- | --- | --- | --- |
| STEEL | 1,107.9 | 1,040.0 | 1,371.1 | 2,223.1 |
| KERNEL | 2,246.8 | 2,036.0 | 3,780.1 | 4,719.3 |
| BROWSERBASE | 2,363.2 | 2,164.0 | 3,054.3 | 7,276.4 |
| HYPERBROWSER | 3,691.7 | 3,308.0 | 6,154.5 | 7,472.0 |
| ANCHORBROWSER | 6,931.1 | 6,576.5 | 9,939.8 | 11,620.1 |

Steel speedup (avg / p95):

  • vs Kernel: 2.03× / 2.76×

  • vs Browserbase: 2.13× / 2.23×

  • vs Hyperbrowser: 3.33× / 4.49×

  • vs AnchorBrowser: 6.26× / 7.25×

Time saved for your agents (per 1,000 sessions):

  • ~19 min vs Kernel, ~21 min vs Browserbase, ~43 min vs Hyperbrowser, ~97 min vs AnchorBrowser
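
These per‑1,000‑session savings are simple arithmetic on the average totals above; a back‑of‑the‑envelope check in TypeScript (provider keys and numbers copied from the table):

const avgTotalMs = { steel: 1107.9, kernel: 2246.8, browserbase: 2363.2, hyperbrowser: 3691.7, anchorbrowser: 6931.1 };
// (per-session difference in ms) × 1,000 sessions, converted to minutes
const savedMinutes = (otherMs: number) => ((otherMs - avgTotalMs.steel) * 1000) / 1000 / 60;
savedMinutes(avgTotalMs.kernel);        // ≈ 19 min
savedMinutes(avgTotalMs.anchorbrowser); // ≈ 97 min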

Mind the tails. Some platforms show p99 spikes in the multi‑second range; these are the outliers users actually feel during bursts and autoscaling.

Where the time goes (stage breakdown)

All stage times are averages in ms.

| Provider | Create | Connect | Goto | Release | Create+Release | Create+Release % of Total |
| --- | --- | --- | --- | --- | --- | --- |
| STEEL | 160.5 | 289.2 | 588.4 | 69.9 | 230.3 | 20.8% |
| KERNEL | 559.7 | 420.4 | 526.0 | 740.4 | 1,300.2 | 57.9% |
| BROWSERBASE | 337.0 | 1,151.4 | 703.1 | 171.7 | 508.7 | 21.5% |
| HYPERBROWSER | 2,380.8 | 507.7 | 412.5 | 390.7 | 2,771.5 | 75.1% |
| ANCHORBROWSER | 3,031.2 | 219.9 | 1,082.1 | 2,597.8 | 5,629.1 | 81.2% |

  • On Steel, the control plane is ~0.23s (~21% of total), roughly a rounding error for most loops.

  • On other platforms, the control plane is the workload (58–81% on several providers).

  • Data‑plane costs differ too: some platforms spend more in connect or first navigation. But for agents that frequently create/release, the control‑plane gap dominates wall‑clock.




Caveats

  • This is a cold lifecycle for a single navigation. Real agents do auth, multi‑step forms, file uploads, and more.

  • Results reflect AWS EC2 us‑east‑1 and google.com as the first page. Region, instance class, network and page choice influence numbers.

Production guidance for agent builders

  • Amortize the handshake. Batch multiple subtasks per session when safe; reuse sessions for longer flows (see the sketch after this list).

  • Simplify the surface. Mobile Mode yields smaller DOMs and higher click accuracy for vision agents.

  • Watch the tails. Track p95/p99 per stage in your traces — optimize for outliers, not just averages.

  • Prefer headful when flaky. Headful Sessions improve compatibility with sites that resist headless automation.

  • Debug faster. Agent Logs give you action‑by‑action traces aligned to live view and MP4 replay.
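
As a rough illustration of amortizing the handshake (the subtasks array and runSubtask helper are hypothetical), reusing one session across several related subtasks pays the create/connect/release cost once:

const session = await client.sessions.create();
const browser = await chromium.connectOverCDP(`${session.websocketUrl}&apiKey=${STEEL_API_KEY}`);
const page = browser.contexts()[0].pages()[0];
try {
  for (const subtask of subtasks) {            // e.g. a batch of related page visits
    await page.goto(subtask.url, { waitUntil: "domcontentloaded" });
    await runSubtask(page, subtask);           // hypothetical per-subtask agent step
  }
} finally {
  await client.sessions.release(session.id);   // one release for the whole batch
}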

Build with Steel

Humans use Chrome. Agents use Steel.
A headless browser API built for AI products: fast session starts, persistent profiles, stealth & CAPTCHA, live viewers + MP4 replays, and scaling without the DevOps tax.


