feat(anvil): support multiple fork URLs with load balancing #14265

@qa-august-l

Description

Summary

When using anvil --fork-url with public or rate-limited RPC providers, heavy or complex workloads (e.g., automation scripts querying many distinct addresses/storage slots) frequently hit rate limits (HTTP 429), causing request timeouts and breaking downstream workflows.

Proposal: Allow --fork-url to accept multiple RPC endpoints and distribute requests across them using a load-balancing strategy (round-robin, least-latency, or weighted), with automatic failover when one endpoint returns 429 or times out.

Motivation

  • Existing mitigations (--retries, --retry-backoff, --compute-units-per-second) only slow down requests — they don't increase total throughput.
  • BlockchainDb disk cache helps with repeated queries, but cold starts or workloads touching many unique addresses/slots still generate a high volume of upstream RPC calls.
  • Users often have access to multiple RPC providers (e.g., Alchemy + QuickNode + public endpoints). Distributing load across them would effectively multiply the available rate limit budget.

Proposed UX

# Multiple --fork-url flags
anvil --fork-url https://rpc1.example.com --fork-url https://rpc2.example.com --fork-url https://rpc3.example.com

# Or comma-separated
anvil --fork-url "https://rpc1.example.com,https://rpc2.example.com"

foundry.toml support

[rpc_endpoints]
mainnet = { endpoints = ["https://rpc1.example.com", "https://rpc2.example.com"], retries = 5, retry_backoff = 1000 }
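The UX above mixes two input shapes (repeated flags and comma-separated values). A minimal sketch of how both could be normalized into one flat URL list — the function name `parse_fork_urls` is hypothetical, not an existing Foundry API:

```rust
/// Hypothetical helper: flatten repeated `--fork-url` values, each of which
/// may itself be a comma-separated list, into a single list of URLs.
fn parse_fork_urls(raw: &[&str]) -> Vec<String> {
    raw.iter()
        .flat_map(|v| v.split(','))   // split comma-separated entries
        .map(str::trim)               // tolerate "url1, url2" spacing
        .filter(|s| !s.is_empty())    // drop empty segments
        .map(String::from)
        .collect()
}
```

With this shape, `anvil --fork-url a,b --fork-url c` and `anvil --fork-url a --fork-url b --fork-url c` both yield the same three-element list, so the rest of the stack only ever sees a `Vec<String>`.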

Proposed Behavior

  1. Load balancing: Round-robin by default across healthy endpoints.
  2. Failover: If an endpoint returns 429 or times out, temporarily mark it unhealthy and route to the next available endpoint. Re-check after a cooldown period.
  3. Consistency: All endpoints must be queried at the same block number (already enforced by fork block pinning).
  4. Per-endpoint config: Each URL can optionally have its own compute_units_per_second / retry settings.
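Behaviors 1 and 2 above can be sketched together as an endpoint pool: round-robin selection that skips endpoints in cooldown, and a re-check once the cooldown elapses. All names here (`EndpointPool`, `pick`, `mark_unhealthy`) are illustrative, not proposed API:

```rust
use std::time::{Duration, Instant};

/// Sketch of the proposed behavior: round-robin over healthy endpoints,
/// with a cooldown before an unhealthy endpoint is tried again.
struct EndpointPool {
    urls: Vec<String>,
    // `None` = healthy; `Some(t)` = considered unhealthy until instant `t`.
    unhealthy_until: Vec<Option<Instant>>,
    next: usize,
    cooldown: Duration,
}

impl EndpointPool {
    fn new(urls: Vec<String>, cooldown: Duration) -> Self {
        let n = urls.len();
        Self { urls, unhealthy_until: vec![None; n], next: 0, cooldown }
    }

    /// Round-robin pick, skipping endpoints still in cooldown.
    /// Returns `None` if every endpoint is currently marked unhealthy.
    fn pick(&mut self) -> Option<usize> {
        let now = Instant::now();
        for _ in 0..self.urls.len() {
            let i = self.next;
            self.next = (self.next + 1) % self.urls.len();
            match self.unhealthy_until[i] {
                Some(t) if t > now => continue, // still cooling down, skip
                _ => {
                    // Cooldown elapsed (or never set): re-admit the endpoint.
                    self.unhealthy_until[i] = None;
                    return Some(i);
                }
            }
        }
        None
    }

    /// Called when endpoint `i` returns 429 or times out.
    fn mark_unhealthy(&mut self, i: usize) {
        self.unhealthy_until[i] = Some(Instant::now() + self.cooldown);
    }
}
```

The `None` case from `pick` is where the existing retry/backoff machinery would still apply: if every endpoint is rate-limited at once, the caller falls back to waiting rather than failing hard.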

Implementation Notes

The change is mostly localized to the provider layer:

  • CLI parsing (crates/anvil/src/cmd.rs): Accept multiple --fork-url values.
  • Provider layer (crates/common/src/provider/): Introduce a LoadBalancedProvider that wraps multiple RetryProvider instances and distributes requests.
  • Fork config (crates/anvil/src/config.rs): Pass the provider list through NodeConfig and ClientForkConfig.
  • Transparent to upper layers: ClientFork already uses Arc<RetryProvider> — replacing it with Arc<LoadBalancedProvider> (implementing the same provider trait) requires minimal changes upstream.
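The "same provider trait" point is the crux of keeping upper layers untouched. A toy sketch of the wrapping idea — `JsonRpcClient` here is a stand-in trait for illustration, not alloy's actual `Provider` interface, and the real implementation would be async:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Toy stand-in for the real provider trait; the actual interface is much
/// richer, but the wrapping pattern is the same.
trait JsonRpcClient {
    fn request(&self, method: &str) -> Result<String, String>;
}

/// Sketch of `LoadBalancedProvider`: wraps several inner providers and
/// implements the same trait, so callers like `ClientFork` need no changes.
struct LoadBalancedProvider<P> {
    inner: Vec<P>,
    next: AtomicUsize,
}

impl<P: JsonRpcClient> LoadBalancedProvider<P> {
    fn new(inner: Vec<P>) -> Self {
        Self { inner, next: AtomicUsize::new(0) }
    }
}

impl<P: JsonRpcClient> JsonRpcClient for LoadBalancedProvider<P> {
    fn request(&self, method: &str) -> Result<String, String> {
        // Round-robin start point; on error (429, timeout), fail over
        // to the next provider instead of retrying the same one.
        let start = self.next.fetch_add(1, Ordering::Relaxed);
        let n = self.inner.len();
        let mut last_err = String::from("no providers configured");
        for k in 0..n {
            match self.inner[(start + k) % n].request(method) {
                Ok(v) => return Ok(v),
                Err(e) => last_err = e,
            }
        }
        Err(last_err)
    }
}

/// Minimal mock provider for demonstration.
struct Fixed(Result<String, String>);
impl JsonRpcClient for Fixed {
    fn request(&self, _m: &str) -> Result<String, String> { self.0.clone() }
}
```

Because `LoadBalancedProvider<P>` itself implements the trait, it can slot in behind the same `Arc` that currently holds a single `RetryProvider`, which is what keeps the blast radius of the change small.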

Alternatives Considered

  • External load balancer / proxy (e.g., nginx, eRPC): Works but adds operational complexity. A native solution is more ergonomic.
  • Higher compute_units_per_second: Only delays the problem; doesn't increase actual throughput.
  • Paid unlimited RPC plans: Expensive; multi-provider load balancing is more cost-effective.
