## Summary

When using `anvil --fork-url` with public or rate-limited RPC providers, heavy or complex workloads (e.g., automation scripts querying many distinct addresses/storage slots) frequently hit rate limits (HTTP 429), causing request timeouts and breaking downstream workflows.

**Proposal:** Allow `--fork-url` to accept multiple RPC endpoints and distribute requests across them using a load-balancing strategy (round-robin, least-latency, or weighted), with automatic failover when one endpoint returns 429 or times out.
## Motivation

- Existing mitigations (`--retries`, `--retry-backoff`, `--compute-units-per-second`) only slow down requests; they don't increase total throughput.
- The `BlockchainDb` disk cache helps with repeated queries, but cold starts or workloads touching many unique addresses/slots still generate a high volume of upstream RPC calls.
- Users often have access to multiple RPC providers (e.g., Alchemy + QuickNode + public endpoints). Distributing load across them would effectively multiply the available rate-limit budget.
## Proposed UX

```bash
# Multiple --fork-url flags
anvil --fork-url https://rpc1.example.com --fork-url https://rpc2.example.com --fork-url https://rpc3.example.com

# Or comma-separated
anvil --fork-url "https://rpc1.example.com,https://rpc2.example.com"
```

`foundry.toml` support:

```toml
[rpc_endpoints]
mainnet = { endpoints = ["https://rpc1.example.com", "https://rpc2.example.com"], retries = 5, retry_backoff = 1000 }
```
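Since both forms would need to normalize to a single endpoint list, the CLI layer could flatten repeated flags and comma-separated values in one pass. A minimal sketch; the function name `collect_fork_urls` is illustrative, not an existing Foundry API:

```rust
/// Flatten repeated `--fork-url` values, each possibly comma-separated,
/// into a single order-preserving, deduplicated endpoint list.
fn collect_fork_urls(values: &[&str]) -> Vec<String> {
    let mut out: Vec<String> = Vec::new();
    for value in values {
        for url in value.split(',') {
            let url = url.trim();
            // Skip empty fragments (e.g., trailing commas) and duplicates.
            if !url.is_empty() && !out.iter().any(|u| u.as_str() == url) {
                out.push(url.to_string());
            }
        }
    }
    out
}
```

Deduplicating keeps a URL passed both ways from being weighted twice by the round-robin scheduler.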
## Proposed Behavior

- Load balancing: Round-robin by default across healthy endpoints.
- Failover: If an endpoint returns 429 or times out, temporarily mark it unhealthy and route to the next available endpoint. Re-check after a cooldown period.
- Consistency: All endpoints must be queried at the same block number (already enforced by fork block pinning).
- Per-endpoint config: Each URL can optionally have its own `compute_units_per_second` / retry settings.
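The round-robin-with-cooldown behavior above can be sketched with std-only types; `Balancer`, `Endpoint`, and the method names here are illustrative, not proposed Foundry identifiers:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// One upstream RPC endpoint and when it was last marked unhealthy.
struct Endpoint {
    url: String,
    unhealthy_since: Mutex<Option<Instant>>,
}

struct Balancer {
    endpoints: Vec<Endpoint>,
    next: AtomicUsize,
    cooldown: Duration,
}

impl Balancer {
    fn new(urls: Vec<String>, cooldown: Duration) -> Self {
        let endpoints = urls
            .into_iter()
            .map(|url| Endpoint { url, unhealthy_since: Mutex::new(None) })
            .collect();
        Self { endpoints, next: AtomicUsize::new(0), cooldown }
    }

    /// Round-robin over endpoints, skipping any still inside the cooldown window.
    fn pick(&self) -> Option<&Endpoint> {
        for _ in 0..self.endpoints.len() {
            let i = self.next.fetch_add(1, Ordering::Relaxed) % self.endpoints.len();
            let ep = &self.endpoints[i];
            let mut down = ep.unhealthy_since.lock().unwrap();
            match *down {
                // Cooldown expired: re-admit the endpoint.
                Some(t) if t.elapsed() >= self.cooldown => {
                    *down = None;
                    return Some(ep);
                }
                Some(_) => continue, // still cooling down, try the next one
                None => return Some(ep),
            }
        }
        None // every endpoint is currently unhealthy
    }

    /// Called when a request to `ep` returned HTTP 429 or timed out.
    fn mark_unhealthy(&self, ep: &Endpoint) {
        *ep.unhealthy_since.lock().unwrap() = Some(Instant::now());
    }
}
```

A least-latency or weighted strategy would only change `pick`; the health bookkeeping stays the same.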
## Implementation Notes

The change is mostly localized to the provider layer:

- CLI parsing (`crates/anvil/src/cmd.rs`): Accept multiple `--fork-url` values.
- Provider layer (`crates/common/src/provider/`): Introduce a `LoadBalancedProvider` that wraps multiple `RetryProvider` instances and distributes requests.
- Fork config (`crates/anvil/src/config.rs`): Pass the provider list through `NodeConfig` → `ClientForkConfig`.
- Transparent to upper layers: `ClientFork` already uses `Arc<RetryProvider>`; replacing it with `Arc<LoadBalancedProvider>` (implementing the same provider trait) requires minimal changes upstream.
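The wrapper shape can be sketched as follows. The `JsonRpc` trait and `RpcError` enum here are simplified stand-ins for illustration; the real `LoadBalancedProvider` would implement the same alloy provider trait that `RetryProvider` does:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

/// Simplified stand-in for the request interface of `RetryProvider`.
trait JsonRpc {
    fn request(&self, method: &str) -> Result<String, RpcError>;
}

#[derive(Debug)]
enum RpcError {
    RateLimited, // HTTP 429
    Timeout,
    Other(String),
}

/// Wraps several providers and fails over on 429/timeout,
/// so callers see a single provider.
struct LoadBalancedProvider<P: JsonRpc> {
    inner: Vec<Arc<P>>,
    next: AtomicUsize,
}

impl<P: JsonRpc> LoadBalancedProvider<P> {
    fn new(inner: Vec<Arc<P>>) -> Self {
        Self { inner, next: AtomicUsize::new(0) }
    }

    fn request(&self, method: &str) -> Result<String, RpcError> {
        let n = self.inner.len();
        let start = self.next.fetch_add(1, Ordering::Relaxed);
        let mut last_err = RpcError::Other("no endpoints configured".to_string());
        for k in 0..n {
            // Round-robin starting point, then walk the remaining endpoints.
            match self.inner[(start + k) % n].request(method) {
                // Fail over only on errors another endpoint could fix.
                Err(e) if matches!(e, RpcError::RateLimited | RpcError::Timeout) => last_err = e,
                other => return other,
            }
        }
        Err(last_err)
    }
}
```

Because the wrapper exposes the same request surface as a single provider, `ClientFork` and the `BlockchainDb` cache above it would not need to know how many endpoints sit underneath.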
## Alternatives Considered

- External load balancer / proxy (e.g., nginx, eRPC): Works but adds operational complexity. A native solution is more ergonomic.
- Higher `compute_units_per_second`: Only delays the problem; doesn't increase actual throughput.
- Paid unlimited RPC plans: Expensive; multi-provider load balancing is more cost-effective.