Supported Subgraph Chains

See the supported chains and networks for Alchemy Subgraphs.

Alchemy Subgraphs supports the following network values for subgraph data sources.

Enterprise customers on dedicated infrastructure can enable any of the networks below by reaching out to our team.

General Availability

  • mainnet (Ethereum Mainnet)
  • goerli (Ethereum Goerli)
  • sepolia (Ethereum Sepolia)
  • matic (Polygon)
  • polygon-amoy (Polygon Amoy)
  • mumbai (Polygon Mumbai)
  • polygon-zkevm (Polygon zkEVM) *
  • arbitrum-one (Arbitrum)
  • arbitrum-goerli (Arbitrum Goerli)
  • arbitrum-sepolia (Arbitrum Sepolia)
  • optimism (Optimism)
  • optimism-goerli (Optimism Goerli)
  • optimism-sepolia (Optimism Sepolia)
  • base (Base)
  • base-testnet (Base Testnet)
  • base-sepolia (Base Sepolia)

* Note that Polygon zkEVM is still in its Mainnet Beta, so subgraphs on it may experience occasional indexing lag behind the chain head. Please reach out to our team with any concerns!

Enterprise Only

📘 Please contact our team if you'd like to enable one of these chains.

  • bsc (Binance Smart Chain)
  • chapel (BSC Chapel)
  • ronin (Ronin)
  • saigon (Saigon / Ronin Testnet)
  • xdai (Gnosis Chain / xDai)
  • arbitrum-nova (Arbitrum Nova)
  • avalanche (Avalanche)
  • fuji (Avalanche Fuji)
  • fantom (Fantom)
  • evmos (Evmos)
  • evmos-testnet (Evmos Testnet)
  • andromeda (Metis Andromeda)
  • moonriver (Moonriver)
  • moonbeam (Moonbeam)
  • mbase (Moonbase)
  • holesky (Ethereum Holesky)

Custom JSON RPC Endpoint Requirements

Enterprise plans can bring their own JSON RPC endpoints to index subgraphs on networks Alchemy doesn't natively support. For indexing to perform well (low block lag behind the chain head, fast initial indexing, etc.), the endpoint must meet the requirements below.

Methods

Subgraph indexing needs access to the following JSON RPC methods:

  • eth_getBlockByNumber
  • eth_getBlockByHash
  • eth_getTransactionReceipt (batch requests)
  • eth_getLogs
    • Must support a fromBlock and toBlock block range of at least 2000.
  • eth_chainId
  • web3_clientVersion
  • net_version
  • eth_call
  • trace_filter (optional; needed only to support subgraphs that have call handlers)
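
As a quick sanity check before pointing subgraph indexing at a custom endpoint, a small script along the lines below can probe the basic methods, including eth_getTransactionReceipt sent as a JSON-RPC batch and eth_getLogs over a 2000-block range. This is a minimal sketch assuming a Node 18+ runtime with global fetch; the endpoint URL is a placeholder, and eth_call, trace_filter, latency, and consistency are easier to verify with the sketches in the following sections.

```typescript
// probe-methods.ts — minimal sketch; RPC_URL is a placeholder for your endpoint.
const RPC_URL = "https://your-endpoint.example.com";

async function rpc(method: string, params: unknown[] = []): Promise<any> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const body = await res.json();
  if (body.error) throw new Error(`${method}: ${body.error.message}`);
  return body.result;
}

async function main() {
  // Identity / versioning methods.
  console.log("eth_chainId:", await rpc("eth_chainId"));
  console.log("web3_clientVersion:", await rpc("web3_clientVersion"));
  console.log("net_version:", await rpc("net_version"));

  // Block access by number and by hash.
  const latest = await rpc("eth_getBlockByNumber", ["latest", false]);
  await rpc("eth_getBlockByHash", [latest.hash, false]);

  // eth_getTransactionReceipt sent as a JSON-RPC batch (an array of requests in one HTTP call).
  const txHashes: string[] = latest.transactions.slice(0, 10);
  if (txHashes.length > 0) {
    const res = await fetch(RPC_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(
        txHashes.map((hash, i) => ({
          jsonrpc: "2.0",
          id: i,
          method: "eth_getTransactionReceipt",
          params: [hash],
        })),
      ),
    });
    const receipts = await res.json();
    if (!Array.isArray(receipts)) throw new Error("Endpoint did not return a batch response");
  }

  // eth_getLogs over a 2000-block range ending at the head.
  const head = parseInt(latest.number, 16);
  const fromBlock = "0x" + Math.max(0, head - 1999).toString(16);
  await rpc("eth_getLogs", [{ fromBlock, toBlock: latest.number }]);

  console.log("All probed methods responded");
}

main().catch((err) => console.error(err));
```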

Archive data

Subgraph indexing may make eth_calls at blocks as early as the genesis block, so the endpoint must serve archive state.
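
One way to spot-check archive access is to run an eth_call pinned to a very early block; non-archive nodes typically reject such calls with a missing-state error. The sketch below uses a placeholder contract address, calldata, and block heights — substitute a contract your subgraph indexes and blocks after it was deployed.

```typescript
// archive-check.ts — minimal sketch; the contract address, calldata, and block heights
// below are placeholders. Use a contract your subgraph indexes and blocks after deployment.
const RPC_URL = "https://your-endpoint.example.com";
const CONTRACT = "0x0000000000000000000000000000000000000000"; // placeholder
const CALLDATA = "0x18160ddd"; // e.g. an ERC-20 totalSupply() selector, if applicable

async function ethCallAt(blockNumber: number): Promise<void> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_call",
      params: [{ to: CONTRACT, data: CALLDATA }, "0x" + blockNumber.toString(16)],
    }),
  });
  const body = await res.json();
  if (body.error) {
    // Non-archive nodes commonly report missing trie/state data for old blocks here.
    throw new Error(`eth_call at block ${blockNumber} failed: ${body.error.message}`);
  }
  console.log(`eth_call at block ${blockNumber} returned`, body.result);
}

// Probe a few historical heights; adjust these to the contract's deployment era.
Promise.all([1, 1_000_000, 10_000_000].map(ethCallAt)).catch((err) => console.error(err));
```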

Throughput / rate limits

The throughput needed on the endpoint depends on the block production time, number of transactions in each block, and initial indexing activity. We can work with you to test if an endpoint has sufficient throughput and recommend starting with an endpoint that supports at least 500 requests / second.

At a minimum, subgraph indexing continuously makes JSON RPC calls to get each block that’s produced and all the transactions in the block. As subgraphs are indexing, they will also consume additional throughput making calls to get contract storage and events.
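
For a rough sense of that baseline, the back-of-envelope sketch below derives the head-tracking request rate from the block time and per-block transaction count; both figures are hypothetical, not measurements of any particular chain.

```typescript
// throughput-estimate.ts — illustrative back-of-envelope only; the block time and
// transaction count below are hypothetical, not measurements of a specific chain.
const blockTimeSeconds = 2;   // hypothetical block production time
const txsPerBlock = 150;      // hypothetical p99 transactions per block

// Per block at the head: one eth_getBlockByNumber plus one receipt per transaction.
const callsPerBlock = 1 + txsPerBlock;
const baselineRps = callsPerBlock / blockTimeSeconds;

console.log(`Head-tracking baseline: ~${Math.ceil(baselineRps)} requests/second`);
// Initial (historical) indexing plus handler eth_call / eth_getLogs traffic comes on
// top of this baseline, which is why an endpoint that supports at least ~500 requests
// per second is the recommended starting point.
```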

Response times

If JSON RPC response times are too high, subgraphs may see slow initial indexing or lag behind the chain tip.

At a minimum, to keep up with the chain, the endpoint must be able to return the eth_getBlockBy* response and the eth_getTransactionReceipt responses for each new block and its transactions in less than the block production time.

We can work with you to test this, but the rule of thumb is:

Br: eth_getBlockBy* response time
Tr: eth_getTransactionReceipt response time
N: p99 number of transactions in a block
B: block production time

Br + Tr * N < B
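
To sanity-check this inequality against a live endpoint, something like the sketch below can sample a recent block and time the calls. It is a single-sample approximation: it uses the latest block's transaction count rather than a true p99 and times one receipt fetch sequentially (real indexing batches receipts), and the endpoint URL and block time are placeholders.

```typescript
// latency-check.ts — rough single-sample sketch of the rule of thumb above.
// RPC_URL and BLOCK_TIME_MS are placeholders for your endpoint and chain.
const RPC_URL = "https://your-endpoint.example.com";
const BLOCK_TIME_MS = 12_000; // B: block production time of the target chain

async function timedRpc(method: string, params: unknown[]): Promise<{ result: any; ms: number }> {
  const start = Date.now();
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const body = await res.json();
  if (body.error) throw new Error(`${method}: ${body.error.message}`);
  return { result: body.result, ms: Date.now() - start };
}

async function main() {
  // Br: time a block fetch (with full transaction objects so we can count them).
  const block = await timedRpc("eth_getBlockByNumber", ["latest", true]);
  const txs: Array<{ hash: string }> = block.result.transactions;
  const n = txs.length; // stand-in for the p99 transaction count in the formula

  // Tr: time a single receipt fetch (the first transaction, if the block has any).
  let tr = 0;
  if (n > 0) {
    const receipt = await timedRpc("eth_getTransactionReceipt", [txs[0].hash]);
    tr = receipt.ms;
  }

  const lhs = block.ms + tr * n;
  console.log(`Br=${block.ms}ms Tr=${tr}ms N=${n} -> Br + Tr*N = ${lhs}ms (B=${BLOCK_TIME_MS}ms)`);
  console.log(lhs < BLOCK_TIME_MS ? "Within budget" : "Too slow to keep up with the head");
}

main().catch((err) => console.error(err));
```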

Data consistency

Subgraph indexing requires data consistency across JSON RPC calls. Inconsistencies are typically caused by naive load balancing across nodes, such as a round-robin strategy over nodes at different heights. If a node provider is hosting the endpoint, they've likely solved this and serve consistent data.

  • After calling eth_getBlockBy*, subsequent eth_getTransactionReceipt calls for each transaction in the block must succeed.
  • Block height must be monotonically increasing across requests.
  • After getting block and transaction data, subsequent eth_calls for that block must succeed.

We can work with you to test the endpoint to ensure it meets these requirements.
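
A rough self-test of these three properties might look like the sketch below; the endpoint URL is a placeholder, and the eth_call target should be replaced with a contract your subgraph actually indexes.

```typescript
// consistency-check.ts — minimal sketch of the three checks above; RPC_URL is a placeholder.
const RPC_URL = "https://your-endpoint.example.com";

async function rpc(method: string, params: unknown[] = []): Promise<any> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const body = await res.json();
  if (body.error) throw new Error(`${method}: ${body.error.message}`);
  return body.result;
}

async function main() {
  // 1. Fetch a recent block, then require a receipt for every transaction in it.
  const block = await rpc("eth_getBlockByNumber", ["latest", false]);
  for (const hash of block.transactions as string[]) {
    const receipt = await rpc("eth_getTransactionReceipt", [hash]);
    if (receipt === null) throw new Error(`Missing receipt for ${hash} — inconsistent nodes?`);
  }

  // 2. Block height must never move backwards across repeated requests.
  let last = 0;
  for (let i = 0; i < 10; i++) {
    const height = parseInt((await rpc("eth_getBlockByNumber", ["latest", false])).number, 16);
    if (height < last) throw new Error(`Height went backwards: ${last} -> ${height}`);
    last = height;
  }

  // 3. An eth_call pinned to the block fetched in step 1 must succeed. The address and
  //    calldata are placeholders — use a contract your subgraph actually indexes.
  await rpc("eth_call", [
    { to: "0x0000000000000000000000000000000000000000", data: "0x" },
    block.number,
  ]);

  console.log("Consistency checks passed");
}

main().catch((err) => console.error(err));
```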
