Rate Limits on Chainnodes RPCs
Please make sure to stay within the limits.
Growth Plan users can contact their account manager to increase the limits.
Depending on your subscription plan, you get a monthly allowance of requests and an RPS ("requests per second") limitation. You can use those requests on all networks supported on Chainnodes.
Example: On the Developer plan you have 25 Million requests per month. You can use all 25M requests for one network (for example Ethereum). Or you can split the requests between chains (10M requests for Ethereum, 5M on Polygon and 10M on Gnosis).
On Chainnodes, 1 request always equals 1 request, no matter whether you send a simple eth_getBlockByNumber or a heavy archival trace. 🤞
This makes us the perfect choice for your dApp, as there is never any unpredictability in either costs or RPS limits. There are NO compute units or similar opaque concepts at Chainnodes. You send 1 request, you pay for 1 request.
If you are on the Developer plan, your rate limit is 50 requests per second (RPS): you can send 50 requests every second, no matter which methods or on which networks.
In the context of subscriptions, 1 response = 1 request. This is because with subscriptions you don't send a request for every response. It means that if you subscribe to newPendingTransactions and receive 50 responses per second, that uses 50 of your RPS limit and counts 50 per second against your monthly allowance.
👉 Your rate limits apply to requests sent over Websocket or HTTP and are added together.
You can always check our Pricing Page for up-to-date numbers on rate limits for all our plans.
- You are on the Developer Plan with a 50 requests per second allowance.
- You can send requests to one or many networks, e.g. only Mainnet if you are only on Mainnet, or Polygon, Optimism, Arbitrum, and more if you are multi-chain.
- Requests are counted against all networks together, so you get a pool of 50 requests per second across all networks. You can use the networks as you wish.
- If you want higher rate limits, you can upgrade your plan.
Important: Our rate limits are strict. This means we expect you to obey those limits by scheduling your requests properly on your side. Should you go over a rate limit (RPS or monthly), your account will be blocked automatically for a while and then unblocked automatically again. The more (and the more often) you go over your rate limits (especially RPS), the longer your ban will last.
RPS is typically counted as a per-minute average, which means small bursts lasting a few seconds should be fine. We don't offer guarantees for this though: in certain circumstances, our infrastructure might ban you for going over your RPS for just 1 second. So please handle rate limits properly and stop sending requests when you are rate limited.
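The scheduling the paragraph above asks for can be done with a small client-side throttle. Here is a minimal sketch in Python; the class name and structure are illustrative, and the 50-RPS default matches the Developer plan mentioned above (pass your own plan's limit):

```python
import time
from collections import deque

class RpsThrottle:
    """Client-side scheduler that keeps outgoing requests under an RPS limit.

    A sliding 1-second window of send timestamps; illustrative only.
    """

    def __init__(self, max_rps: int = 50) -> None:
        self.max_rps = max_rps
        self.sent: deque[float] = deque()  # timestamps of requests in the last second

    def acquire(self) -> None:
        """Block until one more request fits inside the 1-second window."""
        while True:
            now = time.monotonic()
            # Drop timestamps older than one second.
            while self.sent and now - self.sent[0] >= 1.0:
                self.sent.popleft()
            if len(self.sent) < self.max_rps:
                self.sent.append(now)
                return
            # Window is full: sleep until the oldest request ages out.
            time.sleep(1.0 - (now - self.sent[0]))

throttle = RpsThrottle(max_rps=50)
# Call throttle.acquire() right before every RPC request (HTTP or Websocket).
```

Calling `acquire()` before each request guarantees you never exceed the limit within any one-second window, which also keeps your per-minute average safely in bounds.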
Because the Core Plan is free, we do not guarantee your rate limits. If you are rate limited even though you have not been above your allowance, the reason might be that we are prioritizing paid customers due to extended load, which can happen during airdrops on certain networks, etc.
If you want a guarantee for your RPS and a monthly allowance, upgrade to a paid plan.
When you are rate limited, you will receive a 429 Too Many Requests HTTP status code.
On top of that, you will receive a normal JSON-RPC response that looks like the following:
```json
{
  "message": "per second rate limit exceeded",
  "backoff_seconds": 2
}
```
If you are on Websocket, there is no status code, only the message above.
The backoff_seconds parameter tells you how long not to send requests (in seconds). Please schedule your next request after at least backoff_seconds seconds to prevent even longer bans.
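A small helper can turn a rate-limit response into a wait time. This is a sketch under the assumptions above (a 429 status whose body carries backoff_seconds); the function name and the 1-second fallback are this example's own choices:

```python
import json

def next_delay(status_code: int, body: str, default: float = 1.0) -> float:
    """Return how many seconds to wait before the next request.

    Assumes the 429 body carries the backoff_seconds field described
    above; the fallback value is this sketch's own choice.
    """
    if status_code != 429:
        return 0.0  # not rate limited, proceed normally
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return default
    return float(payload.get("backoff_seconds", default))
```

Sleep for the returned number of seconds before resuming; sending more requests during the backoff window only extends your ban.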
Please do not create multiple accounts for the same project, neither free-plan nor paid-plan accounts. If you need more requests or RPS, consider upgrading to the next plan, or contact us on Telegram or via email to discuss a custom plan if none of the advertised ones suits your needs.
Our system detects irregular activity such as similar requests to similar contracts, same-source requests to different API keys, and more. If we detect that you use multiple accounts for the same project, whether load-balanced or switched out after some time, we will take down all affected accounts without prior notice.
There is a limit on the number of open Websocket connections.
If you are on the Core Plan (free plan), the limit is 1x your RPS, so currently 25.
On all paid plans, your open websocket connections limit is 100x your RPS, so on the Growth Plan it would be 100 x 500 = 50,000 open Websocket connections.
👉 This limit applies to any open websocket connection on your account. Even if it doesn't make any requests. Consider closing unused connections ASAP.
Unused connections will time out after some time. Please simply reconnect once you are ready to send a new request.
The open websocket limit is strict. This means that if you go over, we will start randomly closing some of your open websocket connections until you are below your limit again.
There is no limit on the number of subscriptions on your account. You can have multiple on one open Websocket connection, or distributed to multiple open Websocket connections. The responses you receive will play into your RPS and monthly allowance though, so if you open too many, you might get rate limited on all subscriptions or further requests.
If your account is rate limited (e.g. banned for a while) and you continue sending requests through a Websocket connection, we might close the Websocket connection abnormally (one-sided) and not let you open new ones until your ban is over.
There is a limit on eth_getLogs. We use a block range limit to prevent spamming or wrong usage of the API call. The block range limit is currently 20,000 on all networks. 👉 This means the difference between toBlock and fromBlock needs to be less than or equal to 20,000.
If you need to request logs for a bigger range, split your request into n subrequests and iterate over them, combining the result arrays on your client.
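The splitting described above can be sketched as follows; the function name is illustrative, and the 20,000-block span matches the limit stated above:

```python
def split_block_range(from_block: int, to_block: int,
                      max_span: int = 20_000) -> list[tuple[int, int]]:
    """Split an eth_getLogs range into subranges that each satisfy
    toBlock - fromBlock <= max_span (20,000 per the limit above)."""
    ranges = []
    start = from_block
    while start <= to_block:
        end = min(start + max_span, to_block)
        ranges.append((start, end))
        start = end + 1  # next subrange starts right after this one
    return ranges

# Example: a 50,000-block window becomes three compliant subrequests.
print(split_block_range(17_000_000, 17_050_000))
```

Issue one eth_getLogs call per returned (fromBlock, toBlock) pair and concatenate the result arrays on your client.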
If your block range is huge, you should consider instead deploying a TheGraph subgraph (indexer) on Chainnodes' hosted Graph solution. Please contact us on Telegram or via email for more information.
☝️ Your request size (especially the body) always needs to be reasonably sized.
We don't have explicit limits, but we will drop both HTTP and Websocket requests if your body is too big. We currently only support typical RPC calls, so you shouldn't need to send us a huge request body. We might block you if you repeat spamming huge body sizes.
Responses are sometimes very large. This is especially true if you trace very big blocks (lots of gas used) and include multiple trace types.
It can also happen on some very large eth_getLogs ranges where lots of events happened.
Chainnodes currently limits response sizes to 30MB for HTTP requests, and 100MB for Websocket requests.
This is enough for most cases we have ever seen in the wild. If you actually manage to hit the response size limit, please reach out to us.
👉 Some things you can do:
- Split your eth_getLogs requests even further, preventing huge response sizes.
- Split your block traces into individual transaction traces, or request individual trace types.
If those solutions don't work, let us know and we will look into your case.
Some traces take very long, and especially if you batch them together, the request might be pending for a while.
Our current time limits for responses to be fully received are:
- For HTTP requests, the limit is 30 seconds
- For Websocket requests, the limit is 120 seconds
As with the response size limitations, make sure to split up your eth_getLogs requests even further or send individual traces to stay within these limits.
Chainnodes limits batch calls to 100 requests per batch. Make sure to stay within all the other rate limits mentioned, even within one batch call.
- If you send 100 requests in a batch, it counts as 100 requests against both your monthly allowance AND your RPS limit.
- If you send a batch of 100 requests and your RPS limitation is 50, either this or your next request might be rate limited. 🚫
- If you send a batch with 100 eth_getLogs requests, make sure the total block range requested is still not above the block range limit for eth_getLogs mentioned above.
- In certain circumstances we might allow a 2x block range if you split it into multiple requests in a batch. We do not guarantee this though, so to be safe, stay below the official single-request block range limit.
- Request/Response size limits and time limits are the same for batch calls. Not 100x the original limits.
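The accounting rules above can be sanity-checked client-side before sending a batch. A minimal sketch; the function name is illustrative, and the 100-request cap and 50-RPS figure are the numbers stated above:

```python
def batch_cost(batch: list[dict], max_batch: int = 100, plan_rps: int = 50) -> int:
    """Return how many requests a JSON-RPC batch consumes.

    Each entry counts as one request against both the monthly
    allowance and the RPS limit.
    """
    if len(batch) > max_batch:
        raise ValueError(f"batch of {len(batch)} exceeds the {max_batch}-request limit")
    if len(batch) > plan_rps:
        # A single batch this large already exceeds one second's worth of requests.
        print(f"warning: {len(batch)} requests may trigger the {plan_rps}-RPS limit")
    return len(batch)

batch = [
    {"jsonrpc": "2.0", "id": i, "method": "eth_blockNumber", "params": []}
    for i in range(60)
]
batch_cost(batch)  # consumes 60 requests from allowance and RPS
```

Checking the batch size locally is cheaper than having the whole batch rejected or triggering a temporary ban.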
Try to stay away from batch calls. They are not useful in most real-world applications. The reason we support them is to fully support the JSON-RPC standard, but we don't recommend that customers use them: batch calls can fail if one of your requests has issues, and you need to wait for all requests in the batch to terminate before receiving a response. On top of that, your rate limits are not higher with batches, and might be a little more unpredictable.
Instead, consider using:
- HTTP2 to send requests individually, in which case you don't have the overhead of multiple handshakes.
- Websocket to send requests individually.