
Rate limits

| Tier | Limit | Notes |
|------|-------|-------|
| Free API | 10,000 requests per day | No API key needed |
| Pro API | Unlimited | Requires API key and enterprise subscription |
The daily limit resets on a rolling 24-hour window. There is no per-second or per-minute throttle — only the daily cap. When you exceed the limit, requests return HTTP 429.
Need unlimited access? The Pro API removes all rate limits. Contact support@coinpaprika.com.

Pagination rules

All paginated endpoints follow the same rules:
| Rule | Value |
|------|-------|
| First page | page=1 (1-indexed) |
| Page 0 behavior | Silently treated as page 1 |
| Max items per page | 100 (via limit parameter) |
| Default items per page | Varies by endpoint (typically 10 or 50) |
| Transaction pages | Max 100 pages; use cursor for deeper history |
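Under these rules, fetching a full result set is a simple 1-indexed page walk. A minimal sketch — page and limit are the parameters described above, but the /networks/{network}/pools path and the pools response key are assumptions for illustration:

```python
import requests

BASE = "https://api.dexpaprika.com"

def fetch_all_pools(network, limit=100, max_pages=100):
    """Walk pages 1..max_pages, stopping at the first empty page."""
    results = []
    for page in range(1, max_pages + 1):  # first page is page=1, not 0
        resp = requests.get(
            f"{BASE}/networks/{network}/pools",
            params={"page": page, "limit": limit},  # limit is capped at 100
        )
        resp.raise_for_status()
        items = resp.json().get("pools", [])
        if not items:
            break  # an empty page means there is no more data
        results.extend(items)
    return results
```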

Reduce API calls

Use batch pricing

Instead of making one request per token:
GET /networks/ethereum/tokens/0xc02a.../  → 1 request
GET /networks/ethereum/tokens/0xa0b8.../  → 1 request
GET /networks/ethereum/tokens/0x6b17.../  → 1 request
= 3 requests
Use batch pricing for up to 10 tokens at once:
GET /networks/ethereum/multi/prices?tokens=0xc02a...,0xa0b8...,0x6b17...
= 1 request
That’s a 3x reduction — or up to 10x when you need prices for 10 tokens.
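Chunking a longer token list into groups of 10 keeps every request within the batch cap. A sketch — the multi/prices path comes from the example above; the addresses are placeholders:

```python
def chunk(addresses, size=10):
    """Split a token list into batches of at most 10 (the batch-pricing cap)."""
    return [addresses[i:i + size] for i in range(0, len(addresses), size)]

# 23 tokens -> 3 requests instead of 23
tokens = [f"0x{i:040x}" for i in range(23)]  # placeholder addresses
batches = chunk(tokens)
urls = [
    "https://api.dexpaprika.com/networks/ethereum/multi/prices?tokens=" + ",".join(b)
    for b in batches
]
```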

Use streaming for real-time data

If you need live prices, don’t poll the REST API in a loop. Open one streaming connection instead:
# Polling (bad): 60 requests/minute for 1 token
while true; do curl ...; sleep 1; done

# Streaming (good): 1 connection, unlimited updates
curl -N "https://streaming.dexpaprika.com/stream?method=t_p&chain=ethereum&address=0xc02a..."
Streaming supports up to 2,000 tokens on a single connection.

Cache static data

Some data changes rarely and can be cached:
| Data | Cache for |
|------|-----------|
| Network list (/networks) | 24 hours |
| DEX list (/networks/{n}/dexes) | 1 hour |
| Token metadata (name, symbol, decimals) | 24 hours |
| Pool token pair info | 24 hours |
| OHLCV historical data (completed candles) | Forever (completed candles don’t change) |
Data that changes frequently and should be fetched fresh:
  • Token prices
  • Pool volumes and transaction counts
  • Recent transactions
  • Current OHLCV candle (incomplete)
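A small TTL cache is enough to implement the table above. A minimal in-memory sketch — nothing DexPaprika-specific; the keys are illustrative:

```python
import time

class TTLCache:
    """Tiny in-memory cache with per-entry expiry (seconds)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if expires is not None and time.time() > expires:
            del self._store[key]  # expired: drop and report a miss
            return None
        return value

    def set(self, key, value, ttl=None):
        """ttl=None means the entry never expires (completed OHLCV candles)."""
        expires = None if ttl is None else time.time() + ttl
        self._store[key] = (value, expires)

cache = TTLCache()
cache.set("/networks", ["ethereum", "solana"], ttl=24 * 3600)  # refresh daily
cache.set("dexes:ethereum", ["uniswap_v3"], ttl=3600)          # refresh hourly
```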

Request only what you need

  • Use limit to control page size — don’t fetch 100 items if you need 5
  • Use order_by and sort to get the most relevant results first
  • Use the filter endpoint for targeted queries instead of fetching all pools and filtering client-side
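For example, a top-5-by-volume query can be shaped entirely in the query string. This builds the request without sending it — limit, order_by, and sort are the parameters named above, while the pools path and the volume_usd field name are assumptions:

```python
import requests

# Prepare (but don't send) the request, to show the query-string shape
req = requests.Request(
    "GET",
    "https://api.dexpaprika.com/networks/ethereum/pools",
    params={"limit": 5, "order_by": "volume_usd", "sort": "desc"},
).prepare()
print(req.url)
```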

Handle errors gracefully

Implement exponential backoff

When requests fail, don’t retry immediately in a tight loop:
import time
import requests

def fetch_with_backoff(url, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url)
        if response.status_code == 200:
            return response.json()
        if response.status_code == 429:
            wait = 2 ** attempt  # 1s, 2s, 4s
            time.sleep(wait)
            continue
        if response.status_code >= 500:
            wait = 2 ** attempt
            time.sleep(wait)
            continue
        # 400, 404, 410 -- don't retry, fix the request
        response.raise_for_status()
    raise Exception("Max retries exceeded")

Don’t retry client errors

  • 400 — fix the request parameters
  • 404 — verify the network ID and addresses
  • 410 — use the replacement endpoint
Only retry on 429 (rate limit) and 5xx (server errors).
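These rules fit in one predicate you can reuse across a codebase (a sketch; it treats every 5xx as retryable, matching the backoff example above):

```python
def should_retry(status: int) -> bool:
    """Retry rate limits (429) and server errors (5xx); never client errors."""
    return status == 429 or 500 <= status < 600
```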

Streaming best practices

Validate before streaming

The streaming API rejects the entire request if any asset is invalid. Always verify tokens exist via REST first:
import requests

# Validate each token before streaming; keep only the ones that exist
valid_tokens = []
for token in tokens:
    resp = requests.get(f"https://api.dexpaprika.com/networks/{token['chain']}/tokens/{token['address']}")
    if resp.status_code == 200:
        valid_tokens.append(token)
tokens = valid_tokens  # Never remove items from a list while iterating over it

Use batched POST for multiple tokens

Instead of opening multiple GET connections:
# Bad: 5 connections for 5 tokens
GET /stream?method=t_p&chain=ethereum&address=0xaaa...
GET /stream?method=t_p&chain=ethereum&address=0xbbb...
GET /stream?method=t_p&chain=ethereum&address=0xccc...
Use one POST connection:
# Good: 1 connection for up to 2,000 tokens
POST /stream
[
  {"chain": "ethereum", "address": "0xaaa...", "method": "t_p"},
  {"chain": "ethereum", "address": "0xbbb...", "method": "t_p"},
  {"chain": "ethereum", "address": "0xccc...", "method": "t_p"}
]

Reconnect with backoff

Streaming connections can drop (network issues, server restarts). Always implement auto-reconnection:
import time

import requests

def stream_with_reconnect(url, payload, max_backoff=60):
    backoff = 1
    while True:
        try:
            resp = requests.post(url, json=payload, stream=True,
                headers={"Accept": "text/event-stream"})
            if resp.status_code != 200:
                raise Exception(f"HTTP {resp.status_code}")
            backoff = 1  # Reset on successful connection
            for line in resp.iter_lines():
                if line and line.startswith(b'data:'):
                    process_event(line)  # user-defined handler for each SSE event
        except Exception as e:
            print(f"Disconnected: {e}. Reconnecting in {backoff}s...")
            time.sleep(backoff)
            backoff = min(backoff * 2, max_backoff)

Parse prices as decimals

The streaming p field is a string, not a number. Use decimal parsing to avoid floating-point precision issues:
from decimal import Decimal
price = Decimal(data['p'])  # Not float(data['p'])

Production checklist

  • Cache network and DEX lists
  • Use batch pricing where possible
  • Use streaming instead of polling for live prices
  • Implement exponential backoff for retries
  • Handle all HTTP status codes (200, 400, 404, 410, 429, 500)
  • Parse streaming prices as decimals, not floats
  • Validate tokens before adding to streaming connections
  • Monitor daily request count against the 10,000 limit
  • Consider Pro API if approaching limits

FAQs

Is there a per-second or per-minute rate limit?
No. Only a daily cap of 10,000 requests. You can burst as fast as you want within that daily budget.

Does a streaming connection count against the rate limit?
Opening a streaming connection counts as one request. The individual SSE events within the stream do not count.

Can I raise the free-tier limit?
The free tier is fixed at 10,000/day. For higher limits, use the Pro API.