
Tutorial overview

Efficiently fetch real-time prices for up to 10 tokens in a single API request using the DexPaprika multi-asset endpoint. This guide explains why batching matters for sustainable API usage and how to structure your requests.

Why Batch Requests? The Math Behind It

Single Request vs. Batching

Let’s say you need to monitor prices for 50 tokens on Ethereum.

Scenario 1: Individual Requests (❌ Inefficient)
50 tokens × 1 request per token = 50 API calls

Scenario 2: Using the Multi-Asset Endpoint (✅ Efficient)
50 tokens ÷ 10 tokens per batch = 5 API calls

Result: You reduce API calls by 90%.
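In general, the number of batched calls is simply the token count divided by 10 (the per-batch limit), rounded up. A quick sketch of that arithmetic in Python:
import math

def batched_calls(token_count: int, batch_size: int = 10) -> int:
    """Number of multi-asset requests needed to cover token_count tokens."""
    return math.ceil(token_count / batch_size)

for n in (20, 35, 50, 100):
    print(f"{n} tokens -> {batched_calls(n)} calls")  # 2, 4, 5, 10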

Real-World Impact

Use Case | Tokens to Monitor | Individual Requests | Batched Requests | Reduction
Portfolio tracker | 20 tokens | 20 calls | 2 calls | 90%
DeFi aggregator | 100 tokens | 100 calls | 10 calls | 90%
Trading bot (30s updates) | 50 tokens | ~144,000 calls/day | ~14,400 calls/day | 90%
Real-time dashboard | 35 tokens | 35 calls | 4 calls | 88.6%
The Key Insight: Using the multi-asset endpoint instead of individual token calls reduces your request count by up to 90%. This means you can monitor more tokens, update more frequently, and stay well within rate limits.

Understanding Rate Limits

The DexPaprika API is public with no formal limits currently enforced. Best practices like batching ensure sustainable usage as the service matures.
By batching requests:
  • Reduce server load — Fewer HTTP connections
  • Improve latency — Single network round-trip for multiple tokens (see the timing sketch below)
  • Future-proof your app — When rate limits are introduced, you’re already optimized
  • Scale effortlessly — Monitor hundreds of tokens without performance degradation
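If you want to see the latency difference on your own connection, here is a rough sketch that times individual calls against a single batched call (Python; the three addresses are well-known Ethereum tokens used later in this guide, and absolute timings will vary):
import time
import requests

BASE = "https://api.dexpaprika.com/networks/ethereum"
TOKENS = [
    "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",  # WETH
    "0xdac17f958d2ee523a2206206994597c13d831ec7",  # USDT
    "0x6b175474e89094c44da98b954eedeac495271d0f",  # DAI
]

# One round-trip per token
start = time.perf_counter()
for address in TOKENS:
    requests.get(f"{BASE}/tokens/{address}", timeout=5)
individual = time.perf_counter() - start

# One round-trip for the whole batch
start = time.perf_counter()
requests.get(f"{BASE}/multi/prices", params={"tokens": ",".join(TOKENS)}, timeout=5)
batched = time.perf_counter() - start

print(f"individual: {individual:.2f}s, batched: {batched:.2f}s")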

Step 1: Understanding the Endpoint

The multi-asset pricing endpoint accepts:
  • Network (path parameter): e.g., ethereum, solana, base
  • Tokens (query parameter): Comma-separated list of up to 10 token addresses
  • Constraint: Maximum 10 tokens per request, 2000 character limit
GET /networks/{network}/multi/prices?tokens={token1},{token2},...,{token10}
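A minimal Python sketch of building the request and enforcing those constraints client-side before sending anything (the 10-token and 2000-character limits come from the constraints listed above):
import requests

def fetch_multi_prices(network: str, tokens: list[str]) -> list[dict]:
    """Fetch prices for up to 10 tokens in a single multi-asset call."""
    if not 1 <= len(tokens) <= 10:
        raise ValueError("tokens must contain between 1 and 10 addresses")
    tokens_param = ",".join(tokens)
    if len(tokens_param) > 2000:
        raise ValueError("tokens parameter exceeds the 2000 character limit")
    url = f"https://api.dexpaprika.com/networks/{network}/multi/prices"
    response = requests.get(url, params={"tokens": tokens_param}, timeout=5)
    response.raise_for_status()
    return response.json()  # [{"id": ..., "chain": ..., "price_usd": ...}, ...]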

Step 2: Simple Batch Request

Fetch prices for 3 tokens on Ethereum in a single request:
curl -X GET "https://api.dexpaprika.com/networks/ethereum/multi/prices?tokens=0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2,0xdac17f958d2ee523a2206206994597c13d831ec7,0x6b175474e89094c44da98b954eedeac495271d0f" | jq
Response:
[
  {
    "id": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",
    "chain": "ethereum",
    "price_usd": 4400.57
  },
  {
    "id": "0xdac17f958d2ee523a2206206994597c13d831ec7",
    "chain": "ethereum",
    "price_usd": 1.0003
  },
  {
    "id": "0x6b175474e89094c44da98b954eedeac495271d0f",
    "chain": "ethereum",
    "price_usd": 1.0001
  }
]
Duplicate input addresses may produce duplicate entries in the response. Dedupe client-side if required.
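If you need that, here is a small client-side deduplication sketch (Python; response entries are keyed on the id field shown above):
def dedupe_addresses(addresses):
    """Drop duplicate input addresses while preserving order (hex addresses are case-insensitive)."""
    seen = set()
    unique = []
    for address in addresses:
        key = address.lower()
        if key not in seen:
            seen.add(key)
            unique.append(address)
    return unique

def dedupe_response(entries):
    """Keep the first entry per token id if the API returns duplicates."""
    by_id = {}
    for entry in entries:
        by_id.setdefault(entry["id"], entry)
    return list(by_id.values())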

Step 3: Batching Larger Token Lists in Node.js

If you need to monitor more than 10 tokens, split them into chunks:
const https = require('https');

const tokenAddresses = [
  "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2", // WETH
  "0xdac17f958d2ee523a2206206994597c13d831ec7", // USDT
  "0x6b175474e89094c44da98b954eedeac495271d0f", // DAI
  "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48", // USDC
  "0x2260fac5e5542a773aa44fbcff9fafb9a190d659", // WBTC
  "0x7fc66500c84a76ad7e9c93437e434122a1b63cad", // AAVE
  "0x1f9840a85d5af5bf1d1762f925bdaddc4201f984", // UNI
  "0x6982508145454ce325ddbe47a25d4ec3d2311933", // PEPE
  "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb", // bbb (example)
  "0xcccccccccccccccccccccccccccccccccccccccc", // ccc (example)
  "0xdddddddddddddddddddddddddddddddddddddddd", // ddd (example)
  "0xeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee", // eee (example)
];

// Split into chunks of 10
function chunkArray(arr, size) {
  const chunks = [];
  for (let i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size));
  }
  return chunks;
}

async function fetchPricesInBatches(network, tokens) {
  const chunks = chunkArray(tokens, 10);
  const allPrices = [];

  for (const chunk of chunks) {
    const tokensParam = chunk.join(',');
    const url = `https://api.dexpaprika.com/networks/${network}/multi/prices?tokens=${tokensParam}`;

    try {
      const prices = await new Promise((resolve, reject) => {
        https.get(url, (res) => {
          if (res.statusCode !== 200) {
            res.resume(); // drain the response so the socket is released
            reject(new Error(`HTTP ${res.statusCode}`));
            return;
          }
          let data = '';
          res.on('data', (piece) => { data += piece; }); // `piece` avoids shadowing the outer `chunk`
          res.on('end', () => {
            try { resolve(JSON.parse(data)); } catch (parseError) { reject(parseError); }
          });
        }).on('error', reject);
      });

      allPrices.push(...prices);
      console.log(`Fetched batch: ${chunk.length} tokens`);
    } catch (error) {
      console.error(`Error fetching batch:`, error);
    }
  }

  return allPrices;
}

// Usage
(async () => {
  const prices = await fetchPricesInBatches('ethereum', tokenAddresses);
  console.log(`\nTotal tokens with prices: ${prices.length}`);
  prices.forEach(token => {
    console.log(`${token.id}: $${token.price_usd}`);
  });
})();

Axios variant (Node.js)

import axios from 'axios';

async function fetchBatch(network, addresses) {
  const tokensParam = addresses.join(',');
  const url = `https://api.dexpaprika.com/networks/${network}/multi/prices`;
  const { data } = await axios.get(url, { params: { tokens: tokensParam }, timeout: 5000 });
  return data; // [{ id, chain, price_usd }, ...]
}
Key Observation: 12 tokens require only 2 API calls instead of 12! This is an 83% reduction in network requests.

Step 4: Python Implementation with Real-Time Monitoring

Build a simple price monitor that updates every 30 seconds:
import requests
import time
from typing import List, Dict

class TokenPriceMonitor:
    def __init__(self, network: str = 'ethereum'):
        self.network = network
        self.base_url = f"https://api.dexpaprika.com/networks/{network}/multi/prices"
        self.batch_size = 10
        self.prices_cache = {}

    def chunk_tokens(self, tokens: List[str]) -> List[List[str]]:
        """Split tokens into batches of max 10"""
        return [tokens[i:i + self.batch_size] for i in range(0, len(tokens), self.batch_size)]

    def fetch_prices(self, tokens: List[str]) -> Dict[str, float]:
        """Fetch prices for multiple tokens in batches"""
        batches = self.chunk_tokens(tokens)
        results = {}

        for batch in batches:
            tokens_param = ','.join(batch)

            try:
                response = requests.get(
                    self.base_url,
                    params={'tokens': tokens_param},
                    timeout=5
                )
                response.raise_for_status()

                data = response.json()
                for token in data:
                    results[token['id']] = token['price_usd']

            except requests.RequestException as e:
                print(f"Error fetching batch: {e}")
                # Optional: basic backoff before next batch to avoid tight retry loops
                time.sleep(1)

        return results

    def monitor_prices(self, tokens: List[str], interval: int = 30):
        """Continuously monitor token prices"""
        print(f"Monitoring {len(tokens)} tokens on {self.network}")
        print(f"Total API calls per update: {len(self.chunk_tokens(tokens))}")

        try:
            while True:
                prices = self.fetch_prices(tokens)

                print(f"\n[{time.strftime('%Y-%m-%d %H:%M:%S')}] Updated {len(prices)} tokens")
                for token, price in list(prices.items())[:5]:
                    print(f"  {token}: ${price:.6f}")

                if len(prices) > 5:
                    print(f"  ... and {len(prices) - 5} more tokens")

                time.sleep(interval)

        except KeyboardInterrupt:
            print("\nMonitoring stopped.")

# Example usage
if __name__ == "__main__":
    monitor = TokenPriceMonitor(network='ethereum')

    tokens_to_watch = [
        "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",  # WETH
        "0xdac17f958d2ee523a2206206994597c13d831ec7",  # USDT
        "0x6b175474e89094c44da98b954eedeac495271d0f",  # DAI
        "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48",  # USDC
        "0x2260fac5e5542a773aa44fbcff9fafb9a190d659",  # WBTC
    ]

    monitor.monitor_prices(tokens_to_watch)

Step 5: Error Handling & Edge Cases

Handling Invalid or Unpriced Tokens

Some tokens may be invalid or have no price data. The endpoint returns only the tokens it can price:
# Request with some invalid tokens
curl -X GET "https://api.dexpaprika.com/networks/ethereum/multi/prices?tokens=0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2,0xdeaddeaddeaddeaddeaddeaddeaddeaddaddadd" | jq
Response (only the priced token is returned):
[
  {
    "id": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",
    "chain": "ethereum",
    "price_usd": 4400.57
  }
]

Edge-case requests

# 0 tokens (expect 400)
curl -s -o /dev/stderr -w "%{http_code}\n" "https://api.dexpaprika.com/networks/ethereum/multi/prices"

# 11 tokens (expect 400)
curl -s -o /dev/stderr -w "%{http_code}\n" "https://api.dexpaprika.com/networks/ethereum/multi/prices?tokens=0x1,0x2,0x3,0x4,0x5,0x6,0x7,0x8,0x9,0xA,0xB"
Best Practice: Always validate the response. Check that the number of returned tokens matches your expectations. Handle missing tokens gracefully in your application.
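One way to implement that check is to compare the addresses you requested against the ids that actually came back (a sketch; it assumes the response ids use the same hex addresses you sent, as in the examples above):
def find_missing(requested, response):
    """Return requested addresses that the endpoint did not price."""
    returned_ids = {entry["id"].lower() for entry in response}
    return [address for address in requested if address.lower() not in returned_ids]

# Example: warn instead of failing when some tokens have no price
# missing = find_missing(tokens, prices)
# if missing:
#     print(f"Warning: no price returned for {len(missing)} token(s): {missing}")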

Step 6: Comparison: Single vs. Batch Requests

Request 100 tokens for a dashboard

❌ Without Batching:
for token in "${token_addresses[@]}"; do
  curl -X GET "https://api.dexpaprika.com/networks/ethereum/tokens/$token"
done
# Result: 100 API calls
✅ With Batching:
# Call 1
curl -X GET "https://api.dexpaprika.com/networks/ethereum/multi/prices?tokens=token1,token2,...,token10"

# Call 2
curl -X GET "https://api.dexpaprika.com/networks/ethereum/multi/prices?tokens=token11,token12,...,token20"

# ... Continue for remaining batches ...
# Result: 10 API calls
Impact:
  • 90% fewer requests
  • 10x faster execution (single network round-trip per batch vs. 100 individual calls) ✅
  • Lower latency for your users ✅
  • Future-proof against rate limits ✅

Performance Metrics

Benchmark: Fetching prices for 50 tokens on Ethereum
Metric | Single Requests | Batched (5 calls)
API Calls | 50 | 5
Total Network Round-trips | 50 | 5
Estimated Time (assuming 100ms/call) | 5,000ms | 500ms
Bandwidth Used | ~50KB | ~5KB
Server Load | High | Low
Result: Batching achieves 10x faster responses while reducing server load and bandwidth by 90%.

Best Practices Summary

  1. Always batch — Use the multi-asset endpoint instead of individual calls
  2. Chunk large lists — Split more than 10 tokens into multiple batches (max 10 per call)
  3. Validate responses — Not all requested tokens may have prices; handle missing data
  4. Cache results — Store prices and update on a schedule instead of on every request
  5. Handle errors gracefully — Network timeouts and invalid tokens are normal; retry with backoff (see the sketch after this list)
  6. Monitor performance — Track API call counts to measure your optimization improvements
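A minimal sketch combining points 4 and 5: a short-lived in-memory cache plus exponential backoff on failed requests (Python; the 30-second TTL and 3 retries are arbitrary choices, not API requirements):
import time
import requests

CACHE_TTL = 30  # seconds; tune to how fresh your prices need to be
_cache = {"timestamp": 0.0, "prices": {}}

def get_prices_cached(network, tokens, retries=3):
    """Return cached prices while fresh; otherwise re-fetch one batch with exponential backoff."""
    if _cache["prices"] and time.time() - _cache["timestamp"] < CACHE_TTL:
        return _cache["prices"]

    url = f"https://api.dexpaprika.com/networks/{network}/multi/prices"
    for attempt in range(retries):
        try:
            # First batch only; chunk as in Step 3 for lists longer than 10 tokens
            response = requests.get(url, params={"tokens": ",".join(tokens[:10])}, timeout=5)
            response.raise_for_status()
            prices = {entry["id"]: entry["price_usd"] for entry in response.json()}
            _cache.update(timestamp=time.time(), prices=prices)
            return prices
        except requests.RequestException:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s between attempts
    return _cache["prices"]  # fall back to stale data if every retry fails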


FAQs

Q: How many tokens can I include in a single request?
A: 10 tokens per request. Requests with more than 10 tokens will return HTTP 400.

Q: What happens if I pass the same address more than once?
A: Duplicate inputs may yield duplicate entries in the response. Dedupe client-side if needed.

Q: What happens to tokens that have no price?
A: Tokens without prices are silently omitted from the response. Only priced tokens are returned.

Q: Does the response preserve the order of my input?
A: No. The response order is not guaranteed to match your input order. Use the id field to identify each token.

Q: Can I mix tokens from different networks in one request?
A: No. Each batch must be for a single network. Use separate calls for different networks.

Q: How often should I refresh prices?
A: It depends on your use case. For dashboards, every 30-60 seconds is common. For trading bots, every 1-5 seconds. Batching enables frequent updates without excessive API calls.

Q: Is there still a reason to use the single-token endpoint?
A: Yes. The single token endpoint (GET /networks/{network}/tokens/{token_address}) provides detailed metadata like liquidity, volume, and holdings. Use it for deep analysis; use multi-asset for just prices.

Q: Can I send an empty tokens list?
A: No. At least one valid token is required. The endpoint returns HTTP 400 if the tokens list is empty.