[Figure: chunk size vs. upload throughput, with the optimal range highlighted]

Choosing Optimal Chunk Sizes for Resumable Uploads

Learn how to choose the right chunk size for resumable uploads — balancing throughput, retry cost, memory usage, and server constraints for real-world upload pipelines.

Guides · Updated 2026-04-04

Chunk size looks like a simple configuration number — set chunkSize in your Resumable.js config and move on. In practice, it's a dial between competing pressures: throughput, retry cost, memory footprint, and server-side constraints. Get it wrong in either direction and you'll see slower uploads, wasted bandwidth on retries, or outright failures when chunks exceed server limits.

This guide breaks down the trade-offs, gives you concrete starting points for different network conditions, and shows how to implement adaptive sizing that adjusts chunk size based on real-time conditions.

The Trade-Off Spectrum

Every chunked upload request carries fixed overhead: TCP connection setup, TLS handshake (if HTTPS), HTTP headers, multipart boundary parsing, and server-side processing per request. This overhead is roughly constant regardless of chunk size. The question is how much useful payload you amortize it across.

Too small (e.g., 256 KB chunks for a 1 GB file): You're making ~4,000 HTTP requests. Each one pays the fixed overhead cost. Connection reuse (HTTP keep-alive) helps, but you still pay for header parsing, server-side routing, chunk storage I/O, and response handling on every request. The upload spends more time in overhead than in actual data transfer. You'll also see higher CPU usage on the server from request handling.

Too large (e.g., 100 MB chunks for a 1 GB file): Only 10 requests, minimal overhead ratio. But if a chunk fails at 98% completion, you retransmit nearly 100 MB. On unreliable connections, this can mean the upload never finishes — each attempt at the large chunk fails before completing, and you're stuck in a retry loop. Large chunks also increase memory pressure in the browser during FormData construction and on the server during reception.

The sweet spot depends on your specific combination of network conditions, server infrastructure, and file sizes.
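To make the two extremes concrete, here is a small helper (hypothetical, not part of Resumable.js) that counts how many requests a file produces at a given chunk size. It mirrors Resumable.js's default behavior of merging the trailing remainder into the final chunk (`forceChunkSize: false`), though the library's exact rounding may differ slightly:

```javascript
// Hypothetical helper: how many chunk requests does a file produce?
// Rounds down but never below one request, approximating Resumable.js's
// default of folding the remainder into the last chunk.
function requestCount(fileBytes, chunkBytes) {
  return Math.max(1, Math.floor(fileBytes / chunkBytes));
}

const KB = 1024;
const MB = 1024 * KB;
const GB = 1024 * MB;

requestCount(1 * GB, 256 * KB); // 4096 requests — overhead-dominated
requestCount(1 * GB, 100 * MB); // 10 requests — cheap, but costly retries
```

Each of those 4,096 requests pays the full fixed overhead described above, which is exactly what the next section quantifies.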

Throughput Math

Let's put rough numbers on the overhead. A typical HTTPS request to a cloud endpoint involves:

  • TLS handshake: ~50–100 ms for a new connection (amortized to near zero with connection reuse)
  • HTTP headers + multipart framing: ~1–2 KB per request
  • Server-side processing per chunk: varies, but 5–50 ms for disk write + response generation
  • Round-trip latency: 20–200 ms depending on geography

For a connection reusing an existing TLS session, the per-request overhead is roughly 30–100 ms and 2 KB of header data. At 1 MB chunk sizes, that's 30–100 ms overhead per 1 MB transferred. At 10 MB chunks, it's 30–100 ms per 10 MB — a 10x improvement in overhead ratio.

On a 100 Mbps connection, uploading 1 MB takes about 80 ms of pure transfer time. So the overhead (30–100 ms) is significant relative to the transfer — you're spending almost as much time on ceremony as on actual data transfer. At 10 MB chunks, the transfer takes ~800 ms and the overhead becomes a rounding error.

This math shifts dramatically on slower connections. On a 5 Mbps mobile connection, 1 MB takes ~1.6 seconds to transfer. The 30–100 ms overhead is negligible. Here, smaller chunks are fine from a throughput perspective, and they give you the retry advantage.
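The back-of-envelope math above can be captured in a tiny model. This is a sketch under the section's own assumptions (a fixed 30–100 ms per-request overhead, no packet loss), not a measurement tool:

```javascript
// Overhead model: effective throughput for a given chunk size, link
// bandwidth, and fixed per-request overhead (connection reuse assumed).
function effectiveMbps(chunkBytes, linkMbps, overheadMs) {
  const transferMs = (chunkBytes * 8) / (linkMbps * 1000); // pure transfer time
  const totalMs = transferMs + overheadMs;                 // plus fixed ceremony
  return (chunkBytes * 8) / (totalMs * 1000);              // realized Mbps
}

const MB = 1024 * 1024;
// 100 Mbps link, 65 ms overhead (midpoint of the 30–100 ms range):
effectiveMbps(1 * MB, 100, 65);  // ≈ 56 Mbps — overhead eats ~44%
effectiveMbps(10 * MB, 100, 65); // ≈ 93 Mbps — overhead is noise
// 5 Mbps mobile link: overhead barely matters even at 1 MB chunks:
effectiveMbps(1 * MB, 5, 65);    // ≈ 4.8 Mbps
```

Plugging in your own measured overhead and bandwidth gives a first-order estimate of where larger chunks stop paying off.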

Failure Cost

This is where chunk size has the most dramatic impact. The cost of a failed chunk is proportional to the chunk size multiplied by how far it got before failing.

| Chunk Size | Failure at 98% | Data Retransmitted |
| --- | --- | --- |
| 1 MB | 0.98 MB wasted | 1 MB retransmit |
| 5 MB | 4.9 MB wasted | 5 MB retransmit |
| 10 MB | 9.8 MB wasted | 10 MB retransmit |
| 50 MB | 49 MB wasted | 50 MB retransmit |

On an unstable connection where, say, 5% of requests fail at an average of 50% completion, the expected waste per attempt scales linearly with chunk size: 50% completion × 1 MB × 5% failure rate is roughly 25 KB for a 1 MB chunk, versus roughly 1.25 MB for a 50 MB chunk. The 5% figure is also optimistic for large chunks: drops on a flaky link are roughly a per-second risk, so a chunk that spends 50 times longer on the wire is far more likely to be interrupted in the first place. Over a 10 GB upload, the difference between 1 MB and 50 MB chunks can mean gigabytes of extra bandwidth spent on retransmission.
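The failure economics can be sketched with a simple model in which connection drops arrive at a steady average rate, so longer transfers are proportionally more likely to be interrupted. The mean-time-between-drops figure below is an illustrative assumption, not a measured value:

```javascript
// Probability a chunk fails, modeling drops as a per-second hazard with a
// given mean time between drops (Poisson assumption, for illustration).
function chunkFailureProb(chunkBytes, linkMbps, meanSecsBetweenDrops) {
  const transferSecs = (chunkBytes * 8) / (linkMbps * 1e6);
  return 1 - Math.exp(-transferSecs / meanSecsBetweenDrops);
}

// Expected bytes wasted per successfully delivered chunk: each failed
// attempt wastes ~50% of the chunk on average, and retries are geometric.
function expectedWastePerChunk(chunkBytes, linkMbps, meanSecsBetweenDrops) {
  const p = chunkFailureProb(chunkBytes, linkMbps, meanSecsBetweenDrops);
  const expectedFailures = p / (1 - p);
  return expectedFailures * chunkBytes * 0.5;
}

const MB = 1024 * 1024;
// 5 Mbps link, one drop per minute on average:
expectedWastePerChunk(1 * MB, 5, 60);  // ≈ 15 KB wasted per 1 MB delivered
expectedWastePerChunk(50 * MB, 5, 60); // over 70 MB wasted per 50 MB delivered
```

In this regime the 50 MB chunk wastes more data than it delivers, which is the "retry loop that never finishes" failure mode described above.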

The retries and resume guide covers how Resumable.js handles these failures and the retry strategies available.

Memory and Browser Considerations

File.slice() in the browser is lazy — it creates a Blob reference without reading the data into memory. But when Resumable.js constructs the FormData for the upload request, the browser must read the slice into memory. For a 10 MB chunk, that's 10 MB of memory per in-flight request. With simultaneousUploads: 3, that's 30 MB of memory committed to upload buffers.

This matters on mobile devices with limited memory, and it matters when your users might have other memory-intensive tabs open. For most desktop scenarios, 50–100 MB of upload buffer memory is fine. For mobile web apps or PWAs, keeping it under 15–20 MB total (3 concurrent × 5 MB chunks) is more prudent.
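One way to apply this is to derive the Resumable.js options from a memory budget rather than picking a chunk size first. The helper and the budget figures below are illustrative assumptions, not browser-enforced limits:

```javascript
// Sketch: derive upload options from a memory budget, so in-flight
// FormData buffers (simultaneousUploads × chunkSize) stay within bounds.
function uploadConfig(memoryBudgetBytes, simultaneousUploads) {
  return {
    simultaneousUploads,
    chunkSize: Math.floor(memoryBudgetBytes / simultaneousUploads),
  };
}

const MB = 1024 * 1024;
uploadConfig(15 * MB, 3); // mobile budget: 3 concurrent x 5 MB chunks
uploadConfig(90 * MB, 3); // desktop budget: 3 concurrent x 30 MB chunks
```

Before adopting the result, still check it against the throughput and server-limit constraints in the other sections; memory is only one of the dials.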

The browser also enforces limits on individual XHR/fetch request sizes, though these are generous (typically 2 GB+). The practical limit is usually on the server side.

Server-Side Constraints

Your chunk size must respect the request body limits configured at every layer of your server infrastructure:

  • nginx: client_max_body_size defaults to 1 MB. If your chunks are larger, you'll get 413 errors. This is the most common gotcha.
  • Apache: LimitRequestBody defaults to unlimited, but many hosting environments set it lower.
  • AWS API Gateway: 10 MB hard limit on payload size. Non-negotiable. If you're uploading through API Gateway, chunks must be under 10 MB.
  • Cloudflare (including Workers): request bodies are capped at 100 MB on the Free and Pro plans; higher limits are available on Business and Enterprise.
  • AWS S3 multipart: Minimum part size of 5 MB (except for the last part). If you're uploading chunks directly to S3, you need at least 5 MB.

The cloud storage comparison details the specific limits for S3, R2, and B2 in the context of large uploads.

Your chunk size must fit between the largest minimum and the smallest maximum in your infrastructure chain. If traffic passes through nginx (1 MB default) → API Gateway (10 MB limit) → S3 multipart (5 MB minimum), the only workable range is 5–10 MB, and you'll need to raise nginx's client_max_body_size to at least match your chunk size, since the 1 MB default rejects everything in that range.
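This intersection check is easy to encode. The sketch below computes the valid chunk-size range for a chain of layers; the limits object is illustrative, so substitute your real infrastructure's values:

```javascript
// Sketch: intersect every layer's limits into one valid chunk-size range.
// Returns null when no single size satisfies all layers.
function validChunkRange(limits) {
  const min = Math.max(...limits.map((l) => l.minBytes ?? 0));
  const max = Math.min(...limits.map((l) => l.maxBytes ?? Infinity));
  return min <= max ? { min, max } : null;
}

const MB = 1024 * 1024;
const chain = [
  { layer: 'nginx', maxBytes: 10 * MB },       // client_max_body_size 10m;
  { layer: 'api-gateway', maxBytes: 10 * MB }, // hard payload limit
  { layer: 's3-multipart', minBytes: 5 * MB }, // minimum part size
];
validChunkRange(chain); // { min: 5 MB, max: 10 MB }
```

With nginx left at its 1 MB default, the same function returns null, which is exactly the misconfiguration that surfaces as 413 errors in production.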

Practical Starting Points

These ranges work as sensible defaults before you have real measurement data:

| Network Condition | Recommended Chunk Size | Rationale |
| --- | --- | --- |
| Mobile / unstable (< 10 Mbps) | 1–2 MB | Low retry cost, fits in memory |
| Broadband (10–100 Mbps) | 5–10 MB | Good throughput/retry balance |
| LAN / datacenter (100+ Mbps) | 10–20 MB | Minimize request overhead |
| Known reliable + large files | 20–50 MB | Max throughput, acceptable retry cost |

For most web applications serving a general audience, 5 MB is a reasonable starting default. It clears S3's minimum part size, fits comfortably under most server limits, keeps retry cost manageable, and the throughput overhead is acceptable on connections above 10 Mbps.

```javascript
const r = new Resumable({
  target: '/api/upload',
  chunkSize: 5 * 1024 * 1024, // 5 MB — a solid default
  simultaneousUploads: 3,
});
```

Adaptive Chunk Sizing

Rather than picking a static size, you can start with smaller chunks and increase the size if uploads are succeeding consistently. This gives you low retry cost during instability and better throughput when the connection is solid.

```javascript
let currentChunkSize = 2 * 1024 * 1024; // Start at 2 MB
let consecutiveSuccesses = 0;
const MIN_CHUNK_SIZE = 2 * 1024 * 1024;
const MAX_CHUNK_SIZE = 20 * 1024 * 1024;

const r = new Resumable({
  target: '/api/upload',
  chunkSize: currentChunkSize,
});

r.on('fileSuccess', () => {
  consecutiveSuccesses++;
  if (consecutiveSuccesses >= 5 && currentChunkSize < MAX_CHUNK_SIZE) {
    currentChunkSize = Math.min(currentChunkSize * 2, MAX_CHUNK_SIZE);
    consecutiveSuccesses = 0;
    // Apply the new size to the instance so newly added files pick it up.
    // Note: this affects new files, not in-progress ones.
    r.opts.chunkSize = currentChunkSize;
  }
});

r.on('fileError', () => {
  consecutiveSuccesses = 0;
  currentChunkSize = Math.max(currentChunkSize / 2, MIN_CHUNK_SIZE);
  r.opts.chunkSize = currentChunkSize;
});
```

A key caveat: Resumable.js determines chunk boundaries when a file is added, based on the chunkSize at that moment. Changing chunkSize on the Resumable instance mid-upload doesn't re-slice in-progress files. The adaptive sizing applies to files added after the change. For true per-chunk adaptivity, you'd need to operate at a lower level than Resumable's built-in chunking — which is rarely worth the complexity.

Measuring Your Specific Pipeline

Theory gets you to a starting point. Measurement gets you to the right answer. The variables that matter are specific to your infrastructure: your CDN's request overhead, your server's chunk processing time, your users' actual network conditions.

A practical measurement approach:

  1. Pick 3–4 chunk sizes spanning your expected range (e.g., 1 MB, 5 MB, 10 MB, 20 MB).
  2. Upload a representative file size (say, 500 MB) at each chunk size.
  3. Measure total wall-clock time, not just transfer speed. Include chunk verification, server processing, and the final merge step.
  4. Run the test from at least two network conditions: your office (good) and a throttled connection simulating mobile (poor).
  5. Compare. The right chunk size is the one that minimizes total upload time across your expected range of network conditions, while staying within your server constraints.
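Once the trials are done, step 5 is a simple comparison. The helper below picks the chunk size with the best worst-case time across conditions; the data shape and the timing numbers are illustrative assumptions:

```javascript
// Sketch: given wall-clock timings from trial uploads (one per chunk size,
// per network condition), pick the size with the best worst-case time.
function bestChunkSize(trials) {
  // trials: [{ chunkSizeBytes, condition, totalMs }, ...]
  const worstBySize = new Map();
  for (const t of trials) {
    const prev = worstBySize.get(t.chunkSizeBytes) ?? 0;
    worstBySize.set(t.chunkSizeBytes, Math.max(prev, t.totalMs));
  }
  let best = null;
  for (const [chunkSizeBytes, worstMs] of worstBySize) {
    if (!best || worstMs < best.worstMs) best = { chunkSizeBytes, worstMs };
  }
  return best;
}

const MB = 1024 * 1024;
bestChunkSize([
  { chunkSizeBytes: 1 * MB,  condition: 'office',    totalMs: 52_000 },
  { chunkSizeBytes: 1 * MB,  condition: 'throttled', totalMs: 410_000 },
  { chunkSizeBytes: 5 * MB,  condition: 'office',    totalMs: 43_000 },
  { chunkSizeBytes: 5 * MB,  condition: 'throttled', totalMs: 402_000 },
  { chunkSizeBytes: 10 * MB, condition: 'office',    totalMs: 41_000 },
  { chunkSizeBytes: 10 * MB, condition: 'throttled', totalMs: 445_000 },
]); // picks 5 MB: best worst-case (throttled) time
```

Minimizing the worst case is one defensible policy; if your audience skews toward fast connections, weighting the conditions by user share is equally reasonable.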

The timeouts guide covers how to set appropriate timeout values for your chosen chunk size — a 20 MB chunk on a slow connection needs a much longer timeout than a 1 MB chunk. Getting the chunk size right and the timeout wrong still results in failures.

The chunking fundamentals guide covers how Resumable.js splits files and reassembles them on the server side. Start there if you need to understand the mechanics before tuning the numbers.