"Multipart" Means Three Different Things
The word "multipart" causes more confusion in upload architecture discussions than any other term. Depending on context, it can refer to three completely different mechanisms. Mixing them up leads to real architectural mistakes — building the wrong pipeline, choosing the wrong library, or creating unnecessary complexity.
Here are the three meanings, clearly separated:
- HTTP `multipart/form-data` — a single HTTP request containing form fields and file data; the browser's native file upload mechanism
- Client-side chunked uploads — splitting a file into pieces with `File.slice()` and sending each as a separate HTTP request (what Resumable.js does)
- Storage multipart APIs — server-side APIs for uploading large objects in parts (S3 Multipart Upload, R2 Multipart, B2 Large File)
They can be used independently or together. Understanding when each applies is essential to building the right upload pipeline.
HTTP multipart/form-data: The Browser's Built-In Mechanism
When you submit an HTML form with <input type="file">, the browser sends a multipart/form-data request. This is a single HTTP request where the body contains multiple "parts" separated by a boundary string — form fields and file content packed together.
```http
POST /upload HTTP/1.1
Content-Type: multipart/form-data; boundary=----FormBoundary123

------FormBoundary123
Content-Disposition: form-data; name="username"

alice
------FormBoundary123
Content-Disposition: form-data; name="file"; filename="photo.jpg"
Content-Type: image/jpeg

[binary file data]
------FormBoundary123--
```
This is simple and universally supported. Every web framework parses it natively. It's the right choice when:
- Files are small (under 10–50 MB)
- You don't need progress tracking beyond what `XMLHttpRequest` provides
- Resume after failure isn't needed
- The upload is part of a form submission with other fields
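For those cases, the entire upload is one `FormData` POST. A minimal sketch, assuming a hypothetical `/upload` endpoint and field names (`username`, `file`) matching the request example above:

```javascript
// Build the multipart/form-data body for a small, single-request upload.
function buildUploadForm(file, username) {
  const form = new FormData();
  form.append('username', username);
  // Third argument is the filename; fall back for plain Blobs without a name
  form.append('file', file, file.name ?? 'upload.bin');
  return form;
}

// Sending it is one request; the browser (or fetch) sets the
// multipart boundary in the Content-Type header automatically.
async function uploadSmallFile(file, username) {
  const response = await fetch('/upload', {
    method: 'POST',
    body: buildUploadForm(file, username),
  });
  if (!response.ok) throw new Error(`upload failed: ${response.status}`);
}
```

Note that you should not set the `Content-Type` header yourself here; doing so would omit the boundary string that the parser needs.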
The limitation is that the entire file must be sent in one request. If the connection drops at 95%, the entire upload restarts. There's no way to resume, no granular progress per chunk, and the server must buffer the entire file (or stream it carefully) before processing.
Client-Side Chunked Uploads: What Resumable.js Does
Resumable.js takes a different approach. Instead of sending the file in one request, it slices the file into chunks using the File.slice() API and sends each chunk as a separate HTTP request. The server receives individual chunks and assembles them into the original file.
```javascript
// What Resumable.js does internally (simplified)
const file = inputElement.files[0];
const chunkSize = 2 * 1024 * 1024; // 2 MB
const totalChunks = Math.ceil(file.size / chunkSize);

for (let i = 0; i < totalChunks; i++) {
  const start = i * chunkSize;
  const end = Math.min(start + chunkSize, file.size);
  const chunk = file.slice(start, end);

  // Each chunk is sent as a separate request
  // with metadata: chunk number, total chunks, file identifier
  await uploadChunk(chunk, {
    resumableChunkNumber: i + 1,
    resumableTotalChunks: totalChunks,
    resumableIdentifier: fileId,
  });
}
```
Each chunk request typically uses multipart/form-data as its content type — so yes, Resumable.js uses HTTP multipart/form-data as the transport for each chunk. The chunking happens at a higher level.
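The `uploadChunk` helper in the simplified loop above isn't part of Resumable.js's public API. A hypothetical implementation, using the default Resumable.js field names and an assumed `/upload/chunk` endpoint, shows how each chunk rides inside its own `multipart/form-data` request:

```javascript
// Build the per-chunk multipart/form-data body. The resumable* field
// names match Resumable.js defaults; the endpoint is an assumption.
function buildChunkForm(chunk, meta) {
  const form = new FormData();
  form.append('resumableChunkNumber', String(meta.resumableChunkNumber));
  form.append('resumableTotalChunks', String(meta.resumableTotalChunks));
  form.append('resumableIdentifier', meta.resumableIdentifier);
  form.append('file', chunk); // the sliced Blob: the actual chunk bytes
  return form;
}

async function uploadChunk(chunk, meta) {
  const response = await fetch('/upload/chunk', {
    method: 'POST',
    body: buildChunkForm(chunk, meta),
  });
  if (!response.ok) {
    throw new Error(`chunk ${meta.resumableChunkNumber} failed: ${response.status}`);
  }
}
```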
This approach enables:
- Resumability — skip chunks the server already has via test requests
- Parallel uploads — send multiple chunks simultaneously via simultaneousUploads
- Granular progress — track completion per-chunk for accurate progress bars
- Large file support — files of any size, limited only by storage capacity
- Retry at chunk level — a failed chunk retries independently, not the whole file
The chunking guide covers the mechanics in depth.
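The "test requests" behind resumability work roughly like this: before uploading a chunk, the client issues a GET with the same identifying parameters, and a 200 response tells it the server already has that chunk, so it can be skipped. A sketch of that check, with the endpoint path as an assumption:

```javascript
// Build the query string for a chunk existence check,
// using the Resumable.js default parameter names.
function buildTestParams(meta) {
  return new URLSearchParams({
    resumableChunkNumber: String(meta.resumableChunkNumber),
    resumableTotalChunks: String(meta.resumableTotalChunks),
    resumableIdentifier: meta.resumableIdentifier,
  });
}

// GET instead of POST: ask the server whether this chunk already exists.
// 200 means "have it, skip"; anything else means upload (or re-upload) it.
async function chunkAlreadyUploaded(meta) {
  const response = await fetch(`/upload/chunk?${buildTestParams(meta)}`);
  return response.status === 200;
}
```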
Storage Multipart APIs: The Server-Side Pattern
Cloud storage providers have their own concept of "multipart" that operates server-side. Amazon S3 Multipart Upload, Cloudflare R2 Multipart, and Backblaze B2 Large File APIs all follow the same pattern:
- Initiate — tell the storage service you're starting a large upload, get an upload ID
- Upload parts — send numbered parts (minimum 5 MB for S3), each returns an ETag
- Complete — send a manifest of all part numbers and ETags, the service assembles the object
```javascript
// AWS SDK: S3 multipart upload (server-side)
const {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

// 1. Initiate
const { UploadId } = await s3.send(new CreateMultipartUploadCommand({
  Bucket: 'my-bucket',
  Key: 'videos/large-file.mp4',
}));

// 2. Upload part (repeated for each part)
const { ETag } = await s3.send(new UploadPartCommand({
  Bucket: 'my-bucket',
  Key: 'videos/large-file.mp4',
  UploadId,
  PartNumber: 1,
  Body: chunkBuffer,
}));

// 3. Complete
await s3.send(new CompleteMultipartUploadCommand({
  Bucket: 'my-bucket',
  Key: 'videos/large-file.mp4',
  UploadId,
  MultipartUpload: { Parts: [{ PartNumber: 1, ETag }] },
}));
```
This is a server-to-storage mechanism. The browser doesn't interact with S3 multipart APIs directly (unless you use presigned URLs, which bring their own complexity). The S3 and object storage guide covers the integration patterns.
The cloud storage comparison evaluates how S3, R2, and B2 handle multipart uploads differently — minimum part sizes, concurrency limits, and completion semantics vary across providers.
How They Relate: The Combined Pipeline
The most common production architecture uses all three:
```
Browser                          Server                        Storage
───────                          ──────                        ───────
File.slice() → chunk 1 ──POST──→ receive chunk 1 ────────────→
File.slice() → chunk 2 ──POST──→ receive chunk 2 ────────────→ S3 Multipart
File.slice() → chunk 3 ──POST──→ receive chunk 3 ────────────→ Upload Parts
        ...                             ...
                                 detect all chunks received
                                     ──→ Complete Multipart Upload
```
Resumable.js handles the client-side chunking. Your server receives chunks and either buffers them to disk or streams them directly as S3 multipart parts. When all chunks arrive, the server completes the storage-side multipart upload.
The AWS Lambda + S3 example implements exactly this pattern: Resumable.js chunks → Lambda endpoint → S3 multipart upload.
The key insight: client-side chunk boundaries don't have to align with storage part boundaries. You might receive 2 MB Resumable.js chunks but buffer them into 10 MB S3 parts (since S3 requires minimum 5 MB parts except for the last one). Or you might map them 1:1 if your chunk size meets the storage provider's minimum.
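One way to bridge that mismatch is to accumulate incoming client chunks in memory and flush a storage part whenever the buffer crosses the target part size. A sketch of the buffering logic; the sizes and the flush callback shape are illustrative assumptions, not any specific library's API:

```javascript
// Accumulates small client chunks into larger storage parts.
// flush(partNumber, buffer) is called once per completed part, e.g. to
// issue an UploadPartCommand with that buffer as the Body.
class PartBuffer {
  constructor(partSize, flush) {
    this.partSize = partSize;   // target part size, e.g. 10 MB for S3
    this.flush = flush;
    this.pending = [];
    this.pendingBytes = 0;
    this.partNumber = 0;
  }

  add(chunk) {
    this.pending.push(chunk);
    this.pendingBytes += chunk.length;
    // Flush as soon as we have at least one full part's worth of bytes
    if (this.pendingBytes >= this.partSize) this._emit();
  }

  // Call when the last client chunk has arrived; the final part
  // is allowed to be smaller than the minimum part size.
  end() {
    if (this.pendingBytes > 0) this._emit();
  }

  _emit() {
    this.flush(++this.partNumber, Buffer.concat(this.pending));
    this.pending = [];
    this.pendingBytes = 0;
  }
}
```

With 2 MB client chunks and a 10 MB `partSize`, five chunks accumulate before each flush, keeping every part (except possibly the last) above S3's 5 MB minimum.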
When Each Approach Shines
Use basic multipart/form-data when:
- File size is predictably small (< 10 MB)
- Upload is part of a traditional form submission
- You don't need resume, progress, or parallel upload
- Simplicity matters more than resilience
Use client-side chunked uploads (Resumable.js) when:
- Files can be large or unpredictable in size
- Users are on unreliable connections
- You need resume capability after interruption
- Granular progress tracking matters
- You need to upload files larger than server request size limits
- Parallel upload would improve throughput
The parallel vs resumable guide goes deeper on how these capabilities interact.
Use storage multipart APIs when:
- Your server needs to write large objects to cloud storage
- You want the storage provider to handle part assembly
- Server-to-server transfer efficiency matters
- You need the storage provider's built-in integrity checks (ETags)
Combine client chunking + storage multipart when:
- Users upload large files that end up in object storage
- You want end-to-end resilience: client → server → storage
- You need to minimize local disk usage on the server (stream chunks to storage parts)
Common Mistakes
Trying to call S3 multipart APIs directly from the browser. S3 multipart requires authentication. You'd need presigned URLs for each part, which means your server still handles initiation and completion. At that point, you're building your own chunked upload protocol. Resumable.js already solved this.
Using chunked uploads for small files. If every file is under 5 MB, the overhead of chunking — multiple requests, chunk management, reassembly — adds complexity without benefit. A single multipart/form-data POST is simpler and faster.
Confusing chunk size with S3 part size. S3 requires parts to be at least 5 MB (except the last). If your Resumable.js chunkSize is 1 MB, you can't map chunks 1:1 to S3 parts. Buffer multiple chunks into parts, or increase your chunk size. The optimal chunk sizes guide discusses sizing strategies.
Ignoring the reassembly step. Client-side chunking means your server must reassemble chunks into the original file. This isn't automatic — you need to track which chunks have arrived, detect completion, and concatenate them in order. The server receivers guide provides implementation patterns.
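A minimal sketch of the tracking half of that work, assuming an in-memory `Map` keyed by upload identifier (production code would persist this state so it survives restarts):

```javascript
// Records which chunk numbers have arrived per upload identifier and
// detects when the set is complete. Storing the chunk bytes themselves
// (on disk, or as storage parts) is a separate concern.
class ChunkTracker {
  constructor() {
    this.uploads = new Map();
  }

  // Returns true when this chunk completes the upload: time to reassemble.
  record(identifier, chunkNumber, totalChunks) {
    let upload = this.uploads.get(identifier);
    if (!upload) {
      upload = { received: new Set(), totalChunks };
      this.uploads.set(identifier, upload);
    }
    upload.received.add(chunkNumber); // Set makes duplicate chunks harmless
    return upload.received.size === upload.totalChunks;
  }

  // Used to answer resume "test requests": do we already have this chunk?
  has(identifier, chunkNumber) {
    return this.uploads.get(identifier)?.received.has(chunkNumber) ?? false;
  }
}
```

Once `record` returns true, the server can concatenate the stored chunks in chunk-number order (or complete the storage-side multipart upload) and discard the tracking entry.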
Decision Guide
| Factor | multipart/form-data | Client Chunked | Storage Multipart |
|---|---|---|---|
| Resume after failure | ✗ | ✓ | ✓ (server-side) |
| Progress per chunk | ✗ | ✓ | N/A |
| Parallel transfer | ✗ | ✓ | ✓ |
| Browser-native | ✓ | Needs library | N/A |
| Max file size | Server limit | Unlimited | Provider limit |
| Complexity | Low | Medium | Medium |
| Typical use | Forms | User uploads | Server → storage |
Most applications that handle file uploads of any significant size end up using Resumable.js (or similar) for client-side chunking, combined with storage multipart APIs on the backend. The browser's native multipart/form-data handles everything else — profile photos, avatars, config files, anything small and non-critical.
Pick the simplest approach that meets your reliability requirements. Add complexity only when the use case demands it.
