Developers evaluating chunked upload solutions eventually land on the same question: should I use a library like Resumable.js that gives me full control over the HTTP layer, or adopt the TUS protocol and its ecosystem of standardized clients and servers? The answer depends on your infrastructure, your team's preferences, and how much of the upload pipeline you want to own.
This comparison is practical. Both approaches work in production. The question is which set of trade-offs fits your situation.
## How Resumable.js works
Resumable.js operates at the application level with a straightforward model. The client uses the browser's File API to slice a file into chunks using File.slice(), then sends each chunk as a standard HTTP POST request to your server endpoint. Each request includes metadata — the file's unique identifier, the chunk number, total chunks, and total file size — as query parameters or form fields.
```javascript
const r = new Resumable({
  target: '/api/upload',
  chunkSize: 2 * 1024 * 1024, // 2 MB
  simultaneousUploads: 3,
  testChunks: true
});

r.assignBrowse(document.getElementById('file-input'));
r.on('fileAdded', () => r.upload());
```
The server receives ordinary POST requests. There's no special protocol to implement — you parse the multipart body, read the chunk metadata, write the chunk to disk or storage, and reassemble when all chunks arrive. Your server code might be Express, Django, Rails, Go, PHP — anything that handles HTTP.
Before uploading, Resumable.js can send a GET request for each chunk (testChunks) to ask the server "do you already have this one?" If the server responds with 200, that chunk is skipped. This is how resume works across sessions: the client re-checks all chunks, skips the ones already received, and uploads only what's missing.
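The server side of that probe reduces to a single status-code decision. A sketch, where the in-memory set and function name are hypothetical, and the choice of 204 for "not here yet" is a common convention rather than anything mandated — the point is that only a 200 tells the client to skip the chunk:

```javascript
// Decision behind a testChunks GET handler: 200 means "already have it,
// skip"; a non-200 such as 204 makes the client upload the chunk.
// receivedChunks is a Set of "identifier:chunkNumber" keys (illustrative).
function testChunkStatus(receivedChunks, identifier, chunkNumber) {
  return receivedChunks.has(`${identifier}:${chunkNumber}`) ? 200 : 204;
}
```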
For the full set of knobs — `chunkSize`, `maxChunkRetries`, `forceChunkSize`, `simultaneousUploads`, and more — see the configuration reference.
## How TUS works
TUS (officially styled "tus"; the project treats it as a name rather than an expanded acronym) is an open protocol for resumable uploads. It defines a specific set of HTTP interactions between client and server.
The flow:
- **Creation.** The client sends a POST to the upload endpoint with `Upload-Length` and `Upload-Metadata` headers. The server creates an upload resource and returns its URL in the `Location` header.
- **Uploading.** The client sends PATCH requests to that URL with an `Upload-Offset` header and `Content-Type: application/offset+octet-stream`. Data is appended sequentially.
- **Resuming.** If interrupted, the client sends HEAD to the upload URL. The server responds with the current `Upload-Offset`, and the client resumes from that byte.
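The offset bookkeeping that makes this flow work can be modeled in a few lines. A toy in-memory sketch (real servers like tusd add storage backends, auth, and extensions; the 204 and 409 status codes are the ones the tus 1.0 spec prescribes for PATCH):

```javascript
// Toy in-memory model of the tus flow: creation, PATCH with an offset
// check, HEAD reporting the confirmed offset. Illustrative only.
function createUpload(store, id, length) {
  store.set(id, { length, offset: 0, data: Buffer.alloc(0) });
}

// HEAD: tell the client how many bytes are safely stored.
function currentOffset(store, id) {
  return store.get(id).offset;
}

// PATCH: append only if the client's offset matches the server's.
function patchUpload(store, id, clientOffset, body) {
  const up = store.get(id);
  if (clientOffset !== up.offset) return 409; // Conflict: offsets disagree
  up.data = Buffer.concat([up.data, body]);
  up.offset += body.length;
  return 204; // tus acknowledges a successful PATCH with 204 No Content
}
```

The offset check is what makes resume safe: bytes that arrive with the wrong offset are rejected rather than silently corrupting the upload.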
```javascript
import * as tus from 'tus-js-client';

const upload = new tus.Upload(file, {
  endpoint: 'https://files.example.com/uploads/',
  chunkSize: 2 * 1024 * 1024,
  retryDelays: [0, 1000, 3000, 5000],
  onProgress: (loaded, total) => {
    console.log(`${((loaded / total) * 100).toFixed(1)}%`);
  }
});

upload.start();
```
The protocol is versioned (currently 1.0.0) and has optional extensions for creation-with-upload, concatenation, expiration, and checksum verification. Server implementations include tusd (Go, the reference server), tus-node-server (Node.js), and libraries for Ruby, Python, PHP, Java, and .NET.
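One concrete detail worth seeing: the `Upload-Metadata` header sent at creation has a simple wire format defined by the spec — comma-separated `key base64value` pairs, keys ASCII without spaces or commas, values Base64-encoded. A sketch of an encoder (the function name is mine, not a library API):

```javascript
// Encode an object as a tus 1.0 Upload-Metadata header value:
// comma-separated "key base64(value)" pairs.
function encodeUploadMetadata(meta) {
  return Object.entries(meta)
    .map(([key, value]) => `${key} ${Buffer.from(value).toString('base64')}`)
    .join(',');
}
```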
For context on how TUS relates to the emerging IETF standard, see The New HTTP Resumable Upload Standard.
## Comparison table
| Dimension | Resumable.js | TUS Protocol |
|---|---|---|
| Protocol | No formal protocol — HTTP POST with conventions | Versioned specification (1.0.0) |
| Data transfer method | POST multipart form data per chunk | PATCH with byte offset |
| Chunk upload order | Parallel (configurable) | Sequential (offset-based) |
| Resume mechanism | GET per chunk (testChunks) | HEAD for current offset |
| Server requirement | Any HTTP server | TUS-compliant server/middleware |
| Client libraries | Resumable.js (browser) | tus-js-client, tus-ios, tus-android, tus-java, etc. |
| Server libraries | Write your own handler | tusd, tus-node-server, tus-ruby-server, etc. |
| Chunk identity | Client-generated (file metadata hash) | Server-assigned upload URL |
| Parallel uploads | Yes, simultaneousUploads config | Not in core spec (concatenation extension) |
| Progress granularity | Per-chunk and per-file | Per-file (byte offset) |
| Upload metadata | Custom query params / form fields | Upload-Metadata header |
| Ecosystem maturity | Stable library, community maintained | Active protocol + reference implementations |
| Interoperability | Coupled to your server implementation | Any TUS client ↔ any TUS server |
## Trade-offs that matter in practice

### Server control vs. protocol compliance
With Resumable.js, your server is just an HTTP endpoint. You decide how to store chunks, where to write them, how to name them, and when to reassemble. There's no protocol handshake to implement correctly. The flip side: every server you build is bespoke. If you have upload endpoints in three different services written by three different teams, each one implements its own chunk handling. See server receiver patterns for common approaches.
TUS gives you protocol interoperability. A TUS client on iOS can upload to the same tusd server as a TUS client in the browser. If you're building a platform where multiple client types need to upload — mobile apps, web apps, CLI tools, third-party integrations — TUS means you implement the server once and any TUS-compliant client works. The cost is that your server must conform to the protocol. You can't easily add custom authentication flows, non-standard metadata, or routing logic without working within (or around) the TUS spec.
### Parallel chunks vs. sequential offsets
Resumable.js uploads chunks in parallel by default. With simultaneousUploads: 3, three chunks fly concurrently. For large files on high-bandwidth connections, this saturates the pipe faster than sequential uploading. It also means if chunk 47 fails, the client retries chunk 47 while chunks 48 and 49 continue.
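The scheduling idea behind `simultaneousUploads` is a generic bounded-concurrency runner: start at most N tasks, and whenever one settles, pull the next. A sketch (a hypothetical helper, not the Resumable.js internals):

```javascript
// Run async tasks with at most `limit` in flight at once.
// `tasks` is an array of zero-argument functions returning promises.
async function runWithConcurrency(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    worker
  );
  await Promise.all(workers);
  return results; // in original task order
}
```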
TUS is inherently sequential. The client sends bytes starting at offset 0, then from wherever the server confirms, and so on. The concatenation extension allows parallel upload of separate "partial" uploads that the server joins afterward, but it's an optional extension not all servers support, and it adds complexity.
For details on how chunk sizing and parallelism interact, see the chunking guide.
### Resume semantics
Both approaches support resume, but the mechanics differ.
Resumable.js resume is chunk-granular. On reconnect, the client checks each chunk individually via GET requests. If you uploaded 95 of 100 chunks before the connection dropped, the client makes 100 GET requests (fast, small responses), finds that 95 return 200, and uploads the remaining 5. This works across browser sessions if the file identifier is deterministic.
TUS resume is byte-offset based. One HEAD request tells the client exactly where to resume. That's fewer round-trips, but if the connection dropped mid-PATCH, you might re-send some bytes that the server already received and discarded (because they arrived after the last confirmed offset). In practice, the overhead is small.
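The two resume computations can be made concrete with a pair of toy functions (names and signatures are mine, mirroring the bookkeeping described above, not any library API):

```javascript
// Resumable.js style: per-chunk resume. Given which chunk numbers the
// server already has, which chunks still need uploading?
function missingChunks(totalChunks, receivedChunkNumbers) {
  const have = new Set(receivedChunkNumbers);
  const missing = [];
  for (let i = 1; i <= totalChunks; i++) {
    if (!have.has(i)) missing.push(i);
  }
  return missing;
}

// tus style: one confirmed offset. Any bytes the client sent past the
// offset the server confirmed were discarded and must be re-sent.
function bytesToResend(bytesSentByClient, confirmedOffset) {
  return Math.max(0, bytesSentByClient - confirmedOffset);
}
```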
### Infrastructure expectations
TUS infrastructure means running (or hosting) a TUS-compliant server. tusd is the most common — a standalone Go binary that handles the protocol and can forward completed uploads to S3, GCS, or your application via hooks. You can also use tus-node-server as Express middleware. Either way, you're adding a component to your stack that understands TUS semantics.
Resumable.js infrastructure is whatever you already have. If your backend is Express, you add a route handler. If it's Django, you add a view. If it's a serverless function, you write the chunk logic there. The API methods reference documents the client-side interface; the server side is entirely your code.
### Error handling and retries
Resumable.js has built-in retry logic per chunk. Set maxChunkRetries and chunkRetryInterval and failed chunks automatically retry without restarting the entire upload. The client emits granular events — fileProgress, chunkingComplete, error — that let you build detailed UI feedback.
TUS clients (like tus-js-client) also support retry with configurable delays. The retry is at the upload level: on failure, the client queries the offset and resumes. Both approaches are reliable, but Resumable.js gives you more granular control over what happens when individual pieces fail.
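The per-chunk retry pattern is simple to sketch. A generic helper in the spirit of `maxChunkRetries` and `chunkRetryInterval` (parameter names reused for readability; this is not the Resumable.js implementation):

```javascript
// Retry an async operation up to maxChunkRetries times, waiting
// chunkRetryInterval milliseconds between attempts.
async function withRetries(fn, maxChunkRetries, chunkRetryInterval) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxChunkRetries) throw err; // retries exhausted
      await new Promise((resolve) => setTimeout(resolve, chunkRetryInterval));
    }
  }
}
```

Wrapping each chunk's upload in a helper like this is what lets a single flaky chunk recover without restarting the whole file.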
## When TUS fits
Choose TUS when:
- Multiple client platforms need to upload to the same backend. TUS has mature clients for iOS, Android, Java, Python, and browsers. One server handles all of them.
- You want a turnkey server. `tusd` handles the protocol, stores uploads, and notifies your app via hooks. Less custom code.
- Protocol standardization matters for your organization — compliance, documentation, or API contracts with third parties.
- You don't need parallel chunk uploads or can live with the concatenation extension's complexity.
## When Resumable.js fits
Choose Resumable.js when:
- Your server is already built and you want to add chunked uploads without adopting new middleware or server components.
- You need parallel chunk uploads for performance on large files.
- Custom server logic is essential — non-standard auth flows, chunk routing to different storage backends, custom metadata handling.
- You want minimal dependencies. Resumable.js is a single client-side library. The server side is your code, your way.
- Browser-only uploads are your use case. You don't need iOS/Android protocol interoperability.
## They're not mutually exclusive
Some teams use TUS for their public API (where third-party clients need standardized upload behavior) and Resumable.js for their own web app (where they want full control and parallel uploads). The storage backend can be the same — both approaches ultimately write bytes to disk or object storage.
Pick the one that fits your current architecture. You can always migrate later, because at the end of the day, both are just sending bytes over HTTP.
