For years, resumable uploads on the web have been a solved problem at the library level but an unsolved one at the protocol level. Every chunked upload implementation — Resumable.js included — invents its own conventions for splitting files, tracking offsets, and reassembling on the server. That works. It works well. But there's never been an HTTP-native way to say "I'm resuming an upload at byte 4,194,304" and have every proxy, CDN, and server along the path understand what that means.
The IETF draft for resumable uploads over HTTP aims to change that. This article breaks down the protocol mechanics, compares it to existing approaches, and explains what it means for developers using Resumable.js today.
## What the draft standard defines
The resumable upload draft (draft-ietf-httpbis-resumable-upload) introduces a set of HTTP semantics that let a client upload a resource incrementally, resume after interruption, and query the server for the current offset — all using standard HTTP methods and headers. No proprietary chunking logic. No custom query parameters.
The core idea: an upload becomes a first-class HTTP resource with its own URL, and the client sends data to that resource using PATCH with explicit byte offsets.
### Upload resource creation
The flow starts with the client sending a POST request to the upload endpoint. This request includes two key headers:
```http
POST /upload HTTP/1.1
Host: files.example.com
Upload-Draft-Interop-Version: 7
Upload-Complete: ?0
Content-Length: 0
Content-Type: application/octet-stream
Upload-Metadata: filename dGVzdC5tcDQ=,filetype dmlkZW8vbXA0
```
The Upload-Complete: ?0 structured field tells the server this isn't the full file; more data will follow. The server may first send a 104 Upload Resumption Supported interim response, then a final 201 Created. Critically, the response carries a Location header pointing to the newly created upload resource:
```http
HTTP/1.1 104 Upload Resumption Supported
Upload-Draft-Interop-Version: 7
```

```http
HTTP/1.1 201 Created
Location: /upload/a1b2c3d4e5f6
Upload-Offset: 0
```
That /upload/a1b2c3d4e5f6 URL is now the handle for this upload session. The client stores it and uses it for all subsequent operations.
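The Upload-Metadata pairs in the creation request are plain key tokens whose values are base64-encoded. A minimal Node.js sketch of building that header value (the helper name is illustrative, not from any library):

```javascript
// Sketch: build the Upload-Metadata header value for the creation
// request. Keys stay as plain tokens; values are base64-encoded.
// The helper name is illustrative, not part of the draft or a library.
function encodeUploadMetadata(pairs) {
  return Object.entries(pairs)
    .map(([key, value]) => `${key} ${Buffer.from(value, 'utf8').toString('base64')}`)
    .join(',');
}

const header = encodeUploadMetadata({ filename: 'test.mp4', filetype: 'video/mp4' });
console.log(header); // filename dGVzdC5tcDQ=,filetype dmlkZW8vbXA0
```

This reproduces the header shown in the example request above.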
This is fundamentally different from how Resumable.js works today. In Resumable.js, the client generates a unique identifier from file metadata (name, size, chunk index) and sends it as a parameter with each chunk. The server uses those parameters to figure out where to put the data. The new standard moves that coordination into HTTP semantics.
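For contrast, the client-side identifier in Resumable.js is derived from file metadata. A simplified sketch of its default scheme, file size plus a sanitized file name (the real library also considers relative paths):

```javascript
// Sketch of Resumable.js-style default identifier generation:
// the file's size joined to its name with non-alphanumeric
// characters stripped. Simplified for illustration.
function defaultIdentifier(file) {
  return file.size + '-' + file.name.replace(/[^0-9a-zA-Z_-]/gim, '');
}

console.log(defaultIdentifier({ name: 'test.mp4', size: 9961472 }));
// → 9961472-testmp4
```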
### Sending data with PATCH and Upload-Offset
Once the upload resource exists, the client sends file data using PATCH:
```http
PATCH /upload/a1b2c3d4e5f6 HTTP/1.1
Host: files.example.com
Upload-Offset: 0
Upload-Complete: ?0
Content-Length: 1048576
Content-Type: application/offset+octet-stream

[1 MB of binary data]
```
The Upload-Offset header tells the server exactly where this data begins in the overall file. The server appends the bytes starting at that offset and responds with the new offset:
```http
HTTP/1.1 204 No Content
Upload-Offset: 1048576
```
The client increments and sends the next chunk:
```http
PATCH /upload/a1b2c3d4e5f6 HTTP/1.1
Upload-Offset: 1048576
Upload-Complete: ?0
Content-Length: 1048576
Content-Type: application/offset+octet-stream

[next 1 MB]
```
When the client sends the final piece, it sets Upload-Complete: ?1:
```http
PATCH /upload/a1b2c3d4e5f6 HTTP/1.1
Upload-Offset: 9437184
Upload-Complete: ?1
Content-Length: 524288
Content-Type: application/offset+octet-stream

[last 512 KB]
```
The server knows the upload is done and can finalize processing.
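The offset bookkeeping above is simple enough to sketch. Assuming a 1 MB chunk size and the 9.5 MB file from the examples (the helper name is illustrative, not from the draft or any library):

```javascript
// Sketch: given the total file size, a chunk size, and the server's
// current Upload-Offset, compute the headers for the next PATCH.
// Upload-Complete is the structured-field boolean ?1 only on the
// final chunk. Illustrative helper, not from the draft or a library.
function nextPatch(totalSize, chunkSize, offset) {
  const length = Math.min(chunkSize, totalSize - offset);
  const isLast = offset + length >= totalSize;
  return {
    'Upload-Offset': String(offset),
    'Upload-Complete': isLast ? '?1' : '?0',
    'Content-Length': String(length),
  };
}

const CHUNK = 1048576;  // 1 MB
const TOTAL = 9961472;  // 9 MB + 512 KB, as in the examples above

console.log(nextPatch(TOTAL, CHUNK, 0));
// → { 'Upload-Offset': '0', 'Upload-Complete': '?0', 'Content-Length': '1048576' }
console.log(nextPatch(TOTAL, CHUNK, 9437184));
// → { 'Upload-Offset': '9437184', 'Upload-Complete': '?1', 'Content-Length': '524288' }
```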
### Querying offset with HEAD
If the connection drops mid-upload, the client needs to know how much the server actually received. A simple HEAD request to the upload resource returns the current state:
```http
HEAD /upload/a1b2c3d4e5f6 HTTP/1.1
Host: files.example.com
```

```http
HTTP/1.1 204 No Content
Upload-Offset: 3145728
Upload-Complete: ?0
```
The server received 3 MB. The client resumes from offset 3,145,728. No need to re-upload what's already there.
This is cleaner than the testChunks approach used by Resumable.js, where the client sends a GET request for each chunk to check if it exists on the server. The standard reduces resume negotiation to a single round-trip.
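Resuming then reduces to reading Upload-Offset from the HEAD response and continuing from there. A sketch with an illustrative helper name:

```javascript
// Sketch: given the Upload-Offset value from a HEAD response and the
// total file size, compute where to resume and how many bytes remain.
// Illustrative helper, not part of the draft or any library.
function resumePoint(totalSize, headOffset) {
  const offset = parseInt(headOffset, 10);
  return { offset, remaining: totalSize - offset, done: offset >= totalSize };
}

const state = resumePoint(9961472, '3145728'); // server reports 3 MB received
console.log(state); // → { offset: 3145728, remaining: 6815744, done: false }
```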
## Comparison: HTTP standard vs. Resumable.js vs. TUS
| Feature | HTTP Resumable Upload Standard | Resumable.js | TUS Protocol |
|---|---|---|---|
| Protocol layer | HTTP-native (IETF draft) | Application-level conventions | Application-level protocol |
| Resume mechanism | HEAD → Upload-Offset | GET testChunks per chunk | HEAD → Upload-Offset |
| Data method | PATCH with offset | POST multipart per chunk | PATCH with offset |
| Upload identity | Server-assigned URL | Client-generated identifier | Server-assigned URL |
| Server requirements | Draft-compliant HTTP server | Any server that handles POST | TUS-compliant server |
| Chunk ordering | Sequential (offset-based) | Parallel or sequential | Sequential (offset-based) |
| Browser support | Fetch API (partial) | Full (File API + XHR/Fetch) | Full (via tus-js-client) |
| Spec maturity | Draft (active development) | Stable library | 1.0 specification |
| Proxy/CDN awareness | Goal of the standard | Transparent to intermediaries | Requires end-to-end support |
## Why adoption is still early
The standard is technically sound, but the ecosystem isn't there yet.
**Browser gaps.** The 104 informational response handling is inconsistent across browsers. Some fetch implementations don't surface informational responses to JavaScript at all. The PATCH method with application/offset+octet-stream works fine, but the full creation flow with 104 requires careful feature detection.

**CDN and proxy behavior.** The whole point of an HTTP-native standard is that intermediaries can understand and optimize the traffic. In practice, most CDNs and reverse proxies today don't recognize the resumable upload headers. Some will strip unknown headers. Others will buffer PATCH bodies in ways that defeat streaming. Cloudflare, Fastly, and AWS CloudFront would all need explicit support before the standard delivers on its promise of intermediary awareness.

**Server library maturity.** As of early 2026, server implementations are experimental. There's no production-grade Node.js middleware, no battle-tested Go library, no Django or Rails plugin you can drop in. The reference implementations work for testing but aren't what you'd put behind a high-traffic upload endpoint.

**Incomplete upload cleanup.** The draft defines how to create and append to uploads, but the lifecycle management — when to garbage-collect abandoned uploads, how to set expiration policies — is left to the server. That's reasonable for a protocol spec, but it means every implementation has to solve the same operational problems independently.
## What this means for Resumable.js users
If you're running Resumable.js in production today, nothing changes for you right now.
Resumable.js uses a model that's been proven across millions of uploads: the client slices the file using the File API, sends each chunk as a standard multipart POST, and the server reassembles based on chunk identifiers. That model works with any HTTP server, any proxy, any CDN. You don't need special server middleware. You don't need protocol-aware intermediaries. You configure chunkSize, point target at your endpoint, and handle the chunks however your backend requires — see the configuration reference for the full set of options.
The library's chunking model gives you parallel uploads, per-chunk retry with backoff, and chunk-level resume, capabilities the sequential offset model in the new standard doesn't natively support. If a single chunk fails, Resumable.js retries just that chunk. With the HTTP standard, you'd re-send from the last successful offset, which could mean re-transmitting data that a parallel approach wouldn't.
The retry and resume logic in Resumable.js is mature and configurable. You can tune maxChunkRetries, set chunkRetryInterval, and handle resume across browser sessions by persisting the file identifier. The new standard will eventually offer a cleaner resume handshake, but Resumable.js gives you that capability today with production-grade reliability.
## Looking ahead
The HTTP resumable upload standard is worth watching. When browser support matures and CDNs add native awareness, it could simplify the upload stack significantly — particularly for simple sequential uploads where you don't need parallel chunking or custom chunk routing.
For complex upload pipelines — large files, parallel chunks, custom server logic, progress tracking per chunk — libraries like Resumable.js will continue to provide capabilities that a protocol-level standard intentionally leaves out of scope. The standard defines the wire format. The library handles the developer experience.
Keep an eye on the draft. Build with what works today.
