Running your upload receiver at the edge means every user on the planet hits a server that's geographically close. No single-region bottleneck. No cross-ocean latency on every chunk. Cloudflare Workers give you compute in 300+ locations, and R2 gives you S3-compatible object storage with zero egress fees. Together with Resumable.js on the client, you get a chunked upload pipeline that's fast, resumable, and globally distributed.
This guide walks through the full setup: client configuration, Worker chunk handler, R2 multipart upload assembly, and resume support.
Architecture overview
The flow:
- Resumable.js in the browser slices the file and sends each chunk as a POST to your Worker endpoint.
- Cloudflare Worker receives each chunk, initiates (or continues) an R2 multipart upload, and stores the part.
- R2 holds the parts. When all chunks arrive, the Worker completes the multipart upload and the file is assembled.
- Resume: if the upload is interrupted, Resumable.js sends GET requests (testChunks) for each chunk. The Worker checks R2 (or KV) to determine which parts already exist.
The client doesn't know or care that it's talking to an edge function. It's just POSTing chunks to a URL.
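Every request in this flow carries Resumable.js's standard query parameters, so the Worker can identify the file and chunk without any server-side session. As a sketch, a small parsing helper (`parseResumableParams` is a hypothetical name; the parameter names are Resumable.js defaults):

```typescript
// Hypothetical helper: extracts the standard Resumable.js query parameters
// from a request URL. The names match the Resumable.js defaults.
interface ResumableParams {
  identifier: string;
  filename: string;
  chunkNumber: number;
  totalChunks: number;
  totalSize: number;
}

function parseResumableParams(url: URL): ResumableParams {
  const q = url.searchParams;
  return {
    identifier: q.get('resumableIdentifier') ?? '',
    filename: q.get('resumableFilename') ?? 'unknown',
    chunkNumber: parseInt(q.get('resumableChunkNumber') ?? '1', 10),
    totalChunks: parseInt(q.get('resumableTotalChunks') ?? '1', 10),
    totalSize: parseInt(q.get('resumableTotalSize') ?? '0', 10),
  };
}
```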
Client-side setup
Standard Resumable.js configuration, pointed at your Worker's route:
const r = new Resumable({
  target: 'https://uploads.example.com/upload',
  chunkSize: 5 * 1024 * 1024, // 5 MB — R2 minimum part size
  simultaneousUploads: 3,
  testChunks: true,
  maxChunkRetries: 3,
  chunkRetryInterval: 2000,
  headers: {
    'Authorization': `Bearer ${getUploadToken()}`
  }
});

r.assignBrowse(document.getElementById('file-input'));
r.assignDrop(document.getElementById('drop-zone'));

r.on('fileAdded', (file) => {
  console.log(`Uploading: ${file.fileName} (${file.size} bytes)`);
  r.upload();
});

r.on('fileSuccess', (file) => {
  console.log(`Complete: ${file.fileName}`);
});

r.on('fileError', (file, message) => {
  console.error(`Failed: ${file.fileName}`, message);
});
The chunkSize is set to 5 MB because R2 (like S3) requires multipart upload parts to be at least 5 MB, except for the last part. This alignment is important — see the configuration reference for all available options and the basic uploader example for a minimal working setup.
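Chunk size also bounds the maximum file size: R2 follows S3's multipart limits, which cap an upload at 10,000 parts, so chunkSize times 10,000 is the largest file your pipeline can accept. A quick sanity check (`maxFileSize` is a hypothetical helper):

```typescript
// R2 multipart constraints (matching S3): every part except the last must
// be at least 5 MiB, and an upload may contain at most 10,000 parts.
const MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MiB
const MAX_PARTS = 10_000;

// Hypothetical helper: the largest file a given chunkSize can handle.
function maxFileSize(chunkSize: number): number {
  if (chunkSize < MIN_PART_SIZE) {
    throw new Error(`chunkSize must be at least ${MIN_PART_SIZE} bytes`);
  }
  return chunkSize * MAX_PARTS;
}
```

With the 5 MB chunkSize above, uploads top out around 48.8 GiB; raise chunkSize if you expect larger files.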
You'll also want to configure CORS on your Worker. See the CORS guide for the headers Resumable.js expects.
Worker: handling chunk uploads
The Worker needs to handle two HTTP methods: GET (for testChunks resume checks) and POST (for chunk data). Here's the core structure:
// src/worker.ts
export interface Env {
  UPLOADS: R2Bucket;
  UPLOAD_STATE: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // CORS preflight
    if (request.method === 'OPTIONS') {
      return handleCors();
    }

    if (url.pathname === '/upload') {
      if (request.method === 'GET') {
        return handleTestChunk(url, env);
      }
      if (request.method === 'POST') {
        return handleChunkUpload(request, url, env);
      }
    }

    return new Response('Not Found', { status: 404 });
  }
};
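The handleCors and corsResponse helpers referenced here are not shown above; a minimal sketch, assuming a permissive policy (in production, echo back a specific allowed origin instead of `*`):

```typescript
// Minimal CORS helpers for the Worker above. Adjust the allowed headers to
// match whatever your client actually sends (Authorization, in this guide).
const CORS_HEADERS: Record<string, string> = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Authorization, Content-Type',
  'Access-Control-Max-Age': '86400',
};

// Answers an OPTIONS preflight with the CORS headers and no body.
function handleCors(): Response {
  return new Response(null, { status: 204, headers: CORS_HEADERS });
}

// Copies an existing response and attaches the CORS headers.
function corsResponse(response: Response): Response {
  const wrapped = new Response(response.body, response);
  for (const [name, value] of Object.entries(CORS_HEADERS)) {
    wrapped.headers.set(name, value);
  }
  return wrapped;
}
```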
Initiating the multipart upload
When the first chunk arrives for a new file, the Worker needs to create an R2 multipart upload. Subsequent chunks join that same upload. We store the uploadId in KV, keyed by the Resumable.js file identifier:
async function getOrCreateMultipartUpload(
  env: Env,
  resumableId: string,
  filename: string
): Promise<{ uploadId: string; key: string }> {
  const stateKey = `upload:${resumableId}`;
  const existing = await env.UPLOAD_STATE.get(stateKey, 'json') as {
    uploadId: string;
    key: string;
  } | null;
  if (existing) {
    return existing;
  }

  const key = `uploads/${resumableId}/${filename}`;
  const multipart = await env.UPLOADS.createMultipartUpload(key);
  const state = { uploadId: multipart.uploadId, key };
  await env.UPLOAD_STATE.put(stateKey, JSON.stringify(state), {
    expirationTtl: 86400 // 24-hour TTL for abandoned uploads
  });
  return state;
}
Uploading a part
Each chunk POST becomes an R2 uploadPart call. The Resumable.js chunk number maps directly to the R2 part number:
async function handleChunkUpload(
  request: Request,
  url: URL,
  env: Env
): Promise<Response> {
  const formData = await request.formData();
  const file = formData.get('file');
  if (!(file instanceof File)) {
    return corsResponse(new Response('Missing chunk data', { status: 400 }));
  }
  const resumableId = url.searchParams.get('resumableIdentifier') || '';
  const chunkNumber = parseInt(url.searchParams.get('resumableChunkNumber') || '1', 10);
  const totalChunks = parseInt(url.searchParams.get('resumableTotalChunks') || '1', 10);
  const filename = url.searchParams.get('resumableFilename') || 'unknown';

  const { uploadId, key } = await getOrCreateMultipartUpload(
    env, resumableId, filename
  );

  // Upload this part to R2
  const multipart = env.UPLOADS.resumeMultipartUpload(key, uploadId);
  const partBytes = await file.arrayBuffer();
  const uploadedPart = await multipart.uploadPart(chunkNumber, partBytes);

  // Store part ETag for later completion
  const partKey = `part:${resumableId}:${chunkNumber}`;
  await env.UPLOAD_STATE.put(partKey, JSON.stringify({
    partNumber: uploadedPart.partNumber,
    etag: uploadedPart.etag
  }), { expirationTtl: 86400 });

  // Check if all parts are uploaded
  if (await allPartsUploaded(env, resumableId, totalChunks)) {
    await completeUpload(env, resumableId, key, uploadId, totalChunks);
  }

  return corsResponse(new Response('OK', { status: 200 }));
}
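The allPartsUploaded function called above is not defined earlier; a sketch, assuming parts are tracked under the `part:<id>:<n>` KV keys used in this guide. It's typed against a minimal getter interface so the logic can run outside Workers; the real Env's UPLOAD_STATE namespace satisfies it:

```typescript
// A part counts as uploaded once its KV record exists.
interface PartStateStore {
  get(key: string): Promise<string | null>;
}

async function allPartsUploaded(
  env: { UPLOAD_STATE: PartStateStore },
  resumableId: string,
  totalChunks: number
): Promise<boolean> {
  // Read every part record in parallel; all of them must exist.
  const reads: Promise<string | null>[] = [];
  for (let i = 1; i <= totalChunks; i++) {
    reads.push(env.UPLOAD_STATE.get(`part:${resumableId}:${i}`));
  }
  const results = await Promise.all(reads);
  return results.every((record) => record !== null);
}
```

Because KV is eventually consistent, this check can briefly miss a part written from another location; in practice a single client's chunks land in the same location, where reads see that location's own writes. For a strict guarantee, track part state in a Durable Object instead.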
Completing the multipart upload
When all chunks are in, gather the part metadata and finalize:
async function completeUpload(
  env: Env,
  resumableId: string,
  key: string,
  uploadId: string,
  totalChunks: number
): Promise<void> {
  const parts: R2UploadedPart[] = [];
  for (let i = 1; i <= totalChunks; i++) {
    const partData = await env.UPLOAD_STATE.get(
      `part:${resumableId}:${i}`, 'json'
    ) as { partNumber: number; etag: string };
    parts.push(partData);
  }

  const multipart = env.UPLOADS.resumeMultipartUpload(key, uploadId);
  await multipart.complete(parts);

  // Clean up KV state
  await env.UPLOAD_STATE.delete(`upload:${resumableId}`);
  for (let i = 1; i <= totalChunks; i++) {
    await env.UPLOAD_STATE.delete(`part:${resumableId}:${i}`);
  }
}
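Note that completeUpload trusts every KV read to return a part record. S3 requires the parts list in ascending part-number order for CompleteMultipartUpload, and R2 aims for S3 compatibility, so a defensive guard before calling complete() is cheap insurance (sortAndValidateParts is a hypothetical name):

```typescript
// Hypothetical guard for completeUpload: returns the parts sorted by part
// number, or throws if any part record is missing or a number is skipped.
interface UploadedPart {
  partNumber: number;
  etag: string;
}

function sortAndValidateParts(
  parts: Array<UploadedPart | null>,
  totalChunks: number
): UploadedPart[] {
  const present = parts.filter((p): p is UploadedPart => p !== null);
  present.sort((a, b) => a.partNumber - b.partNumber);
  for (let i = 0; i < totalChunks; i++) {
    if (!present[i] || present[i].partNumber !== i + 1) {
      throw new Error(`Missing part ${i + 1} of ${totalChunks}`);
    }
  }
  return present;
}
```

Throwing here leaves the multipart upload open, so the client's retry logic (maxChunkRetries) can re-send the missing part instead of R2 rejecting a malformed completion.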
Handling testChunks for resume
When Resumable.js sends a GET request to check if a chunk exists, the Worker looks up the part in KV:
async function handleTestChunk(
  url: URL,
  env: Env
): Promise<Response> {
  const resumableId = url.searchParams.get('resumableIdentifier') || '';
  const chunkNumber = url.searchParams.get('resumableChunkNumber') || '1';
  const partKey = `part:${resumableId}:${chunkNumber}`;

  const exists = await env.UPLOAD_STATE.get(partKey);
  if (exists) {
    // Chunk already uploaded — skip it
    return corsResponse(new Response('Found', { status: 200 }));
  }

  // Chunk not found; a 204 must not carry a body
  return corsResponse(new Response(null, { status: 204 }));
}
Resumable.js interprets a 200 response to the test request as "chunk exists, skip it" and any other status as "chunk needed." Prefer 204 over 404 for missing chunks: 204 signals "no content" rather than an error, and it stays clear of the 4xx codes in Resumable.js's default permanentErrors list (which includes 404). Both behaviors are configurable via the permanentErrors and testChunks options.
Storing chunk state: KV vs. R2 metadata
The example above uses Workers KV to track upload and part state. KV is fast for lookups and supports TTL-based expiration, which automatically cleans up abandoned uploads. The trade-off is KV's eventual consistency: a part written in one location may not be immediately visible from another. For upload state tracking this is rarely a problem, because a client's chunks for a given session almost always land in the same Cloudflare location, and KV reads in a location see that location's own writes.
An alternative is to query R2 directly for existing parts. The Workers binding doesn't expose a list-parts call, but R2's S3-compatible API supports ListParts on an in-progress multipart upload. That check is strongly consistent, but it adds latency to every testChunks request; for files with many chunks, KV is faster.
Wrangler configuration
Your wrangler.toml needs the R2 bucket and KV namespace bindings:
name = "upload-worker"
main = "src/worker.ts"
compatibility_date = "2026-03-01"
[[r2_buckets]]
binding = "UPLOADS"
bucket_name = "my-uploads"
[[kv_namespaces]]
binding = "UPLOAD_STATE"
id = "abc123def456"
Worker limitations to watch
Request body size. Workers request bodies are capped by your Cloudflare plan's upload limit: 100 MB on the Free and Pro plans, more on Business and Enterprise. Your Resumable.js chunkSize must stay under this limit. The 5 MB chunk size chosen for R2 compatibility is well within bounds.
CPU time. Paid-plan Workers get up to 30 seconds of CPU time per invocation (the free plan allows far less). Parsing a multipart form body and uploading a 5 MB part to R2 typically takes well under 1 second of CPU time. You're fine unless you're doing heavy computation on the chunk data.
Subrequest limits. Each Worker invocation can make up to 1,000 subrequests on the paid plan (far fewer on the free plan), counting KV reads, R2 operations, and so on. A single chunk upload makes 2-3 subrequests, but the completion step for a file with 1,000 chunks would hit the limit. For very large files, consider batching KV reads or checking parts through R2's S3-compatible ListParts API instead.
KV write limits. KV allows up to 1,000 writes per second per namespace. If you're handling many concurrent uploads, monitor this. For higher throughput, shard across multiple KV namespaces or use Durable Objects for per-upload state.
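If you do shard across namespaces, the routing can be as simple as hashing the Resumable.js file identifier so every chunk of one upload lands in the same shard. A sketch (shardIndex is a hypothetical helper using FNV-1a):

```typescript
// Hypothetical sketch: deterministically pick one of several KV namespace
// bindings by hashing the upload identifier with 32-bit FNV-1a, so all
// chunks of a given upload use the same shard.
function shardIndex(resumableId: string, shardCount: number): number {
  let hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < resumableId.length; i++) {
    hash ^= resumableId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193); // FNV-1a 32-bit prime
  }
  return (hash >>> 0) % shardCount;
}
```

The Worker would then hold bindings like UPLOAD_STATE_0 through UPLOAD_STATE_N and index into them with this value.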
Storage considerations
R2 is S3-compatible, so the patterns from the S3 and object storage guide apply here — multipart upload lifecycle, incomplete upload cleanup, and access control. The key difference is zero egress fees: serving uploaded files from R2 costs nothing for bandwidth, which matters if your upload pipeline also serves the files back to users.
Set up lifecycle rules in R2 to automatically abort incomplete multipart uploads after a reasonable period (24-48 hours). This prevents orphaned parts from accumulating storage costs.
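R2 lifecycle rules can be managed in the dashboard or through the S3-compatible API. As a sketch, in standard S3 lifecycle JSON, a rule covering the uploads/ prefix used earlier with a 2-day grace period might look like:

```json
{
  "Rules": [
    {
      "ID": "abort-stale-multipart-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "uploads/" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 2 }
    }
  ]
}
```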
What you get
A globally distributed upload receiver that runs in 300+ locations, handles chunked uploads with full resume support, and stores files in zero-egress object storage. No origin server to maintain. No region to choose. The Worker handles the protocol, R2 handles the bytes, and Resumable.js handles the client experience.
