Serverless upload receivers make sense when your traffic is bursty. A video platform that gets 50 uploads at 2 PM and zero at 3 AM shouldn't pay for always-on servers during the quiet hours. AWS Lambda scales to zero, handles bursts automatically, and pairs naturally with S3's multipart upload API. Resumable.js on the client provides chunking, retry, and resume. Together they form a serverless upload pipeline that handles files from megabytes to gigabytes.
This guide covers two architectures: Lambda as a chunk proxy (simpler, more limitations) and presigned URLs for direct-to-S3 uploads (more setup, fewer constraints). Both use Resumable.js on the client and S3 multipart uploads for assembly.
Architecture: Lambda as chunk proxy
The straightforward approach:
- Resumable.js slices the file and POSTs each chunk to an API Gateway endpoint.
- API Gateway forwards the request to a Lambda function.
- Lambda takes the chunk bytes, calls S3's UploadPart, and stores the part.
- When all chunks arrive, Lambda calls CompleteMultipartUpload.
This works for modest chunk sizes, subject to two ceilings discussed below: API Gateway's 10 MB payload limit and Lambda's 6 MB invocation payload. For larger chunks, you need presigned URLs (covered below).
Lambda chunk receiver
Here's a Node.js Lambda function that handles chunk uploads and resume checks:
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
  ListPartsCommand
} from '@aws-sdk/client-s3';
import { DynamoDBClient, GetItemCommand, PutItemCommand, DeleteItemCommand } from '@aws-sdk/client-dynamodb';
const s3 = new S3Client({ region: process.env.AWS_REGION });
const dynamo = new DynamoDBClient({ region: process.env.AWS_REGION });
const BUCKET = process.env.UPLOAD_BUCKET;
const TABLE = process.env.STATE_TABLE;
export async function handler(event) {
  const params = event.queryStringParameters || {};
  const resumableId = params.resumableIdentifier;
  const chunkNumber = parseInt(params.resumableChunkNumber, 10);
  const totalChunks = parseInt(params.resumableTotalChunks, 10);
  const filename = params.resumableFilename;

  // GET = testChunks (resume check)
  if (event.httpMethod === 'GET') {
    return handleTestChunk(resumableId, chunkNumber);
  }

  // POST = chunk upload
  if (event.httpMethod === 'POST') {
    return handleChunkUpload(event, resumableId, chunkNumber, totalChunks, filename);
  }

  return { statusCode: 405, body: 'Method Not Allowed' };
}
Multipart upload management
Each file needs an S3 multipart upload. We store the uploadId in DynamoDB, keyed by the Resumable.js identifier:
async function getOrCreateUpload(resumableId, filename) {
  const key = `uploads/${resumableId}/${filename}`;

  // Check DynamoDB for existing upload
  const existing = await dynamo.send(new GetItemCommand({
    TableName: TABLE,
    Key: { pk: { S: `upload#${resumableId}` } }
  }));

  if (existing.Item) {
    return {
      uploadId: existing.Item.uploadId.S,
      key: existing.Item.s3Key.S
    };
  }

  // Create new multipart upload
  const multipart = await s3.send(new CreateMultipartUploadCommand({
    Bucket: BUCKET,
    Key: key,
    ContentType: 'application/octet-stream'
  }));

  // Store state
  await dynamo.send(new PutItemCommand({
    TableName: TABLE,
    Item: {
      pk: { S: `upload#${resumableId}` },
      uploadId: { S: multipart.UploadId },
      s3Key: { S: key },
      ttl: { N: String(Math.floor(Date.now() / 1000) + 86400) }
    }
  }));

  return { uploadId: multipart.UploadId, key };
}
The DynamoDB TTL field automatically cleans up state for abandoned uploads after 24 hours. You should also set up an S3 lifecycle rule to abort incomplete multipart uploads — otherwise orphaned parts accumulate and you pay for storage you're not using.
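That lifecycle rule can be set with the AWS CLI; the bucket name and key prefix below are placeholders for your own values:

```shell
# Abort any multipart upload still incomplete one day after it started.
# Bucket name and key prefix are placeholders; match them to your setup.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-uploads \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "abort-stale-multipart-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "uploads/" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }]
  }'
```

One day matches the DynamoDB TTL above, so S3 parts and their tracking records expire together.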
Uploading a part
async function handleChunkUpload(event, resumableId, chunkNumber, totalChunks, filename) {
  const { uploadId, key } = await getOrCreateUpload(resumableId, filename);

  // Decode body — API Gateway sends base64 for binary
  const body = Buffer.from(event.body, event.isBase64Encoded ? 'base64' : 'utf8');

  // Upload part to S3
  const partResult = await s3.send(new UploadPartCommand({
    Bucket: BUCKET,
    Key: key,
    UploadId: uploadId,
    PartNumber: chunkNumber,
    Body: body
  }));

  // Record the part ETag
  await dynamo.send(new PutItemCommand({
    TableName: TABLE,
    Item: {
      pk: { S: `part#${resumableId}#${chunkNumber}` },
      etag: { S: partResult.ETag },
      partNumber: { N: String(chunkNumber) },
      ttl: { N: String(Math.floor(Date.now() / 1000) + 86400) }
    }
  }));

  // Check completion
  const uploadedCount = await countUploadedParts(resumableId, totalChunks);
  if (uploadedCount === totalChunks) {
    await completeUpload(resumableId, key, uploadId, totalChunks);
  }

  return { statusCode: 200, body: 'OK' };
}
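The countUploadedParts helper referenced above isn't shown; a minimal sketch, assuming the same dynamo client, GetItemCommand, and TABLE defined at the top of the handler module:

```javascript
// Counts how many part records exist for this upload by probing each
// part key in turn. Relies on the dynamo client, GetItemCommand, and
// TABLE declared at the top of the handler module.
async function countUploadedParts(resumableId, totalChunks) {
  let count = 0;
  for (let i = 1; i <= totalChunks; i++) {
    const result = await dynamo.send(new GetItemCommand({
      TableName: TABLE,
      Key: { pk: { S: `part#${resumableId}#${i}` } }
    }));
    if (result.Item) count++;
  }
  return count;
}
```

One GetItem per part is simple but chatty for large files; a single Query would need a different key design (a composite sort key rather than one flat pk). Note also that with simultaneousUploads above 1, two invocations can both observe the final count and race to complete the upload; a conditional write on the upload record is one way to guard against that.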
Resume check
async function handleTestChunk(resumableId, chunkNumber) {
  const result = await dynamo.send(new GetItemCommand({
    TableName: TABLE,
    Key: { pk: { S: `part#${resumableId}#${chunkNumber}` } }
  }));

  if (result.Item) {
    // 200 tells Resumable.js the chunk already exists, so it skips it
    return { statusCode: 200, body: 'Found' };
  }

  // Any non-200 status triggers an upload; 204 must not carry a body
  return { statusCode: 204, body: '' };
}
Completing the multipart upload
async function completeUpload(resumableId, key, uploadId, totalChunks) {
  const parts = [];
  for (let i = 1; i <= totalChunks; i++) {
    const part = await dynamo.send(new GetItemCommand({
      TableName: TABLE,
      Key: { pk: { S: `part#${resumableId}#${i}` } }
    }));
    parts.push({
      PartNumber: parseInt(part.Item.partNumber.N, 10),
      ETag: part.Item.etag.S
    });
  }

  await s3.send(new CompleteMultipartUploadCommand({
    Bucket: BUCKET,
    Key: key,
    UploadId: uploadId,
    MultipartUpload: { Parts: parts }
  }));

  // Clean up the upload record; the part records expire via their TTL
  await dynamo.send(new DeleteItemCommand({
    TableName: TABLE,
    Key: { pk: { S: `upload#${resumableId}` } }
  }));
}
Part sizing: the 5 MB constraint
S3 requires every multipart upload part (except the last) to be at least 5 MB. This means your Resumable.js chunkSize should be 5 MB or larger:
const r = new Resumable({
  target: 'https://api.example.com/upload',
  chunkSize: 5 * 1024 * 1024, // 5 MB minimum for S3
  forceChunkSize: true,
  simultaneousUploads: 3,
  testChunks: true
});
Set forceChunkSize: true to ensure consistent part sizes. Without it, the last chunk will naturally be smaller (which is fine — S3 allows the last part to be under 5 MB), but intermediate chunks might vary if the file size doesn't divide evenly.
S3 also caps multipart uploads at 10,000 parts. With 5 MB chunks, that's a maximum file size of ~48.8 GB. For larger files, increase chunkSize. With 100 MB chunks, you can upload up to ~976 GB. See the configuration reference for all chunk-related options and the S3 and object storage guide for deeper coverage of S3 multipart constraints.
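The arithmetic behind those figures, as a quick dependency-free sanity check:

```javascript
// S3 caps a multipart upload at 10,000 parts, so the chunk size
// directly bounds the maximum file size.
const MAX_PARTS = 10000;

function maxFileSizeGB(chunkSizeBytes) {
  return (chunkSizeBytes * MAX_PARTS) / (1024 ** 3);
}

console.log(maxFileSizeGB(5 * 1024 * 1024));   // 48.828125  (~48.8 GB)
console.log(maxFileSizeGB(100 * 1024 * 1024)); // 976.5625   (~976 GB)
```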
API Gateway payload limits
API Gateway has a hard 10 MB payload limit for Lambda proxy integrations, and Lambda adds a tighter one: synchronous invocation payloads are capped at 6 MB. Because API Gateway base64-encodes binary bodies before invoking the function, a chunk grows by roughly a third in transit, so a 5 MB chunk arrives as about 6.7 MB of base64 and can already exceed the Lambda quota. The proxy approach therefore has almost no headroom above S3's 5 MB minimum part size, and larger chunks won't fit at all.
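To spell out the base64 arithmetic (API Gateway base64-encodes binary request bodies before invoking Lambda, and Lambda caps synchronous invocation payloads at 6 MB):

```javascript
// Base64 encodes every 3 input bytes as 4 output characters (with padding),
// so an encoded body occupies ceil(n / 3) * 4 bytes.
function base64EncodedSize(rawBytes) {
  return Math.ceil(rawBytes / 3) * 4;
}

const chunk = 5 * 1024 * 1024;       // 5 MB: the S3 minimum part size
const lambdaQuota = 6 * 1024 * 1024; // Lambda's sync invocation payload cap

console.log(base64EncodedSize(chunk));               // 6990508 (~6.67 MB)
console.log(base64EncodedSize(chunk) > lambdaQuota); // true
```

This is why the proxy route is tight even well below API Gateway's 10 MB ceiling, and why presigned URLs are the safer default for S3-minimum-sized parts.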
The solution: presigned URLs.
Presigned URL approach
Instead of proxying chunk data through Lambda, have Lambda generate presigned URLs and let the client upload directly to S3:
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

async function generatePresignedUrl(resumableId, chunkNumber, filename) {
  const { uploadId, key } = await getOrCreateUpload(resumableId, filename);

  const command = new UploadPartCommand({
    Bucket: BUCKET,
    Key: key,
    UploadId: uploadId,
    PartNumber: chunkNumber
  });

  const url = await getSignedUrl(s3, command, { expiresIn: 3600 });

  return {
    statusCode: 200,
    body: JSON.stringify({ url, uploadId, key })
  };
}
On the client side, you'd use Resumable.js's preprocess callback or a custom upload function to first request the presigned URL from your Lambda, then PUT the chunk data directly to S3. This bypasses API Gateway's size limit entirely — S3 accepts parts up to 5 GB.
The trade-off is more client-side complexity and CORS configuration on the S3 bucket. But for large files, it's the right approach.
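Here's one way that flow can look on the client, as a minimal sketch: the /presign path, its query parameter names, and the response shape are assumptions modeled on the generatePresignedUrl Lambda above, and fetchImpl is injectable purely so the flow is easy to exercise without a network.

```javascript
// Sketch of the direct-to-S3 path: fetch a presigned URL from the Lambda,
// then PUT the chunk bytes straight to S3, bypassing API Gateway entirely.
async function uploadChunkDirect(chunk, { resumableId, chunkNumber, filename }, fetchImpl = fetch) {
  // 1. Ask the Lambda for a presigned UploadPart URL for this chunk
  const presignRes = await fetchImpl(
    'https://api.example.com/presign' +
    `?resumableIdentifier=${resumableId}` +
    `&resumableChunkNumber=${chunkNumber}` +
    `&resumableFilename=${encodeURIComponent(filename)}`
  );
  const { url } = await presignRes.json();

  // 2. PUT the raw chunk bytes directly to S3
  const putRes = await fetchImpl(url, { method: 'PUT', body: chunk });
  if (!putRes.ok) throw new Error(`Part ${chunkNumber} failed: ${putRes.status}`);

  // S3 returns the part's ETag, needed later for CompleteMultipartUpload
  return putRes.headers.get('ETag');
}
```

For the browser to read that ETag header, the bucket's CORS configuration must list ETag in ExposeHeaders; without it, completion has to be driven server-side instead, for example via ListParts.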
Transfer acceleration
S3 Transfer Acceleration uses CloudFront edge locations to speed up uploads across long distances. Enable it on the bucket:
aws s3api put-bucket-accelerate-configuration \
--bucket my-uploads \
--accelerate-configuration Status=Enabled
Then use the acceleration endpoint in your Resumable.js config or presigned URLs:
const s3Accelerated = new S3Client({
  region: 'us-east-1',
  useAccelerateEndpoint: true
});
This routes upload traffic through the nearest CloudFront edge location, which can significantly reduce latency for users far from your S3 region. It works with both the Lambda proxy approach and presigned URLs.
For a comparison of how this architecture stacks up against edge-native approaches, see the Cloudflare Workers + R2 example — which eliminates the need for Transfer Acceleration by running the receiver at the edge natively.
Client configuration
The client-side setup is minimal. For the Lambda proxy approach:
const r = new Resumable({
  target: 'https://api.example.com/upload',
  chunkSize: 5 * 1024 * 1024,
  simultaneousUploads: 3,
  testChunks: true,
  maxChunkRetries: 3,
  chunkRetryInterval: 2000
});

r.assignBrowse(document.getElementById('file-input'));
r.on('fileAdded', () => r.upload());
r.on('fileSuccess', (file) => console.log('Done:', file.fileName));
The basic uploader example covers the fundamentals. The server receivers guide has patterns for the server side that apply to Lambda handlers.
DynamoDB table design
A single-table design works well:
| pk | Attributes |
|---|---|
| upload#<resumableId> | uploadId, s3Key, ttl |
| part#<resumableId>#<chunkNumber> | etag, partNumber, ttl |
Enable TTL on the ttl attribute to auto-clean abandoned uploads. This is your defense against state accumulation — without it, DynamoDB fills up with records for uploads that were never completed.
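The two key shapes appear in several functions above; a pair of tiny helpers (hypothetical, not part of the original handler) keeps the upload handler, resume check, and completion step from drifting apart:

```javascript
// Hypothetical key builders centralizing the single-table key scheme
const uploadKey = (resumableId) => `upload#${resumableId}`;
const partKey = (resumableId, chunkNumber) => `part#${resumableId}#${chunkNumber}`;

console.log(uploadKey('3f9a'));   // upload#3f9a
console.log(partKey('3f9a', 12)); // part#3f9a#12
```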
What you get
A serverless upload pipeline that scales from zero to thousands of concurrent uploads without provisioning servers. Lambda handles the coordination, S3 handles the storage, DynamoDB handles the state, and Resumable.js handles the client experience. You pay only for what you use: Lambda invocations, S3 storage, and DynamoDB read/write capacity.
For bursty upload workloads — where traffic spikes for hours then drops to nothing — this architecture is hard to beat on cost efficiency.
