Choosing object storage for an upload pipeline isn't a theoretical exercise. It directly affects your per-upload cost, your multipart assembly behavior, how you clean up failed uploads, and how much you pay when users download what they uploaded. For large file workloads — video, medical imaging, CAD files, datasets — the choice between S3, Cloudflare R2, and Backblaze B2 has real operational and financial consequences.
This comparison focuses on what matters for chunked upload pipelines powered by Resumable.js: multipart upload mechanics, size constraints, egress pricing, and the operational details that show up at scale.
Amazon S3
S3 is the default. Not because it's cheapest — it isn't — but because it's the most mature object storage service, and virtually everything in the AWS ecosystem integrates with it natively.
Multipart upload mechanics
S3's multipart upload API is the reference implementation that R2 and B2 both emulate. The flow: CreateMultipartUpload → UploadPart (repeated) → CompleteMultipartUpload. Each part gets an ETag, and you provide the full part list with ETags when completing.
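The completion payload can be assembled from whatever your server recorded for each part. A minimal sketch, assuming a hypothetical `uploadedParts` shape (the SDK call itself is omitted):

```javascript
// Sketch: building the Parts list for CompleteMultipartUpload.
// `uploadedParts` is a hypothetical structure your server keeps per
// UploadPart response; only the assembly logic is shown.
function buildCompletedPartList(uploadedParts) {
  return uploadedParts
    .map(({ partNumber, etag }) => ({ PartNumber: partNumber, ETag: etag }))
    .sort((a, b) => a.PartNumber - b.PartNumber); // parts must be in ascending order
}

const parts = buildCompletedPartList([
  { partNumber: 2, etag: '"b2"' },
  { partNumber: 1, etag: '"a1"' },
]);
// Pass as { MultipartUpload: { Parts: parts } } when completing the upload.
```

Keep the ETags exactly as the provider returned them, quotes included; completion fails if they don't match.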
Constraints:
- Minimum part size: 5 MB (except the last part)
- Maximum part size: 5 GB
- Maximum parts per upload: 10,000
- Maximum object size: 5 TB
With Resumable.js, you'll typically set chunkSize to 5 MB or larger. At 5 MB per chunk, the 10,000-part limit gives you a maximum file size of ~48.8 GB. For larger files, increase chunkSize — at 50 MB per chunk, you reach ~488 GB. The S3 and object storage guide covers these calculations in detail.
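The arithmetic above can be sketched in a couple of helper functions, using the shared 10,000-part cap and 5 MB floor:

```javascript
// Sketch: how the 10,000-part cap bounds file size for a given chunkSize,
// and the smallest chunkSize that fits a target file size.
const MAX_PARTS = 10000;
const MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MB multipart minimum
const GB = 1024 ** 3;

function maxFileSizeBytes(chunkSizeBytes) {
  return chunkSizeBytes * MAX_PARTS;
}

function minChunkSizeFor(fileSizeBytes) {
  // Round up so the part count stays within the cap; never below the 5 MB floor.
  return Math.max(MIN_PART_SIZE, Math.ceil(fileSizeBytes / MAX_PARTS));
}

maxFileSizeBytes(5 * 1024 * 1024) / GB;  // ~48.8 GB
maxFileSizeBytes(50 * 1024 * 1024) / GB; // ~488 GB
```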
Pricing (us-east-1, standard tier)
- Storage: $0.023/GB/month
- PUT requests: $0.005 per 1,000 requests
- GET requests: $0.0004 per 1,000 requests
- Egress: $0.09/GB (first 10 TB/month), decreasing at higher tiers. Free to CloudFront.
The egress cost is the number that dominates the conversation. If users upload 1 TB of video and then stream it back, that 1 TB of egress costs $90/month from S3 directly. This is the primary reason R2 and B2 exist as competitors.
Operational maturity
S3 has everything: lifecycle policies, intelligent tiering, event notifications (to Lambda, SNS, SQS, EventBridge), S3 Object Lock, versioning, replication, Transfer Acceleration, server-side encryption with KMS or customer-managed keys, and IAM policies that are as granular as they are verbose. If you need a feature for managing uploaded objects, S3 probably has it.
For serverless upload architectures on AWS, see the Lambda + S3 example.
Cloudflare R2
R2 is Cloudflare's answer to S3 egress pricing. It's S3-compatible (enough that the AWS SDK works with it) and charges zero egress fees. For workloads where uploaded files are frequently accessed — previews, thumbnails, streaming — R2 can dramatically reduce costs.
Multipart upload mechanics
R2 implements the S3 multipart upload API. Same CreateMultipartUpload/UploadPart/CompleteMultipartUpload flow. Same ETag-based completion.
Constraints:
- Minimum part size: 5 MB (except last part)
- Maximum part size: 5 GB
- Maximum parts per upload: 10,000
- Maximum object size: 5 TB
Functionally identical to S3 for multipart behavior. Your Resumable.js chunkSize configuration doesn't need to change when switching from S3 to R2.
Pricing
- Storage: $0.015/GB/month
- Class A operations (writes): $4.50 per million
- Class B operations (reads): $0.36 per million
- Egress: $0.00/GB. Zero. Free.
For an upload pipeline that also serves files, the egress savings are significant. A 1 TB/month download workload costs $0 from R2 versus $90 from S3.
Workers integration
R2's killer feature for upload pipelines is native Workers integration. Your upload receiver runs at the edge, reads and writes R2 directly, and there's no inter-region latency penalty. The Cloudflare Workers + R2 example demonstrates this full pattern with Resumable.js.
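Whether the Worker feeds chunks into R2's multipart API or stages them as individual objects, answering Resumable.js's testChunks probes is easiest with a deterministic key per chunk. A hypothetical scheme, built from the query parameters Resumable.js sends with every request:

```javascript
// Sketch: a deterministic per-chunk object key derived from the
// resumableIdentifier and resumableChunkNumber parameters. The "staging/"
// prefix and zero-padding are choices made for this example, not a convention.
function chunkKey(resumableIdentifier, resumableChunkNumber) {
  const n = String(resumableChunkNumber).padStart(5, '0'); // keeps listings in chunk order
  return `staging/${resumableIdentifier}/${n}`;
}

chunkKey('abc123-video.mp4', 7); // "staging/abc123-video.mp4/00007"
```

With a key like this, a testChunks GET becomes a single existence check against the bucket.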
Limitations
R2 doesn't have feature parity with S3. As of early 2026:
- No event notifications (you poll or use Workers for event-driven flows)
- No object versioning
- Limited lifecycle rule options compared to S3
- No intelligent tiering or storage classes
- Single-region storage (Cloudflare manages placement, you don't choose)
- Eventual consistency on some operations (though most are strongly consistent)
For upload pipelines specifically, these gaps rarely matter. You're writing objects, occasionally listing them, and reading them back. The core operations work.
Backblaze B2
B2 is the budget option. It's been around since 2015, offers S3-compatible APIs (added in 2020), and has the lowest storage costs of the three. For large archival uploads where storage cost dominates, B2 is worth considering.
Multipart upload mechanics
B2 supports two upload APIs: its native "Large File" API and the S3-compatible multipart API.
Native Large File API:
- Minimum part size: 5 MB
- Maximum part size: 5 GB
- Maximum parts: 10,000
- Minimum file size for large file: 5 MB (files under this use the regular upload API)
S3-compatible API:
- Same constraints as native, mapped to S3 semantics
- CreateMultipartUpload/UploadPart/CompleteMultipartUpload work with the AWS SDK
The S3-compatible endpoint means you can point the same server code at B2 that you'd use for S3 — change the endpoint URL and credentials, keep everything else. For Resumable.js, the client configuration is identical regardless of which backend you use.
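A sketch of that endpoint swap, as a single client-config factory. The hostnames follow each provider's documented pattern, but treat the account ID and region strings as placeholders for your own values:

```javascript
// Sketch: one S3 client configuration for all three backends.
// The returned object is what you'd pass to the AWS SDK's S3Client;
// the accountId and regions below are placeholders.
function s3ClientConfig(backend, { region = 'us-east-1', accountId = '', b2Region = 'us-west-004' } = {}) {
  switch (backend) {
    case 's3':
      return { region }; // AWS SDK derives the endpoint from the region
    case 'r2':
      return { region: 'auto', endpoint: `https://${accountId}.r2.cloudflarestorage.com` };
    case 'b2':
      return { region: b2Region, endpoint: `https://s3.${b2Region}.backblazeb2.com` };
    default:
      throw new Error(`unknown backend: ${backend}`);
  }
}
// Usage: new S3Client(s3ClientConfig('r2', { accountId: '<your-account-id>' }))
```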
Pricing
- Storage: $0.006/GB/month
- Class A transactions (writes): Free
- Class B transactions (reads): Free (2,500/day on free tier, then $0.004 per 10,000)
- Egress: $0.01/GB. But free via Cloudflare through the Bandwidth Alliance.
That last point matters. If you serve B2 objects through Cloudflare (CDN or Workers), egress is free. This gives you B2's storage pricing ($0.006/GB) with R2-like egress costs ($0.00/GB) — the cheapest combination available.
Limitations
B2's S3-compatible API has some behavioral differences:
- Incomplete multipart upload listing can be slower
- No server-side copy between buckets (you download and re-upload)
- IAM is simpler — application keys with bucket-level scope, not the granular policy language of S3
- Region selection is limited (US West, US East, EU Central)
- No event notifications or lifecycle transitions to cheaper tiers (there's only one tier)
For pure upload-and-store workloads, these limitations rarely matter. For complex workflows with event-driven processing, you'll feel the gaps.
Comparison table
| Feature | Amazon S3 | Cloudflare R2 | Backblaze B2 |
|---|---|---|---|
| Max object size | 5 TB | 5 TB | 10 TB (native), 5 TB (S3 compat) |
| Min part size | 5 MB | 5 MB | 5 MB |
| Max parts | 10,000 | 10,000 | 10,000 |
| Max part size | 5 GB | 5 GB | 5 GB |
| Storage cost | $0.023/GB/mo | $0.015/GB/mo | $0.006/GB/mo |
| Egress cost | $0.09/GB | $0.00/GB | $0.01/GB (free via CF) |
| S3 API compatible | Native | Yes | Yes |
| Multipart upload | Full support | Full support | Full support |
| Regions | 30+ regions | Cloudflare-managed | 3 regions |
| Lifecycle policies | Comprehensive | Basic | Basic |
| Event notifications | SNS, SQS, Lambda, EventBridge | None (use Workers) | None (use webhooks) |
| Encryption | SSE-S3, SSE-KMS, SSE-C | SSE (managed) | SSE (managed) |
| IAM granularity | Fine-grained policies | API tokens + bucket scope | Application keys |
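The table's list prices can be turned into a rough monthly estimate. A sketch that deliberately omits per-request charges (small at these volumes) and any free-tier allowances:

```javascript
// Sketch: rough monthly cost from the list prices above.
// Request/transaction charges and free tiers are ignored for simplicity.
const PRICING = {
  s3: { storagePerGB: 0.023, egressPerGB: 0.09 },
  r2: { storagePerGB: 0.015, egressPerGB: 0.0 },
  b2: { storagePerGB: 0.006, egressPerGB: 0.01 }, // effectively 0.00 via Cloudflare
};

function monthlyCost(provider, storedGB, egressGB) {
  const p = PRICING[provider];
  return storedGB * p.storagePerGB + egressGB * p.egressPerGB;
}

// 1 TiB stored and 1 TiB downloaded per month:
monthlyCost('s3', 1024, 1024); // ~115.71
monthlyCost('r2', 1024, 1024); // ~15.36
monthlyCost('b2', 1024, 1024); // ~16.38
```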
Operational considerations
Incomplete upload cleanup
All three providers charge for storage consumed by incomplete multipart upload parts. If a user starts uploading a 2 GB file, uploads 1.5 GB of parts, then closes the browser, those 1.5 GB of parts sit in storage until you clean them up.
S3: Set a lifecycle rule to abort incomplete multipart uploads after N days. This is a one-time configuration and it just works.
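That rule, expressed as the configuration object you'd pass to the AWS SDK's PutBucketLifecycleConfiguration (the 7-day window is an arbitrary choice for this sketch; the same rule can be set in the console):

```javascript
// Sketch: abort incomplete multipart uploads after 7 days, bucket-wide.
// Pass as the LifecycleConfiguration to PutBucketLifecycleConfiguration.
const lifecycleConfiguration = {
  Rules: [
    {
      ID: 'abort-incomplete-multipart-uploads',
      Status: 'Enabled',
      Filter: { Prefix: '' }, // empty prefix = applies to every object key
      AbortIncompleteMultipartUpload: { DaysAfterInitiation: 7 },
    },
  ],
};
```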
R2: Supports AbortMultipartUpload and lifecycle rules for incomplete uploads. Set these up early — it's easy to forget and accumulate orphaned parts.
B2: The native API has b2_cancel_large_file. The S3-compatible API supports AbortMultipartUpload. B2 also auto-cancels uploads that have been inactive for 24 hours on the free tier, but you should still have explicit cleanup for paid accounts.
For all three, pair provider-side cleanup with application-level tracking. If you're using DynamoDB, KV, or a database to track upload state, set TTLs on those records too.
Access control
For upload endpoints, you typically need write-only access for the upload receiver and read access for serving files. Each provider handles this differently:
- S3: IAM policies with s3:PutObject and s3:GetObject, scoped to specific prefixes. Presigned URLs for direct client uploads.
- R2: API tokens with read/write scope per bucket. Workers use bindings (no credentials in code).
- B2: Application keys scoped to specific buckets and prefixes.
Request rate limits
Under sustained upload load, you may hit request rate limits. S3 handles 5,500 GET and 3,500 PUT requests per second per partitioned prefix. R2 and B2 have lower documented limits but autoscale for most workloads. If you're doing hundreds of concurrent multipart uploads, partition your object keys across prefixes. See the rate limits guide for strategies.
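One way to partition keys is to prepend a short shard derived from a stable hash of the key. A sketch using FNV-1a (any stable hash works; the shard count and format are this example's choices):

```javascript
// Sketch: spread object keys across N prefixes so sustained load isn't
// concentrated on a single partitioned prefix. FNV-1a 32-bit hash.
function shardPrefix(key, shards = 16) {
  let h = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  const shard = (h % shards).toString(16).padStart(2, '0');
  return `${shard}/${key}`;
}

shardPrefix('uploads/video-123.mp4'); // e.g. "0a/uploads/video-123.mp4"
```

Because the shard is derived from the key itself, reads don't need a lookup table to find the object again.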
Which fits when
Choose S3 when you're already in the AWS ecosystem, need advanced features (versioning, event notifications, intelligent tiering, cross-region replication), or require specific regulatory compliance certifications. The egress cost is the tax you pay for the ecosystem. If your upload pipeline feeds into Lambda, SQS, or other AWS services, S3 is the natural choice.
Choose R2 when egress cost matters — which it does for most upload pipelines that also serve files. The zero-egress pricing, Workers integration, and S3 compatibility make it the strongest choice for edge-first architectures or any workload where users download what they upload. If you're already using Cloudflare, R2 is an easy win.
Choose B2 when storage cost is the primary concern and you're uploading large volumes of data that won't be frequently accessed — backups, archives, media masters. Pair it with Cloudflare CDN for free egress. B2 is the cheapest per-GB option, and the S3-compatible API means your code works without changes.
The practical answer
For most upload pipelines, R2 hits the sweet spot: lower storage cost than S3, zero egress, and S3-compatible APIs that let you migrate without rewriting your upload handler. If you're building on AWS and need the ecosystem, S3 is the pragmatic choice — just budget for egress. B2 is the right call for cost-optimized archival storage, especially behind Cloudflare.
Your Resumable.js client configuration stays the same regardless of backend. The chunkSize, simultaneousUploads, and testChunks settings don't care where the bytes end up. The caching guide covers how to optimize delivery of uploaded assets once they're stored, regardless of which provider you choose.
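Concretely, a backend-agnostic options object might look like the following sketch; the target path is a placeholder for your own chunk receiver:

```javascript
// Sketch: a Resumable.js configuration that works unchanged whether the
// receiver writes to S3, R2, or B2. The target path is a placeholder.
const resumableOptions = {
  target: '/upload',              // your chunk receiver endpoint
  chunkSize: 5 * 1024 * 1024,     // matches the 5 MB multipart minimum
  simultaneousUploads: 3,
  testChunks: true,               // probe for chunks already received
};
// const r = new Resumable(resumableOptions);
```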
Pick the storage that fits your cost model and operational requirements. The upload pipeline is the same either way.
