[Figure: progressive web app diagram showing the offline upload queue with the background sync reconnection flow]

Offline-Friendly Uploads: Background Sync in PWAs

Implement offline-resilient file uploads using service workers, Background Sync API, and IndexedDB queuing — with realistic browser support guidance for progressive web apps.

Guides · Updated 2026-04-08

Users don't always have stable connections. They might be in an elevator, on a train, or on a spotty café Wi-Fi that drops every few minutes. When connectivity goes away during an upload, the default behavior in most web apps is silence — the request fails, maybe an error appears, and the user has to start over. For small form submissions that's annoying. For large file uploads it's unacceptable.

Resumable.js already handles the "resume" part — if the page stays open and the connection comes back, you can retry failed chunks and pick up where you left off. But what about when the user closes the tab, or the browser kills the page in the background? That's where service workers, the Background Sync API, and IndexedDB come in. Together, they let you queue uploads that survive page closure and retry them automatically when the device comes back online.

This guide covers the patterns, the code, and — critically — what actually works in production versus what's aspirational.

Service Workers for Upload Scenarios

A service worker is a script that runs in the background, separate from your web page. It intercepts network requests, caches responses, and can execute code even when no tab is open. For uploads, the relevant capability is receiving events from the browser when connectivity returns.

The service worker doesn't replace Resumable.js. It operates alongside it: Resumable.js handles chunked uploading while the page is open, and the service worker handles retry orchestration when the page is closed or the connection drops.

Registering a basic service worker:

// main.js
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(reg => console.log('SW registered:', reg.scope))
    .catch(err => console.error('SW registration failed:', err));
}

Background Sync API

Background Sync lets you defer an action until the user has connectivity. You register a sync event from your page, and the browser fires a sync event in your service worker when a connection is available — even if the page that registered it is no longer open.

Registering a sync event

// In your page/app code
async function queueUploadForSync(uploadMeta) {
  // Store the upload metadata in IndexedDB first
  await saveToUploadQueue(uploadMeta);

  // Register the sync event — feature-detect, since Background Sync
  // is not available in all browsers (see the support section below)
  const reg = await navigator.serviceWorker.ready;
  if ('sync' in reg) {
    await reg.sync.register('upload-queue');
  }
}

Handling the sync event in the service worker

// sw.js
self.addEventListener('sync', (event) => {
  if (event.tag === 'upload-queue') {
    event.waitUntil(processUploadQueue());
  }
});

async function processUploadQueue() {
  const queue = await getUploadQueue(); // Read from IndexedDB

  for (const item of queue) {
    try {
      await uploadChunks(item);
      await removeFromQueue(item.id);
    } catch (err) {
      // If it fails, the browser will retry the sync event later
      throw err; // Important: re-throw so the browser knows it failed
    }
  }
}

The key behavior: if processUploadQueue() throws, the browser will fire the sync event again later. You don't control when — the browser decides based on heuristics around connectivity, battery state, and other factors. This is both the power and the limitation of Background Sync.
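One refinement: Chromium sets event.lastChance to true on the final retry, so you can stop re-throwing and flag the queue for a manual-retry UI instead. A sketch of that logic as a testable factory — makeSyncHandler and markAbandoned are illustrative names, not part of any browser API:

```javascript
// Hypothetical factory wrapping a queue processor with retry/give-up logic.
// processQueue and markAbandoned are injected so the logic is testable.
function makeSyncHandler(processQueue, markAbandoned) {
  return async function handleSync(event) {
    if (event.tag !== 'upload-queue') return;
    try {
      await processQueue();
    } catch (err) {
      if (event.lastChance) {
        // Final attempt: the browser will not retry again, so mark the
        // queue entries for a manual-retry UI instead of re-throwing.
        await markAbandoned();
        return;
      }
      throw err; // earlier attempts: re-throw so the browser retries later
    }
  };
}

// In sw.js you would wire it up roughly as:
// const handler = makeSyncHandler(processUploadQueue, markAbandoned);
// self.addEventListener('sync', (e) => e.waitUntil(handler(e)));
```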

IndexedDB as the Upload Queue

You need a persistent store for upload state that survives page closure. localStorage only stores strings — binary data would have to be base64-encoded — and is capped at roughly 5 MB per origin. IndexedDB stores large Blobs and structured data natively.

The upload queue stores metadata about each pending upload: the file reference (as a Blob or File), which chunks have completed, the target URL, and any authentication tokens needed.

// db.js — IndexedDB helpers
const DB_NAME = 'upload-queue-db';
const STORE_NAME = 'uploads';

function openDB() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(DB_NAME, 1);
    request.onupgradeneeded = (e) => {
      e.target.result.createObjectStore(STORE_NAME, { keyPath: 'id' });
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function saveToUploadQueue(uploadMeta) {
  const db = await openDB();
  return new Promise((resolve, reject) => {
    const tx = db.transaction(STORE_NAME, 'readwrite');
    tx.objectStore(STORE_NAME).put(uploadMeta);
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
}

async function getUploadQueue() {
  const db = await openDB();
  return new Promise((resolve, reject) => {
    const tx = db.transaction(STORE_NAME, 'readonly');
    const request = tx.objectStore(STORE_NAME).getAll();
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function removeFromQueue(id) {
  const db = await openDB();
  return new Promise((resolve, reject) => {
    const tx = db.transaction(STORE_NAME, 'readwrite');
    tx.objectStore(STORE_NAME).delete(id);
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
}

What to store

A queue entry looks something like:

{
  id: 'upload-abc123',
  file: fileBlob,           // The actual File/Blob object
  fileName: 'report.pdf',
  totalSize: 52428800,
  chunkSize: 5242880,
  completedChunks: [1, 2, 3, 4],  // Chunk numbers already uploaded
  totalChunks: 10,
  targetUrl: '/api/upload',
  resumableIdentifier: 'abc123-report-pdf-52428800',
  createdAt: Date.now()
}

Storing the actual File or Blob object in IndexedDB works — the browser serializes it using the structured clone algorithm, which in modern browsers preserves a File's name and lastModified. It is still worth duplicating the name and size in plain metadata fields: queue listings stay readable without touching the blob, and you aren't depending on serialization details (very old Safari versions could not store Blobs in IndexedDB at all, so verify on your target browsers).
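A small builder keeps the blob and the plain metadata fields in sync when an entry is created — makeQueueEntry is an illustrative helper, not part of Resumable.js:

```javascript
// Hypothetical helper: snapshot a file into a queue entry, copying the
// metadata we care about into plain fields alongside the Blob itself.
function makeQueueEntry(blob, { id, fileName, chunkSize, targetUrl }) {
  return {
    id,
    file: blob,                  // stored via structured clone
    fileName,                    // plain copy, trusted for display
    totalSize: blob.size,
    chunkSize,
    completedChunks: [],         // filled in as chunks succeed
    totalChunks: Math.ceil(blob.size / chunkSize),
    targetUrl,
    resumableIdentifier: id,
    createdAt: Date.now()
  };
}
```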

The Full Queue Pattern

Here's how the pieces connect:

  1. User selects a file. Resumable.js starts uploading chunks.
  2. If a chunk fails due to a network error, record the upload state (completed chunks, file blob) to IndexedDB.
  3. Register a Background Sync event with reg.sync.register('upload-queue').
  4. When connectivity returns, the service worker's sync handler fires.
  5. The service worker reads the queue from IndexedDB and uploads remaining chunks.
  6. Successful chunks are tracked; the queue entry is removed when all chunks are done.
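Step 5 needs to know which chunks are still outstanding. With 1-based chunk numbers, as in the sample queue entry above, that is a simple set difference — remainingChunks is an illustrative helper:

```javascript
// List the 1-based chunk numbers still to upload, given the queue entry's
// bookkeeping. Chunk numbers match resumableChunkNumber on the wire.
function remainingChunks(totalChunks, completedChunks) {
  const done = new Set(completedChunks);
  const remaining = [];
  for (let n = 1; n <= totalChunks; n++) {
    if (!done.has(n)) remaining.push(n);
  }
  return remaining;
}
```

For the sample entry above (10 chunks, chunks 1 through 4 done), this yields chunks 5 through 10.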

Integrating this with Resumable.js means hooking into the error events. When an upload fails due to a network error (not a server 4xx/5xx — those are application errors, not connectivity issues), you snapshot the state:

r.on('fileError', (file, message) => {
  if (!navigator.onLine) {
    queueUploadForSync({
      id: file.uniqueIdentifier,
      file: file.file,  // The underlying File object
      fileName: file.fileName,
      totalSize: file.size,
      chunkSize: r.opts.chunkSize,
      completedChunks: getCompletedChunks(file),
      totalChunks: Math.ceil(file.size / r.opts.chunkSize),
      targetUrl: r.opts.target,
      resumableIdentifier: file.uniqueIdentifier,
      createdAt: Date.now()
    });
  }
});
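The getCompletedChunks helper in the snippet above isn't something Resumable.js provides. A minimal sketch, assuming — as Resumable.js's internals expose — that each entry in file.chunks has a status() method and a 0-based offset:

```javascript
// Collect the 1-based numbers of chunks the server has already accepted.
// Relies on each chunk's status() returning 'success' when finished.
function getCompletedChunks(file) {
  return file.chunks
    .filter((chunk) => chunk.status() === 'success')
    .map((chunk) => chunk.offset + 1); // offset is 0-based; wire format is 1-based
}
```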

What Background Sync Actually Guarantees

Here's where the aspirational meets the practical. Background Sync promises that the browser will fire your sync event when connectivity returns. What it actually guarantees:

It will fire eventually. The browser has discretion on timing. It might fire immediately when the connection comes back, or it might wait minutes. On mobile, it considers battery state and network quality. There's no SLA.

It will retry on failure. If your handler throws, the browser retries with exponential backoff. But it gives up after a browser-determined number of attempts (typically 3). After that, the sync event is dropped silently.

It does not run indefinitely. The service worker has a limited execution window (typically 30 seconds to a few minutes, depending on the browser). If your upload queue has 500 MB of remaining chunks, you won't finish in one sync event. You need to upload what you can, then re-throw to trigger another sync event for the remainder.
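"Upload what you can, then re-throw" can be made concrete with a per-sync-event byte budget. takeWithinBudget is an illustrative helper, and any budget value is a tuning guess on your part, not a browser constant:

```javascript
// Pick a prefix of the queue whose remaining bytes fit a per-sync-event
// budget. Whatever is left stays queued; the sync handler re-throws so the
// browser schedules another sync event for the remainder.
function takeWithinBudget(items, maxBytes) {
  const batch = [];
  let used = 0;
  for (const item of items) {
    // Approximate remaining bytes (the final chunk may be smaller).
    const remaining = item.totalSize -
      item.completedChunks.length * item.chunkSize;
    // Always take at least one item so every sync event makes progress.
    if (used + remaining > maxBytes && batch.length > 0) break;
    batch.push(item);
    used += remaining;
  }
  return batch;
}
```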

Page doesn't need to be open. This is the genuine advantage. The sync event fires even if the user has closed all tabs of your site. The service worker is woken up specifically for this.

Browser Support Reality

As of early 2026:

  • Chrome / Edge (Chromium): Full support for Background Sync. Works reliably on both desktop and Android.
  • Firefox: Has a partial implementation behind flags. Not production-ready for Background Sync specifically, though service workers themselves work fine.
  • Safari / WebKit: Service workers are supported. Background Sync is not. Safari supports a different mechanism (BGTaskScheduler for native apps), but web Background Sync remains absent.

This means Background Sync is a Chromium feature in practice. If your audience is primarily Chrome/Edge users, it's viable. If you need cross-browser support, you need a fallback.

The Practical Fallback: Online/Offline Detection

For browsers without Background Sync, the fallback is straightforward: listen for the online event and retry manually when the page is open.

window.addEventListener('online', async () => {
  const queue = await getUploadQueue();
  if (queue.length > 0) {
    // Re-initialize Resumable.js and resume uploads
    for (const item of queue) {
      // Use testChunks to skip already-uploaded chunks
      resumeUploadFromQueue(item);
    }
  }
});

This only works while the page is open, but combined with Resumable.js's testChunks mechanism, it covers the common case: the user stays on the page, the connection blips, and the upload resumes automatically when it comes back.

For the tab-closed scenario on non-Chromium browsers, there's no web API solution. The pragmatic approach is to inform the user: "Your upload will resume when you return to this page." Store the state in IndexedDB, and when the user opens the app again, detect the queued upload and resume it. This isn't as seamless as Background Sync, but it's reliable across all browsers.

Combining with Resumable.js testChunks

The testChunks option in Resumable.js sends a GET request before each chunk upload to check whether the server already has it. This is the mechanism that makes cross-session resumption work — and it's critical for the offline queue pattern.

When the service worker (or the page after reconnection) starts uploading from the queue, it doesn't need to track which chunks were uploaded with perfect accuracy. Set testChunks: true and let the server be the source of truth. Even if your IndexedDB state is slightly stale, the GET checks will skip chunks the server already has.
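On the wire, a test request is a GET carrying the same resumable* query parameters as the chunk POST. A sketch of the URL the server sees, built from a queue entry — buildTestUrl is illustrative and the parameter set is a representative subset, though the names shown are Resumable.js's defaults:

```javascript
// Build a testChunks-style GET URL for one chunk of a queue entry.
// Parameter names follow Resumable.js defaults (configurable via options).
function buildTestUrl(entry, chunkNumber) {
  const params = new URLSearchParams({
    resumableChunkNumber: String(chunkNumber),
    resumableChunkSize: String(entry.chunkSize),
    resumableTotalSize: String(entry.totalSize),
    resumableIdentifier: entry.resumableIdentifier,
    resumableFilename: entry.fileName,
    resumableTotalChunks: String(entry.totalChunks)
  });
  return `${entry.targetUrl}?${params.toString()}`;
}
```

A 200 response means the server already has that chunk and it can be skipped; anything else means upload it.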

This means your queue entry doesn't strictly need the completedChunks array — it's an optimization, not a requirement. With testChunks enabled, you can store just the file and the upload configuration, and let the chunk verification protocol handle the rest. See choosing optimal chunk sizes for how chunk size affects the cost of these verification requests.

What's Realistic in Production

Be honest with your users and your team about what's achievable:

Realistic: Automatic resume when the page is open and the connection returns. Works everywhere. Resumable.js handles this natively with retries.

Realistic on Chromium: Background retry after tab close using Background Sync. Covers ~70% of desktop and most Android users.

Not yet realistic cross-browser: Fully transparent offline uploads that complete in the background on all browsers and platforms. Safari's lack of Background Sync makes this impossible for a significant portion of users.

Build the offline queue. Implement Background Sync where supported. Add the online event fallback for everything else. And make sure the UI communicates honestly about what's happening — users can handle "upload paused, will resume when you're back online" far better than they handle unexplained failures.