Tags: security, encryption, engineering, cryptography, zero-knowledge, architecture

How We Built a Zero-Knowledge Deadman Switch: A Technical Deep-Dive

May 13, 2026

The threat model, the wrapped master key architecture, the per-file content keys, the per-share keys, and the trigger pipeline — a transparent walkthrough of how a deadman switch can deliver your most sensitive files without anyone, including the service itself, ever being able to read them.


What "Zero-Knowledge" Actually Has to Mean

Most products that say "we encrypt your data" are telling the truth in a way that doesn't matter.

They encrypt at rest. They encrypt in transit. The encryption keys live on their servers. If they want to read your files — for analytics, for debugging, for compliance with a subpoena — they can. If their database is breached, the encrypted files and the keys to decrypt them tend to leave together.

Zero-knowledge means a stronger thing: the server cannot read your data, ever, even if it wanted to. Not because of policy. Because of math. The keys never exist on the server in unwrapped form. The decryption happens on the user's device, against ciphertext the server has been holding onto without ever being able to look inside it.

For a deadman switch — a system whose entire job is to hold your most sensitive documents and deliver them to specific people at specific moments — anything weaker than zero-knowledge is a backdoor with a friendly UI on top of it.

Here's how the architecture actually works in Killswitch.

The Threat Model First

Before any code, the threat model. We assume:

  1. The server will eventually be compromised. Maybe a breach, maybe a rogue insider, maybe a state-level subpoena, maybe an honest mistake by an engineer with shell access. We design as if "the server saw this in plaintext at some point" is a category of failure we want to make impossible.

  2. The user is not a cryptographer. They will pick a password they can remember. They will check in from multiple devices. They will lose their phone. They will forget their backup codes. The system has to work for that user, not for someone running their own HSM.

  3. The recipient may not have an account. Beneficiaries are often people who don't want to install anything. They get an email, they click a link, they download what was left for them. The crypto has to deliver across that boundary.

  4. The deadman switch must fire even if the user's payment lapses, their email expires, or every device they own is gone. Recovery cannot depend on any single piece of infrastructure the user controls.

These four constraints define everything that follows.

The Core Primitive: Wrapped Master Keys

The architecture borrows directly from password managers like 1Password and Bitwarden. We didn't invent it; we adopted it because it's been adversarially studied for over a decade and the failure modes are well understood.

Here's the structure:

Layer 1 — User password. Never sent to the server. Used locally to derive a key.

Layer 2 — Key derivation. The user's password is run through PBKDF2-SHA256 with 600,000 iterations (current OWASP guidance for SHA-256) and a random per-user 16-byte salt. The output is a 256-bit derived key. This key is not used to encrypt files directly — it's used only to wrap the master key.

Layer 3 — Master key. Generated once, at signup, in the browser via window.crypto.subtle.generateKey. A random 256-bit AES-GCM key. This is the key that protects everything else.

Layer 4 — The wrap. The master key is wrapped (encrypted) using the password-derived key, with AES-GCM and a fresh 12-byte IV, then sent to the server. The server stores the wrapped master key — encrypted bytes it cannot decrypt without the user's password, which it does not have.

When the user logs in, the wrapped master key comes down to the browser. The user's password derives the wrapping key. The wrapping key unwraps the master key. The master key now exists in browser memory. Files can be decrypted. The server still has no idea.

Why this matters for security. A compromised server gets ciphertext and a wrapped master key. Both are useless without the user's password.

Why we wrap the master key with a password-derived key, instead of using the password-derived key directly. The wrapped master key pattern decouples "what encrypts your files" from "how your password derives the wrapping key." That decoupling has two practical payoffs:

  • Password changes are instant. A new password produces a new password-derived key, which re-wraps the same master key. The master key bytes don't change, so files, content keys, and shares are all still readable. We update three small columns and we're done. Naïve designs that encrypt files directly with a password-derived key turn a password change into re-encrypting every file the user has ever uploaded.
  • KDF tuning is free. PBKDF2 iteration counts are a moving target — what's secure today will be cheap to brute-force in five years. Because the iteration count is just a parameter on the wrap, raising it doesn't touch a single file. The same one-row update that handles a password change handles an iteration bump. The server enforces strict-increase on the count, so a buggy or malicious client can't downgrade a user's posture even by accident.

This is why 1Password and Bitwarden — the products that pioneered the pattern — have been able to ratchet their KDF parameters multiple times over the last decade without ever asking users to re-encrypt anything. The same is true for Killswitch.

Encrypting Files in the Browser

Files are encrypted before they ever leave the user's machine, using AES-256-GCM via the WebCrypto API. The basic flow:

  1. User selects a file in the browser.
  2. The browser generates a random per-file content key (256-bit AES-GCM).
  3. The browser generates a random 12-byte IV (per AES-GCM best practice).
  4. The file is encrypted with the content key. AES-GCM is authenticated encryption, so the output includes an integrity tag — tampering with the ciphertext later causes decryption to fail.
  5. The content key is wrapped with the user's master key, using a fresh IV.
  6. The encrypted blob is uploaded to object storage (Digital Ocean Spaces). The wrapped content key, its IV, and the file IV are stored in the database alongside metadata.

The server sees encrypted bytes and a wrapped key. It does not see the file. It does not see the content key. It does not see the master key. The only thing in the database that connects the file to a person is the user ID — and that's the necessary minimum for retrieval.

Why per-file keys? Granular sharing and revocation. The owner can hand a specific file's content key to a specific recipient without exposing the master key. Revoking that share means the recipient can no longer get the content key, even if they still have the ciphertext.

Metadata. Filenames, descriptions, and tags would otherwise be readable on the server, which is a real privacy leak. We use AshCloak — server-side AES-256-GCM, with the key in CLOAK_KEY_BASE64 — to encrypt that metadata at rest. This is not zero-knowledge; the server can decrypt this. It's a defense-in-depth layer for database breach scenarios. The actual file contents remain protected by the zero-knowledge layer above.

The Sharing Problem

Here's where most zero-knowledge systems get hard, and where deadman switches get harder.

The recipient of a deadman switch trigger isn't a logged-in user with their own master key. They're a beneficiary — often someone who doesn't have an account, doesn't have crypto experience, and is opening an email at the worst moment of their life.

You can't just send them the master key (that would compromise everything else). You can't expect them to derive a key from a password (they didn't pick one). You can't keep the file encrypted to the original user (they're gone, and their password is unrecoverable by you).

The solution is a token-based per-share key.

When the user creates a share for a beneficiary, the browser:

  1. Unwraps the file's content key using the master key.
  2. Generates a random 256-bit access token.
  3. Encrypts the content key with the access token, using AES-256-GCM and a fresh 12-byte IV.
  4. Hashes the access token with SHA-256.
  5. Sends the server: the encrypted content key, its IV, and the SHA-256 hash of the token.
  6. The plaintext token is included only in the share URL that gets emailed to the beneficiary.

The server stores the encrypted content key and the token hash. It never stores the plaintext token.

When the deadman switch fires, the beneficiary receives an email containing a URL that includes the access token (e.g., /share/<id>?token=<token>). When they click:

  1. Their browser extracts the token from the URL.
  2. The browser hashes the token with SHA-256 and sends the hash to the server.
  3. The server compares the hash against the stored hash. If it matches, the server returns the encrypted content key and IV.
  4. The browser decrypts the content key using the token from the URL, then downloads and decrypts the file.

This design has a real, named exposure surface and a layered set of defenses for it. Both deserve a clear explanation.

The exposure. The access token travels through the URL. The browser sees it. The page's request line includes it. We strip the token from our own access logs at the application layer, but we cannot reach into a beneficiary's mail client, their corporate proxy, their browser history, or any HTTP intermediary that has touched the URL on the way through. Anyone with read access to one of those layers, after the access has occurred, could in principle reconstruct the token.

We don't pretend this isn't true. We design around it with five concrete defenses, each of which works independently of the others.

Defense 1: Instant, irreversible revocation. The owner can revoke any share at any time, with one click. Revocation does not flag the share as "revoked" while leaving the encrypted content key in place — it deletes the encrypted content key entirely. After revocation, the token is no longer the missing piece in a decryption flow; the thing it was supposed to decrypt no longer exists on our servers. An attacker holding a stolen token after revocation has a key for a lock we destroyed.

Defense 2: Real-time access notification to the owner. Every successful share access triggers an email to the owner with the time, the requesting IP address, and the geolocation we resolve from that IP. If a token is intercepted and used by someone other than the intended beneficiary, the legitimate owner sees the alert in their inbox, often within seconds, and can revoke before the attacker downloads anything. This converts the token-in-URL exposure from a silent compromise into a noisy one — and a noisy compromise is the kind defenders win.

Defense 3: A tamper-evident access log. Every access creates an activity log entry: action type ("document.share_accessed", "note.share_accessed", etc.), share ID, recipient email, accessor IP, timestamp. Owners can review the access history of any share they created. If something unexpected appears, it's evidence — and evidence is the precondition for response.

Defense 4: A click-through gate, so scanners don't trigger anything. When a beneficiary lands on a share URL, the page does not auto-decrypt. Instead it renders a confirmation gate: "View shared content. Opening this will create a record of access and notify the sender (and you) by email." Only an explicit human click runs the decryption pipeline. The reason: email-security scanners (Microsoft Defender, Mimecast, Proofpoint, and the rest) follow links to inspect them. Without the gate, every scanner click would fire the owner-alert email and create a phantom access-log entry before the legitimate beneficiary had even seen the link. Scanners don't click buttons, so the gate eliminates the false positives without slowing the real beneficiary down by more than one click. It also gives the beneficiary an explicit consent moment — they're told what's about to happen, and they choose to open it.

Defense 5: A receipt to the beneficiary too — especially critical for deadman shares. When a share is opened, the beneficiary now receives their own confirmation email at the same time the owner gets the alert. Time, IP, approximate location. Copy adapts to the trigger type: for immediate shares it mentions that the sender was also notified; for deadman-triggered shares it acknowledges that the original owner may not be reachable and positions the beneficiary's record as the canonical access trail. This matters most in the deadman case — the owner is gone, so the standard owner-alert defense goes to a void. The beneficiary IS the watcher. They need the same real-time visibility into access events that the owner has during life.

A note on the server's view of the token. The server sees the plaintext token only at the precise moment the beneficiary clicks the link. The validation step hashes it in memory using SHA-256 and compares against the stored hash. Plaintext tokens are never persisted server-side. The stored state for any share is (encrypted_key, iv, sha256(token)) — useless without the token, and the token is not stored. A database breach without a corresponding live-session compromise gets the attacker nothing decryptable.

Why this design at all. A beneficiary opening a share is, by construction, somebody who does not have an account with us. They might be a grieving spouse opening their inbox in the worst week of their life. The single most important property of the share-access flow is that it works without requiring them to install anything, register anything, or remember anything. Token-in-URL plus the five defenses above is a deliberate tradeoff: we accept a real but mitigable exposure surface in exchange for an experience that doesn't fail the people it was built for. The click-through gate plus dual notifications converts what would otherwise be a silent compromise risk into a noisy one — and noisy compromises are the kind defenders win. We continue to weigh stronger token-delivery mechanisms (URL fragments would remove the server-side exposure entirely), but only on the condition that they preserve the click-and-it-works property for grieving spouses opening links from a phone.

The Trigger Pipeline

The deadman switch itself is, surprisingly, not a cryptography problem. It's a state machine problem.

Each switch has:

  • A check-in cadence configured by the owner.
  • A grace period (extra time after the deadline before triggering).
  • Reminders sent via email, SMS, and push as the deadline approaches.
  • A list of associated items (files, notes, notebooks) and the beneficiaries assigned to each.

A scheduled Oban worker iterates over switches, checks deadlines, schedules reminders, and — if the user fails to check in past the grace period — marks the switch as triggered and enqueues a delivery worker. The delivery worker emails the share URLs to beneficiaries and logs the access events as they're claimed.

The critical property of this pipeline is that it does not check subscription status. From the worker's source: "Subscription state is intentionally NOT checked. Once a switch is set up, it fires regardless of whether the owner's subscription is active." This was deliberate, and we settled it only after a long debate. A switch that stops working when a payment fails is not a deadman switch — it's a feature that comes with a billing contingency. Once a switch is configured, it fires when its conditions are met, regardless of whether the user is paid up.
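The scheduler's core decision reduces to a pure function over timestamps. The field names and state strings below are illustrative (the production implementation is an Oban worker in Elixir, not this sketch); the point is the shape of the state machine, and the deliberate absence of any subscription check.

```javascript
// Evaluate one switch at a point in time. All values are epoch ms.
function evaluateSwitch(sw, now) {
  const deadline = sw.lastCheckInAt + sw.cadenceMs;   // next check-in due
  const triggerAt = deadline + sw.graceMs;            // grace period added
  if (now >= triggerAt) return 'trigger';             // enqueue delivery worker
  if (now >= deadline) return 'grace';                // deadline missed, grace running
  if (deadline - now <= sw.reminderWindowMs) return 'remind'; // nudge the owner
  return 'ok';                                        // nothing to do
}
```

A worker sweep is then just mapping this function over all active switches and acting on the returned state, which keeps the trigger logic trivially testable.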

What the Server Cannot Do

The list of things the architecture makes impossible:

  • Read the contents of any uploaded file.
  • Recover a user's password.
  • Decrypt a share before the user has handed out a token (and before the switch has triggered, in the deadman-switch case).
  • Generate a "support override" to access a user's vault.
  • Comply with a subpoena requesting plaintext of stored files.
  • Run server-side scanning of any kind on file contents.
  • Index file contents for search.
  • Generate previews of files server-side.

That last bullet is a real product cost. We can't show you a thumbnail of your PDF in the dashboard server-side. We can't search inside your documents. Some users find this frustrating. We find it reassuring — the absence of those features is evidence that the encryption is real. Previews and decryption do happen, but only in the user's browser, after they've unlocked their master key locally.

The Recovery Tradeoff

The only thing that breaks this whole design is the user's own forgetfulness. If they lose their password and lose their recovery codes, the master key is unrecoverable. The files exist on our servers as encrypted blobs that nobody — including us — can read. They are gone in the sense that matters.

We mitigate this with:

  • Recovery codes generated at signup.
  • Strong onboarding nudges to print and store them separately.
  • A self-recovery pattern (set up a deadman switch to yourself with a shorter cadence, so your codes deliver to you if you stop checking in).

But we don't offer the thing many users would prefer, which is "we'll reset your password if you can prove you're you." That feature would require us to hold a recoverable copy of your key material somewhere. The moment we do, the architecture stops being zero-knowledge. We've concluded the tradeoff is correct, and we'd rather build mitigations on top of strong cryptography than weaken the cryptography to soften the mitigations.

Why This Architecture Specifically

We didn't invent any of this. The wrapped master key pattern is the same one 1Password, Bitwarden, ProtonMail, and several other security-critical products use. We chose it because:

  1. It's been studied. Every weakness anyone has found has been published, debated, and either fixed or accepted in public.
  2. It scales. Password changes don't trigger re-encryption. New devices don't require recutting keys. New shares don't require touching old files.
  3. It composes. Per-file keys, per-share tokens, recovery codes, and the deadman-switch trigger pipeline all bolt on without changing the core invariant.
  4. It is honest about its tradeoffs. The thing that's impossible (recovery without the password) is impossible by design, and we tell users that on day one.

The cryptography is not interesting. That's the point. Boring cryptography, used carefully, is what holds up under attack. Clever new schemes get embarrassing CVEs. The boringness is a feature.


Killswitch is the product this architecture runs in production. If you want to try it: killswitch.app