If you manage infrastructure across more than one laptop, you've probably hit the wall with config sync: either you accept the inconvenience of manually copying ~/.servonaut/config.json around, or you trust a SaaS with a file that literally contains hostnames, IPs, and sometimes SSH options.

We didn't want to be that SaaS. So the newly shipped config sync in Servonaut is built on a simple rule:

The backend must not be able to read your config, even if it wants to.

How it works

When you run servonaut config push, the CLI does the following:

  1. Generates a random 16-byte salt on the first push; the salt is stored alongside the encrypted blob and reused afterwards
  2. Derives a 256-bit key from your passphrase using PBKDF2 (100k iterations, SHA-256)
  3. Encrypts your config JSON with AES-256-GCM — an authenticated cipher, so tampering is detectable
  4. Sends the ciphertext, salt, IV, and GCM auth tag to the backend, along with a SHA-256 of the original plaintext (for dedup, not decryption)

The backend's storage schema has a column, encryption = 'aes-256-gcm', that flags these blobs as client-encrypted. When it sees that flag, the backend does not attempt to decrypt, does not verify the hash against the stored data (it can't: the data is ciphertext), and on pull simply returns the blob exactly as it was stored.
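The backend rule amounts to a single branch on the flag. A minimal sketch, with a dict standing in for the database and hypothetical field names (the post describes the schema but doesn't show it):

```python
import hashlib


def pull_snapshot(db: dict, user: str) -> dict:
    """Return a stored snapshot; never touch client-encrypted bytes."""
    snap = db[user]
    if snap.get("encryption") == "aes-256-gcm":
        # Flagged as client-encrypted: no decryption attempt, and no hash
        # check, since plaintext_sha256 describes plaintext the server
        # never sees. Return the blob exactly as stored.
        return snap
    # Legacy unencrypted snapshot: here the server *can* still verify
    # that the stored bytes match their recorded hash.
    if hashlib.sha256(snap["data"]).hexdigest() != snap["sha256"]:
        raise ValueError("stored snapshot is corrupt")
    return snap
```

The same code path serves both snapshot generations; the flag alone decides whether the server is allowed to interpret the bytes.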

What we give up

  • No server-side search. We can't offer "show me configs that reference host X" because we don't know. We think that's a good tradeoff.
  • Lost passphrase = lost config. There is no recovery. If you want a backup, export and encrypt to another medium.

What you keep

  • Server operator stealing the DB gets useless ciphertext
  • Backend bug dumping a random request gets ciphertext
  • Legal subpoena for "all data for user X" gets ciphertext
  • Your CI / agent running servonaut config pull still works with just a passphrase
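On the pull side, the CLI re-derives the key from the passphrase and the stored salt, and GCM's auth tag catches any tampering with the stored blob. A sketch under the same assumptions as above (the cryptography package, hypothetical field names):

```python
import hashlib

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def decrypt_snapshot(payload: dict, passphrase: str) -> bytes:
    # Re-derive the same 256-bit key: same passphrase, stored salt,
    # same PBKDF2 parameters as on push
    salt = bytes.fromhex(payload["salt"])
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000, dklen=32)
    # The library expects ciphertext with the auth tag re-appended
    sealed = bytes.fromhex(payload["ciphertext"]) + bytes.fromhex(payload["tag"])
    # Raises InvalidTag if the ciphertext, IV, or tag were modified
    # in transit or at rest -- or if the passphrase is wrong
    return AESGCM(key).decrypt(bytes.fromhex(payload["iv"]), sealed, None)
```

A wrong passphrase and a tampered blob fail the same way, with an authentication error rather than silently-garbled plaintext; that's the practical payoff of using an authenticated cipher.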

Existing unencrypted snapshots (from before this change) continue to work: both the CLI and the backend handle the two formats. New snapshots default to client-encrypted once you've set a passphrase.