What about this one: you mount your system disk and an external drive on another system, then put the disk back in your laptop and it won't boot, because something about the pool configuration changed.
Remote send/recv can be frustrating due to the need for root.
It can be helpful to remember that while less efficient, for scenarios where ssh as root is a no-go you can ship snapshot syncs (including incremental ones) as files:
zfs send [...] tank@snap > tank.zfssnap
zfs recv [...] bank < tank.zfssnap
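The incremental case works the same way. Here's a sketch with made-up pool, snapshot, and file names; since zfs send/recv need root and a real pool, this version just builds and prints the commands it would run (drop the echo on a real system):

```shell
#!/bin/sh
# Hypothetical names: tank is the source pool, bank the destination.
SRC_POOL="tank"
DST_POOL="bank"
BASE_SNAP="${SRC_POOL}@2024-01-01"   # snapshot already present on both sides
NEW_SNAP="${SRC_POOL}@2024-02-01"    # newer snapshot to ship
OUT_FILE="tank-incr.zfssnap"

# -i sends only the delta between the two snapshots.
SEND_CMD="zfs send -i $BASE_SNAP $NEW_SNAP > $OUT_FILE"

# On the receiving side, after moving the file over however you like:
RECV_CMD="zfs recv -F $DST_POOL < $OUT_FILE"

echo "$SEND_CMD"
echo "$RECV_CMD"
```

The file in between can ride on a USB drive, an S3 bucket, or whatever transport you have; the destination just needs the base snapshot already received.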
>It can be helpful to remember that while less efficient, for scenarios where ssh as root is a no-go you can ship snapshot syncs (including incremental ones) as files:
This capability can also be extremely helpful for bootstrapping a big pool sync from a site with a mediocre WAN (very common in many places, particularly on the upload side). Plenty of individuals and orgs have accumulated a sizable amount of data by this point but aren't generating new data at a prodigious clip. So if you can get the initial replication done, ongoing syncing from there can happen over a fairly narrow pipe.
Latency might not be quite the best, but sometimes the greatest bandwidth to be had is a big fat drive or set of them in the trunk of a car :)
You can add a special metadata vdev to the pool at any time. Note that only newly written metadata lands on it (existing metadata migrates as blocks are rewritten), but it's probably still faster to add the vdev and let it populate over time than to delete all that data with the metadata on spinning rust.
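For reference, adding a special vdev looks roughly like this (the pool name and device paths are placeholders, and it should be mirrored, since losing the special vdev loses the pool). Printed rather than executed here:

```shell
#!/bin/sh
POOL="tank"
# Mirror the special vdev: metadata on it is not duplicated elsewhere,
# so an unmirrored special vdev is a single point of failure for the pool.
CMD="zpool add $POOL special mirror /dev/sda /dev/sdb"
echo "$CMD"
```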
ZFS can hit "No space left on device" (ENOSPC) errors if the pool fills up. But unlike btrfs's infamous ENOSPC woes, ZFS was designed to handle these situations much more gracefully. ZFS keeps a bit of "slop space" reserved, so as you approach full it stops writes early and gives you a chance to clean things up, instead of running into unpredictable issues or impossible snapshot removals like btrfs sometimes does. You can even tweak how much safety space ZFS reserves (the spa_slop_shift module parameter on OpenZFS), though most users don't need to touch it.
When you run out of space in ZFS, you get a clear error for write attempts, but the system doesn’t end up fragmented beyond repair or force you into tricky multi-step recovery processes. Freeing up space (by deleting files or snapshots, or expanding the pool) typically makes things happy again.
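Still, it's nicer to act before you're down to slop space. A sketch of the kind of capacity check a cron job might do; the sample listing below is fabricated, and on a real system you'd pipe in `zpool list -H -o name,capacity` instead:

```shell
#!/bin/sh
THRESHOLD=80   # warn when a pool passes this % capacity

# Fabricated stand-in for `zpool list -H -o name,capacity` output.
SAMPLE="tank 86%
bank 41%"

# Strip the % sign and flag any pool over the threshold.
WARNINGS=$(printf '%s\n' "$SAMPLE" | awk -v t="$THRESHOLD" '
  { cap = $2; sub(/%/, "", cap); if (cap + 0 > t) print $1 " at " cap "%" }')
echo "$WARNINGS"
```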
Also, something like sanoid ought to be built into ZFS: find all snapshots at the source that are newer than what the destination has with zfs list, then send an incremental stream from the first to the last.
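A sketch of that logic; the snapshot listing is a fabricated stand-in for `zfs list -H -t snapshot -o name -s creation`, and LAST_SYNCED is whatever snapshot the destination already has. Using -I (rather than -i) sends all the intermediate snapshots too, so the destination keeps the full history:

```shell
#!/bin/sh
# Fabricated output of: zfs list -H -t snapshot -o name -s creation tank/data
SNAPSHOTS="tank/data@2024-01-01
tank/data@2024-02-01
tank/data@2024-03-01
tank/data@2024-04-01"

LAST_SYNCED="tank/data@2024-02-01"   # newest snapshot the destination has

# Newest snapshot at the source (list is sorted oldest to newest).
NEWEST=$(printf '%s\n' "$SNAPSHOTS" | tail -n 1)

if [ "$NEWEST" = "$LAST_SYNCED" ]; then
  echo "nothing to do"
else
  # One -I stream covers everything between the common snapshot and the tip.
  CMD="zfs send -I $LAST_SYNCED $NEWEST"
  echo "$CMD"
fi
```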
Nice website btw. Does anyone know what tools are used to build this website?