I’m trying to plan a better backup solution for my home server. Right now I’m using Duplicati to back up my 3 external drives, but the backup is staying on-site and on the same kind of media as the original. So, what does your backup setup and workflow look like? Discs at a friend’s house? Cloud backup at a commercial provider? Magnetic tape in an underground bunker?

  • emerald@lemmy.blahaj.zone · 6 points · 2 days ago

    “3! 2! 1!” is just what I say when doing some potentially deleterious action after rsyncing a few key directories to a separate volume.

  • empireOfLove2@lemmy.dbzer0.com · 82 points · 3 days ago

    3 sticky notes telling me to “go get that incremental backup working”,
    2 separate external hard drives,
    1 month out of date

    • tiz@lemmy.ml · 8 points · 3 days ago

      Same lol. Can’t be that catastrophic. Right? … Right?

    • HelloRoot@lemy.lol · 1 point · 2 days ago

      borgmatic is way too easy and a Hetzner storage box is way too cheap to have any excuses.

  • brokenlcd@feddit.it · 6 points · 2 days ago

    A USB stick and an old hard drive from 2009. The crackhead way of dealing with backups.

  • Object@sh.itjust.works · 14 points · 3 days ago

    I dump my encrypted data to someone who probably practices the 3-2-1 rule (which is Backblaze for me). I mean, these guys back up data for a living.

  • Avid Amoeba@lemmy.ca · 7 points · 3 days ago

    • Primary ZFS pool with automatic snapshots
      • Provides 3+ copies of the files via snapshots (3)
    • Secondary ZFS pool at a different location replicates the primary
      • Provides more copies of the files (3)
      • Provides second media (2)
      • Is off-site (1)

    Does this make sense?
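
For anyone sketching the same layout, the replication step might look like this. The pool/dataset names (tank/data, backup/data) and the host (backup-host) are hypothetical, and the commands only run if ZFS is actually present:

```shell
# Hypothetical names: tank/data (primary), backup/data (secondary pool),
# backup-host (the machine at the other location)
SNAP="tank/data@$(date +%Y-%m-%d)"
PREV="tank/data@previous"

if command -v zfs >/dev/null 2>&1; then
  # Local snapshot: one more rollback point on the primary pool
  zfs snapshot "$SNAP"
  # Incremental send: only blocks changed since $PREV cross the wire
  zfs send -i "$PREV" "$SNAP" | ssh backup-host zfs receive -F backup/data
else
  echo "no zfs here; the commands above are illustrative only"
fi
```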

    • CrazyLikeGollum@lemmy.world · 4 points · 3 days ago

      I don’t think this meets the definition of 3-2-1. Which isn’t a problem if it meets your requirements. Hell, I do something similar for my stuff. I have my primary NAS backed up to a secondary NAS. Both have BTRFS snapshots enabled, but the secondary has a longer retention period for snapshots (one month vs. one week). Then I have my secondary NAS mirrored to a NAS at my friend’s house for an offsite backup.

      This is more of a 4-1-1 format.

      But 3-2-1 is supposed to be:

      • Three total copies of the data. Snapshots don’t count here, but the live data does.

      • On two different types of media. I.e. one backup on HDD and another on optical media or tape.

      • With at least one backup stored off site.

      • tburkhol@lemmy.world · 2 points · 2 days ago

        I’ve always understood 2 as 2 physically different media - i.e., copies in different folders or partitions of the same disk are not enough to protect against failure of that disk, but a copy on a different disk is. Ideally 2 physically different systems, so failure/fire in the primary system won’t corrupt/damage the backup.

        Used to be that HDDs were expensive and using them as backup media would have been economically crazy, so most systems evolved backup media to be slower and cheaper. The main thing is that having /home/user/critical, /home/user/critical-backup, and /home/user/critical-backup2 satisfies 3 copies, but not 2 media.

      • Avid Amoeba@lemmy.ca · 0 points · 2 days ago

        Hm I wonder why snapshots wouldn’t satisfy 3. Copies on the same disk like /file, /backup1/file, /backup2/file should satisfy 3. Why wouldn’t snapshots be equivalent if 3 doesn’t guard against filesystem or hardware failure? Just thinking and curious to see opinion.

        • CrazyLikeGollum@lemmy.world · 2 points · 2 days ago

          If I’m reading your example right, I don’t think that would satisfy three either. Three copies of the data on the same filesystem or even the same system doesn’t satisfy the “three backups” rule. Because the only thing you’re really protecting against is maybe user error. I.e. accidental deletion or modification. You’re not protecting against filesystem corruption or system failure.

          For a (little bit hyperbolic) example, if you put the system that has your live data on it through a wood chipper, could you use one of the other copies to recover your critical data? If yes, it counts. If no, it doesn’t.

          Snapshots have the same issue, because at the root a snapshot is just an additional copy of the data. There’s additional automation, deduplication, and other features baked into the snapshot process but it’s basically just a fancy copy function.

          Edit: all of the above is also why the saying “RAID is not a backup” holds true.

          • Avid Amoeba@lemmy.ca · 1 point · 2 days ago

            Right so I guess the question of 3 is whether it means 3 backups or 3 copies. If we take it literally - 3 copies, then it does protect from user error only. If 3 backups, it protects against hardware failure too.

            Edit: Seagate calls them copies and explicitly says the implementer can choose how the copies are distributed across the 2 media. The woodchipper scenario would be handled by the 2-media requirement.

  • merthyr1831@lemmy.ml · +3/−1 · 2 days ago

    My current plan once the new migration is completed:

    Primary pool - 1x ZFS (couldn’t afford redundancy, but no different to my RPi server). My goal is to get a few more drives and set up a RAIDZ1/2.

    Weekly backup of critical data (e.g. Nextcloud) from the primary pool to a secondary pool. The goal here is to get a mirror, but it will be only one drive for now.

    Weekly upload of the secondary pool to a Hetzner storage box via rsync.


    Current server:

    • 1x backup to secondary drive (RPi)
    • 1x backup to Hetzner storage box via rsync

  • Eskuero@lemmy.fromshado.ws · 4 points · 2 days ago

    4-2-1-1 for me I guess 🫣 or 4-2-2?

    Two copies at home, synced daily, one of them on an external drive that I like to refer to as the emergency grab-and-run copy lol

    One at a family member’s, synced weekly and manually every time I visit.

    All of those three copies are always within a 10 kilometer radius in a valley overseen by a volcano so…

    One partial copy of the so-critical-would-cry-if-lost data is synced every few days to a Backblaze bucket.

  • potentiallynotfelix@lemmy.fish · 6 points · 3 days ago

    DO NOT follow my lead, my backup solution is scuffed at best.

    3:

    I have:

    • RAID1 array w/ 2 drives
    • Photos on the device that took them
    • Photos on a random old hard drive pulled from an ancient Apple Mac.

    2:

    I’ve got a hard drive and flash memory?

    1:

    Don’t have this at all; the closest is that my phone is off-site half of the day.

  • SirMaple__@lemmy.world · 7 points · 3 days ago

    I use Proxmox Backup Server for my backups. Everything backs up to one system at home. I then sync the datastore to a little NAS I have at a family member’s house across town, and also to a cheap storage VPS on the other side of the country. I also do a manual sync of the datastore to a single external drive that I manually connect and disconnect.

    None of my data hoarding files are backed up as that would cost way too much. That could change if I ever find a killer deal on an LTO8 or better drive and tapes.

    I know that Hetzner has some decently priced Storage Boxes that you can mount using rclone and then back up to. Keep in mind that latency will be a factor, so it could be slow.
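
For reference, an rclone remote for a Storage Box is just an SFTP entry. A hypothetical config (replace uXXXXXX with the box’s actual username; Hetzner uses port 23 for SSH/SFTP on Storage Boxes):

```ini
# Hypothetical ~/.config/rclone/rclone.conf entry; uXXXXXX stands in for
# the Storage Box username
[storagebox]
type = sftp
host = uXXXXXX.your-storagebox.de
user = uXXXXXX
port = 23
key_file = ~/.ssh/id_ed25519
```

Then `rclone sync /path/to/datastore storagebox:backups` mirrors a local folder up, and `rclone mount storagebox: /mnt/box` also works, with the latency caveat above.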

  • Shimitar@downonthestreet.eu · 4 points · 3 days ago

    • 1 backup on a local, independent disk
    • 1 backup on an HDD connected to an OpenWRT router at the other end of the house
    • 1 backup on my remote VPS

    Restic + Backrest

    SFTP for the remote endpoint
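
The restic side of a setup like this is only a few commands (the repo URL and paths are hypothetical; Backrest schedules the same operations behind a web UI). Guarded so it only runs where restic is actually installed:

```shell
# Hypothetical repository on the remote VPS, reached over SFTP
REPO="sftp:user@vps.example.com:/srv/restic-repo"
# Normally read from a password file or manager; placeholder here
export RESTIC_PASSWORD="example-only"

if command -v restic >/dev/null 2>&1; then
  restic -r "$REPO" init                # once, to create the repository
  restic -r "$REPO" backup /home /etc   # each run stores an incremental snapshot
  # Retention: keep 7 dailies and 4 weeklies, then reclaim space
  restic -r "$REPO" forget --keep-daily 7 --keep-weekly 4 --prune
else
  echo "restic not installed; commands shown for illustration"
fi
```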

  • Lem453@lemmy.ca · 4 points · 3 days ago

    All persistent storage from my Docker containers lives in one folder. To back up everything, all I have to do is back up that folder along with my docker compose files (in git).

    Locally there are zfs snapshots (autosnapshot) and for remote I use borgmatic.

    Borg to :

    1. Local server
    2. Friends server
    3. Borgbase
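
Pointing borgmatic at all three destinations is one config file. A hypothetical sketch, using the current flat key format; the paths, hosts, and Borgbase URL are placeholders:

```yaml
# Hypothetical /etc/borgmatic/config.yaml; paths, hosts, and the Borgbase
# repo URL are invented
source_directories:
    - /srv/docker-data        # the one folder with all persistent storage

repositories:
    - path: /backup/borg-local
      label: local
    - path: ssh://backup@friends-server.example.net/./borg
      label: friend
    - path: ssh://abc123@abc123.repo.borgbase.com/./repo
      label: borgbase

# Retention applied automatically on each scheduled run
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```
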
    • m33@theprancingpony.in · 1 point · 2 days ago

        @lka1988 @Lem453 Primarily a frontend tool designed to make your life easier, torsion.org/borgmatic , but I tend to avoid macros, frontend scripts, or even GUIs like this. They may obscure Borg-specific configuration details that, hypothetically, could one day hinder your restoration process.

    • Lem453@lemmy.ca · 1 point · 2 days ago

      It’s automation software for Borg backup: it runs on a schedule and keeps a certain number of backups while deleting old ones, etc.

  • 0x0@programming.dev · 2 points · 2 days ago

    Atm the main system is a ZFS RAIDZ1 on 3 SSDs.
    Weekly-ish backup onto a 1TB external HDD.
    Sync encrypted important stuff to the cloud.
    Syncthing some stuff to my smartphone.

  • Justin@lemmy.jlh.name · 4 points · 3 days ago

    All storage is on a Ceph cluster with 2- or 3-disk/node replication. Files and databases are backed up using Velero and Barman to S3-compatible storage on the same cluster for versioning. Every night, those S3 buckets are synced and encrypted using rclone to a 10 TB Hetzner Storage Box that keeps weekly snapshots.

    Config files in my git repo:

    https://codeberg.org/jlh/h5b/src/branch/main/argo/external_applications/velero-helm.yaml

    https://codeberg.org/jlh/h5b/src/branch/main/argo/custom_applications/bitwarden/database.yaml

    https://codeberg.org/jlh/h5b/src/branch/main/argo/custom_applications/backups

    https://codeberg.org/jlh/h5b/src/branch/main/argo/custom_applications/rook-ceph

    A bit more than 3 copies, but HDD storage is cheap. The majority of my storage is Jellyfin anyway, which doesn’t get backed up.

    I’m working on setting up some small NVMe nodes for the Ceph cluster, which will allow me to move my Nextcloud from HDD storage into its own S3 bucket with 4+2 erasure coding (aka RAID 6). That will make it much faster and will also cut raw storage usage from 4x to 1.5x of usable capacity.