Hear hear! You don’t own a backup if you’ve never restored it before. Words to live by both in corporate and self-hosting environments.
Ironically, if I had been running more services in Docker I might not have experienced such a fundamental outage. Since Docker services usually spin up their own exclusive database engine, you kind of “roll the dice” on data corruption with each Docker service individually. Thing is, I don’t really believe in bleeding CPU cycles on redundant database services. And since many of my services are very long-serving, they were set up from source and all funneled towards a single, central and busy database server - so if that one suffers a sudden outage (for instance a power failure), all kinds of corruption and despair can arise. ;-)
Guess I should really look into a small UPS and automated shutdown. On top of better backup management of course! Always the backups.
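For what it’s worth, a minimal sketch of what that automated shutdown could look like with Network UPS Tools (NUT), assuming a USB-connected UPS - the UPS name `myups` and the credentials are placeholders, not anything from my actual setup:

```
# /etc/nut/ups.conf - declare the UPS (usbhid-ups covers most USB models)
[myups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsmon.conf - shut the host down cleanly when the battery runs low
# (older NUT versions use "master" instead of "primary")
MONITOR myups@localhost 1 upsmon_user secret primary
SHUTDOWNCMD "/sbin/shutdown -h +0"
```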
You’re quite bold - I like it ;-) In all honesty, is your requirement mounting an NFS share? As indicated by @chris, it really is designed for the local network.
How about using something better suited, like a WebDAV share/mount?
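Something along these lines, assuming the davfs2 package is installed - the server URL and mount point are purely illustrative:

```
# Mount a WebDAV share over HTTPS (credentials go in /etc/davfs2/secrets)
sudo mount -t davfs https://cloud.example.com/remote.php/dav/files/pete/ /mnt/dav

# Or make it persistent via /etc/fstab:
# https://cloud.example.com/remote.php/dav/files/pete/ /mnt/dav davfs rw,user,noauto 0 0
```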
At least weekly mysqlcheck + mysqldump, plus some form of periodic off-machine storage of the dumps, is something I’ll surely take to heart after this lil’ fiasco ;-) Sound advice, thank you!
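For anyone landing here later, a rough sketch of what that routine could look like - paths, credentials and the remote host are placeholders, and `--single-transaction` assumes InnoDB tables:

```
#!/bin/sh
# Weekly integrity check + logical dump, then copy off-machine.
# Run from cron, e.g.: 0 3 * * 0 /usr/local/sbin/db-backup.sh
# Assumes credentials are provided via ~/.my.cnf
set -e

DUMP="/var/backups/mysql/all-$(date +%F).sql.gz"

# Verify tables before dumping so a corrupt table doesn't go unnoticed
mysqlcheck --all-databases --silent

# Consistent dump without locking InnoDB tables for the duration
mysqldump --all-databases --single-transaction --routines --events | gzip > "$DUMP"

# Ship the dump to another machine (rsync over SSH)
rsync -a "$DUMP" backupuser@backuphost:/srv/backups/mysql/
```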