If it were me, I would do:
- /dev/md0 (/boot on SSD RAID1) mounted to /boot
- /dev/md1 (LVM PV on SSD RAID1)
- /dev/md2 (LVM PV on HDD RAID1)
- vgssd mapped to md1
- swap lv on vgssd mapped as swap
- root lv on vgssd mounted to /
- var lv on vgssd mounted to /var
- varaudit lv on vgssd mounted to /var/audit
- home lv on vgssd mounted to /home
For these you can make the volumes smaller than the SSD and grow them on the fly pretty easily later on as you need more space in home, root, or var.
- vghdd mapped to md2
- storage lv on vghdd mounted to /storage
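The layout above could be sketched roughly like this. Device names (/dev/sda*, /dev/sdb* for the SSDs, /dev/sdc*, /dev/sdd* for the HDDs) and sizes are placeholders; adjust for your hardware:

```shell
# RAID1 arrays: boot, SSD PV, HDD PV (member devices are examples)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

# LVM on top of the arrays
pvcreate /dev/md1 /dev/md2
vgcreate vgssd /dev/md1
vgcreate vghdd /dev/md2

# Logical volumes (sizes are placeholders; leave free space in vgssd
# so volumes can be grown later)
lvcreate -L 8G  -n swap vgssd
lvcreate -L 30G -n root vgssd
lvcreate -L 20G -n var  vgssd
lvcreate -L 10G -n varaudit vgssd
lvcreate -L 50G -n home vgssd
lvcreate -l 100%FREE -n storage vghdd
```

Growing later is a one-liner per volume, e.g. `lvextend -r -L +20G vgssd/home` (`-r` resizes the filesystem along with the LV).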
You could format all the data logical volumes btrfs if you’d like and encrypt with LUKS; that’s the Synology-recommended config. I’ve also used xfs on my important volumes with no issues.
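A minimal sketch of the LUKS-then-btrfs stacking for one data volume, using the storage LV from above (the mapper name `storage_crypt` is just an example):

```shell
# Encrypt the LV with LUKS, open it, then format the mapped device btrfs
cryptsetup luksFormat /dev/vghdd/storage
cryptsetup open /dev/vghdd/storage storage_crypt
mkfs.btrfs /dev/mapper/storage_crypt
mount /dev/mapper/storage_crypt /storage
```

The same pattern works per-LV for the other data volumes, so each one can have its own passphrase/key.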
Because familiarity and stability? Sounds like btrfs ("butter") RAID1 wouldn’t be a bad option either, though:
https://unix.stackexchange.com/questions/480391/btrfs-raid-1-vs-mdadm-raid-1#:~:text=BTRFS has better data safety,parallelizing writes and striping reads.
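For comparison, btrfs-native RAID1 skips the mdadm/LVM layers entirely; a rough sketch (devices are examples):

```shell
# Mirror both data and metadata across the two disks with btrfs itself
mkfs.btrfs -m raid1 -d raid1 /dev/sdc /dev/sdd
mount /dev/sdc /storage

# btrfs can then verify checksums and self-heal from the good copy
btrfs scrub start /storage
```

The trade-off is exactly what the linked thread discusses: btrfs RAID1 can detect and repair silent corruption via checksums, while mdadm only mirrors blocks.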