• 0 Posts
  • 12 Comments
Joined 6 months ago
Cake day: March 28th, 2024

  • Anything exposed to the internet will be found by the scanners. Moving ssh off of port 22 doesn’t do anything except make it less convenient for you to use. The scanners will find it, and when they do, they will try to log in.

    (It’s actually pretty easy to write a little script to listen on port 23 (telnet) and collect the default login creds that the worms so kindly share)
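
    For instance, a minimal honeypot sketch of that idea in Python (binding a port below 1024 needs root, and the log file name is just an example):

      #!/usr/bin/env python3
      # Minimal telnet-honeypot sketch: accept connections on port 23 and
      # log whatever "username" and "password" the bots volunteer.
      # Binding port 23 requires root; creds.log is illustrative.
      import socket

      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind(("0.0.0.0", 23))
          srv.listen()
          while True:
              conn, addr = srv.accept()
              with conn, open("creds.log", "a") as log:
                  try:
                      conn.sendall(b"login: ")
                      user = conn.recv(256).decode(errors="replace").strip()
                      conn.sendall(b"Password: ")
                      password = conn.recv(256).decode(errors="replace").strip()
                      log.write(f"{addr[0]} {user!r} {password!r}\n")
                  except OSError:
                      pass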

    The thing that protects you is strong authentication. Turn off password auth entirely and use key-based login with a strong keypair. Disable root login entirely.
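
    In sshd terms that usually boils down to a few lines of config - an illustrative excerpt, not a full hardening guide (exact option names vary a little between OpenSSH versions):

      # /etc/ssh/sshd_config (excerpt)
      PasswordAuthentication no
      KbdInteractiveAuthentication no
      PermitRootLogin no
      PubkeyAuthentication yes

      # on the client: generate a keypair, then copy the public half over
      ssh-keygen -t ed25519 -a 100
      ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host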

    Most self-hosted software is built by hobbyists with some goal, and rock solid authentication is generally not that goal. You should, if you can, put most things behind some reverse-proxy with a strong auth layer, like Teleport.

    You will get lots of advice to hide things behind a vpn. A vpn provides centralized strong authentication. It’s a good idea, but decreases accessibility (which is part of security) - so there’s a value judgement here between the strength of a vpn and your accessibility goals.

    Some of my services (ssh, wg, nginx) are open to the internet. Some are behind a reverse proxy. Some require a vpn connection, even within my own house. It depends on who it’s for - just me, technical friends, the world, or my technically-challenged parents trying to type something with a roku remote.

    After strong auth, you want to think about software vulnerabilities - and you don’t have to think much, because there’s only one answer: keep your stuff up to date.
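
    On a Debian-ish box, for example, that can be as hands-off as enabling unattended upgrades (illustrative; package names differ on other distros):

      sudo apt install unattended-upgrades
      sudo dpkg-reconfigure -plow unattended-upgrades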

    All of the above covers the P in PICERL (pick-uh-rel) for Prepare. I stands for Identify, and this is tricky. In an ideal world, you get a real-time notification (on your phone if possible) when any of these things happen:

    • Any successful ssh login
    • Any successful root login
    • If a port starts listening that you didn’t expect
    • If the system watching for these things goes down (have two systems that watch each other)

    That list could be much longer, but that’s a good start.
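
    As a concrete example, a tiny watcher for the first item could be as simple as tailing the auth log and pushing matches to your phone. Here’s a sketch, assuming a Debian-style /var/log/auth.log and a placeholder ntfy.sh topic; journald-only systems would tail journalctl instead:

      #!/usr/bin/env python3
      # Sketch: watch the auth log for successful ssh logins and send an alert.
      import time
      import urllib.request

      LOG = "/var/log/auth.log"                    # path varies by distro
      TOPIC = "https://ntfy.sh/my-homelab-alerts"  # placeholder notification endpoint

      def notify(line):
          # push the matching log line to a phone via a simple HTTP POST
          req = urllib.request.Request(TOPIC, data=line.encode(), method="POST")
          urllib.request.urlopen(req, timeout=10)

      with open(LOG) as f:
          f.seek(0, 2)  # jump to the end; only react to new lines
          while True:
              line = f.readline()
              if not line:
                  time.sleep(1)
                  continue
              if "Accepted" in line:  # e.g. "Accepted publickey for alice from ..."
                  notify(line.strip())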

    After Identification, there’s Contain + Eradicate. In a homelab context, that’s probably a fresh re-install of the OS. Attacker persistence mechanisms are insane - once they’re in, they’re in. Reformat the disk.

    R is for recover or remediate depending on who you ask. If you reformatted your disks, it stands for “rebuild”. Combine this with L (lessons learned) to rebuild differently than before.

    To close out this essay though, I want to reiterate Strong Auth. If you’ve got strong auth and keep things up to date, a breach should never happen. A lot of people work very hard every day to keep the strong auth strong ;)


  • There is no such thing as easy or hard.

    Give it a try, fuck it up, and give it a try again. Try not to fuck it up in the same way as the first time. Repeat until it works - it will work eventually.

    It took me about 6 hours and 3 disk re-formats my first time. I was particularly bad at it. I barely knew what a disk was, never mind a partition.

    Actually I’m still not sure what a partition is.

    You’ll do fine :)


  • I’ve been zipping things all day. Because it’s only one blob in the container, and then you can use WEBSITE_RUN_FROM_PACKAGE, which is just about the only way to get azure functions stood up via infra-as-code.

    But whatever unzip thing they use sure isn’t the linux default, because it doesn’t support symlinks. And pnpm uses almost exclusively symlinks, to point to its central package store, so re-installing doesn’t take 8 years like it does with npm.

    But that’s fine, because zip will follow symlinks and bake the actual files in, in place - which is pretty slick. But then azure functions package resolver can’t seem to figure out what the hell is going on, because it’s still putting dependencies in node_modules/.pnpm.

    So we pass --shamefully-hoist (a great name for a flag), which puts all the things at the top level of node_modules, and now zip works, and azure works - but each dependency also comes with its own node_modules, with another symlink to a package that’s already at the top level. So it works, but it’s 10x bigger than it needs to be - 6.4 MB instead of 668 KB.

    Fortunately we can use our build script to populate a .npmrc file, and set node-linker to hoisted, at which point pnpm will mimic npm with no symlinks at all - small, efficient, and dumb enough that the azure functions runtime can figure out how to deal with it.
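
    Roughly, the build step ends up looking something like this (a sketch; file names and the zip exclusions are illustrative):

      # have the build script write the .npmrc so pnpm hoists like npm
      echo "node-linker=hoisted" > .npmrc
      pnpm install --prod
      zip -r function-app.zip . -x ".git/*"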

    It took me 4 hours to debug this mess.

    All that to say, yes, a weighted blanket would be downright delightful right now, but please keep the zip files away from me



  • sandalbucket@lemmy.world to Memes@lemmy.ml · Wall to Wall · 5 months ago

    At least outlook can right click for spellcheck. Wait, actually it can’t do that.

    At least you can download your email attachments to a folder? Wait actually you can’t do that either.

    At least the 15-minute meeting warnings still pop up consistently? Oh. Oh no.

    “Are you sure you want to post this comment? Would you like to upload to sharepoint and send a link instead?”

    No outlook I would not like that, I would never like that


  • Is my file in onedrive? Or on disk? Or is it in sharepoint? Or it could be in a teams chat - but isn’t that just sharepoint? I sent it to Tom also, but it was already in sharepoint because I had sent it to Jim, so it re-named it to something else. Where in sharepoint are my teams files? Or the teams files others have sent me? Is this actually an attachment on my email or is it a “shared link” in disguise?

    I’m not sure what’s real anymore!