• 0 Posts
  • 87 Comments
Joined 1 year ago
Cake day: June 7th, 2023

  • The diver probably has some food on him, which the stingray is trying to get.
    I visited Stingray City in Grand Cayman a number of years back. Part of the tour package was that they gave you small squid to feed to the stingrays, and they would climb right up out of the water onto you for that snack. Also, there were a lot of stingrays in the area. We were instructed to shuffle our feet as we walked, to avoid stepping on one. The swimmer in the picture only needed to hang out for a bit before one or more stingrays would have come over, looking for any handouts.

    That said, the experience of Stingray City was absolutely worth it. Between that, and snorkeling at the barrier reef, I have a lot of fond memories of my time at Grand Cayman.






  • I’ve been using Proxmox professionally for years now, and not once did i have a problem i could not fix myself.

    And how many of the environments you have left behind became an unmanageable mess when the company couldn’t hire someone with your skillset? One of the downsides to this sort of DIY infrastructure is that it creates a major dependency on a specific skillset. That isn’t always bad, but it does create a risk which business continuity planning must take into account. This is why things like OpenShift or even VMware tend to exist (and be expensive). If your wunderkind admin leaves for greener pastures, your infrastructure isn’t at risk if you cannot hire another one. The major, paid-for options tend to have support you can reach out to, and you are more likely to find admins who can maintain them. It sucks, because it means that the big products stay big, because they are big. But, the reality of a business is that continuity in the face of staff turnover is worth the licensing costs.

    This line, from the OP’s post, is kind of telling as to why many businesses choose not to run Proxmox in production:

    It is just KVM libvirt/qemu and corosync along with some other stuff like ZFS.

    Sure, none of those technologies are magic; but, when one of them decides to fuck off for the day, if your admin isn’t really knowledgeable about all of them and how they interact, the business is looking at serious downtime. Hell, my current employer is facing this right now with a Graylog infrastructure. Someone set it up a number of years ago, and it worked quite well. That person left the company and no one else had the knowledge, skills, or time to maintain it. Now that my team (Security) is actually asking questions about the logs it’s supposed to provide, we realize that the neglect is causing problems and no one knows what to do with it. Our solution? Ya, we’re moving all of that logging into Splunk. And boy howdy is that going to cost a lot. But, it means that we actually have the logs we need, when we need them (Security tends to be pissy about that sort of thing). And we’re not reliant on always having someone with Graylog knowledge. Sure, we always need someone with Splunk knowledge. But, that’s a much easier ask. Splunk admins are much more common and probably cheaper. We’re also a large enough customer that we have a dedicated rep from Splunk whom we can email with a “halp, it fell over and we can’t get it up” and have Splunk engineers on the line in short order. That alone is worth the cost.

    It’s not that I don’t think that Proxmox or Open Source Software (OSS) has a place in an enterprise environment. One of my current projects is all about Linux on the desktop (IT is so not getting the test laptop back. It’s mine now; this is what I’m going to use for work.). But, using OSS often carries special risks which the business needs to take into account. And when faced with those risks, the ROI may just not be there for using OSS. Because, when the numbers get run, having software which can be maintained by those Windows admins who are “used to click their way through things” might just be cheaper in the long run.

    So ya, I agree with the OP. Proxmox is a cool option. And for some businesses, it will make financial sense to take on the risks of running a special snowflake infrastructure for VMs. But, for a lot of businesses, the risks of being very reliant on that one person who “not once [had a] problem i could not fix myself”, just isn’t going to be worth taking.


  • I’m glad to see them trying and I really do want to see competition in the digital game storefront space. However, I have zero trust in EA to not try and fuck me as a customer at some point. So ya, no matter how good of a fee structure they offer devs, they will continue to lack the one thing devs actually care about: customers.

    Also, as a Linux gamer, it’s really tough to consider a storefront which doesn’t offer a Linux client. Sure, I might be able to get their app running in Wine. But, at that point, maybe I should just go support the company which is supporting me.


  • What I’m observing, though, is more and more indies filling the void with smaller and cheaper games due to easy access to digital distribution. Not exactly a new take, as it’s been happening for over 15 years now. Interestingly, Epic seems to not take the same stance as Steam does in this space. Where Steam gives pretty much any shovelware the same chance, Epic wants to be super picky about these low-budget titles. Where is Epic’s Balatro?

    This reminds me a lot of the days of the original PlayStation (PS). Nintendo was the large, dominant company. But, they were also really, really picky with the games they let on their platform (still are). Along comes Sony with a better physical format and a willingness to let just about anything on their system. And there were a lot of terrible titles on the PS; but, there were also some real gems from smaller devs and lots more choice for people to find what they wanted to play. That openness and plethora of options drew people to the system. Sure, Nintendo is still around and still a juggernaut, but they gave up a lot of market space to Sony.

    Sweeney and many of the big studios seem dead set on trying to recapture lightning in a bottle. They keep churning out Fortnite clones, live-service games and lootbox-infested grind fests. None of this is because they want to make a game for players; it’s all a bald-faced money grab. And it comes across so clearly in their games. Yes, big budget games cost a lot of money and I don’t begrudge studios trying to make money. I’m more than happy to throw money at devs who make a great game (I just pledged ~$250 to the Valheim Board Game project, based mostly on the fact that I fucking love Valheim). I’ve also bought into way too many Early Access games, because they looked like they had the bones of good games. But, the big budget games seem to get lost trying to pump every last dollar out of your wallet and just quickly become a turn off.

    I remember one particular instance in Dragon Age, where an NPC had a “Quest Available” marker floating above his head. When you talked to him, you quickly discovered that his quest was paid DLC, and the game was happy to kick you over to the EA store so that you could buy it right there. Fuck that noise. I’m not against DLC, but that sort of “in your face” advertising pisses me right off. Hell, I’m one of those weirdos who likes the Far Cry series. I put tons of hours into Far Cry 5 (seriously, the wingsuit was just good fun). Far Cry 6 was OK and I did finish it, though the micro-transaction spam grated on me hard. After that experience, I’m not sure I want a Far Cry 7.

    And I think that points to the elephant in the room. Big publishers, like EA, are so focused on making profits that they have lost sight of making a good game. Give me a solid, complete experience. Give me good controls, enough story to hold the action together and just a general sense of fun. Once that is in place, then maybe throw hats for sale on top of that. But, when lootboxes and micro-transactions are core to the gameplay and the game is balanced to force you in the direction of buying that crap, fuck your game. If the core gameplay is designed to suck so much that I want to buy cheats to bypass it, I’ll save myself a bunch of money and just skip the game entirely. There are way too many options out there which don’t suck for me to waste my time and money shoveling your shit.



  • While I hate the idea of people losing their jobs, stepping back for a moment and looking at what they are claiming, it’s not terribly surprising:

    Spencer said the roles affect mostly corporate and support functions

    When companies merge, this is kinda needed. You don’t need two fully functional HR departments. While the HR staff from the buying company will likely need to expand, it won’t be by the same amount as the HR department of the company being bought. As network functions are merged, you probably don’t need all of the IT staff which came with the merger. A lot of management functions likely end up merged, meaning redundancies. And this sort of thing is going to move through a lot of the non-project work functions of the company.

    Yes, it sucks. But, it’s to be expected in a merger. Now, whether or not we want this level of consolidation is a different ball of wax entirely. The last thing we need is more studios falling under the sway of these massive companies. That’s the thing which should be drawing our ire.




  • There may also be a (very weak) reason around bounds checking and avoiding buffer overflows. By rejecting anything longer than 20 characters, the developer can be sure that there will be nothing longer sent to the back-end code. While they should still be doing bounds checking in the rest of the code, if the team making the UI is not the same as the team making the back-end code, the UI team may see it as a reasonable restriction to prevent a screw-up further down the stack from being exploited. Again, it’s a very weak argument, but I can see such an argument being made in a large organization with lots of teams who don’t talk to each other. Or worse yet, different contractors standing up the front end and back end.
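    To make that concrete, here is a rough sketch of the split I’m describing (all names are hypothetical, not from any particular codebase): the UI enforces a hard cap that mirrors a fixed-size field it assumes exists further down the stack, and the back end still enforces the same limit itself, because nothing stops someone from hitting the API directly with curl.

    ```python
    # Hypothetical sketch: the 20-character cap enforced in two places.
    MAX_FIELD_LEN = 20  # assumed size of a fixed-width field somewhere in the back end

    def ui_validate(value: str) -> str:
        """Front-end check: refuse to send anything longer down the stack."""
        if len(value) > MAX_FIELD_LEN:
            raise ValueError(f"Input must be {MAX_FIELD_LEN} characters or fewer")
        return value

    def backend_store(value: str) -> bytes:
        """Back-end check: never trust that the UI actually did its job."""
        encoded = value.encode("utf-8")
        if len(encoded) > MAX_FIELD_LEN:
            raise ValueError("Over-length input reached the back end")
        # Pad out to the fixed record size the rest of the stack expects.
        return encoded.ljust(MAX_FIELD_LEN, b"\x00")
    ```

    The front-end check only narrows the attack surface; the back-end check is what actually stops an over-length write, which is why leaning on the UI limit alone is the weak part of the argument.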



  • Have you considered just beige boxing a server yourself? My home server is a mini-ITX board from Asus running a Core i5, 32GB of RAM and a stack of SATA HDDs all stuffed in a smaller case. Nothing fancy, just hardware picked to fulfill my needs.

    Limiting yourself to pre-built systems means limiting yourself to what someone else wanted to build. The main downside to building it yourself is ensuring hardware compatibility with the OS/software you want to run. If you are willing to take that on, you can tailor your server to just what you want.






  • No, but you are the target of bots scanning for known exploits. The time between an exploit being announced and threat actors adding it to commodity bot kits is incredibly short these days. I work in Incident Response and seeing wp-content in the URL of an attack is nearly a daily occurrence. Sure, for whatever random software you have running on your normal PC, it’s probably less of an issue. Once you open a system up to the internet and constant scanning and attack by commodity malware, falling out of date quickly opens your system to exploit.
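    For a sense of how noisy that scanning is, here is a rough sketch (the log path and probe patterns are assumptions, not output from any particular tool) that tallies WordPress-looking probes in a standard combined-format access log:

    ```python
    # Rough sketch: count requests that look like commodity WordPress exploit scans.
    import re
    from collections import Counter

    LOG_PATH = "/var/log/nginx/access.log"  # assumed combined log format
    WP_PROBE = re.compile(r"wp-content|wp-login\.php|xmlrpc\.php", re.IGNORECASE)

    hits = Counter()
    with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
        for line in log:
            if WP_PROBE.search(line):
                # First field in the combined log format is the client IP.
                hits[line.split(" ", 1)[0]] += 1

    for ip, count in hits.most_common(10):
        print(f"{ip}\t{count} WordPress-looking probes")
    ```

    Run something like that against a host that isn’t even running WordPress and you will usually still get hits, which is the point: the bots don’t know or care what you are actually hosting, they just spray known exploits at everything.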