Startup is definitely more stressful for the motor. It’s a period of high inrush current that also creates hot spots in the windings. It’s certainly not great for the motor.
Regular Fedora is more than stable enough for day-to-day use. I’d start there and see with use whether it’s a good fit.
I may understand “opinionated” differently from you, but the main issue is that when you do want to change something, you can’t. Or it’s some unsupported hack, or (best case) you flip some hidden configuration variable (that will probably break with the next release).
KDE is well configured from the get-go as well; you don’t have to change anything and it will work well. But if you do decide that you don’t like some of their defaults, you can tweak many aspects of it.
It wouldn’t really be an issue if you didn’t need an extension for every single basic functionality…
Because of how stupidly opinionated Gnome is, I switched to KDE a year or so ago and have been extremely happy with it. And what do you know, I don’t even need any extensions, because sane stuff like tray icons is built in.
I do use an extension for distributing windows into custom areas, though, and it didn’t even break across the (I believe) two major updates since I started using it.
That’s technically true, but the apps “everyone” has are the exact opposite of that, and people are used to it and don’t really seem to complain. So if Facebook, TikTok, Twitter, Amazon, Spotify, and AliExpress each do their own (garbage) thing, it shows other brands they can do that too, and they kinda ruin it for everyone. Basically, the apps you spend the most time in are probably like that, and it’s a shitty experience.
…to be fair browsers don’t really make sense for streaming, but you could call it “future proofing”.
I don’t think dual boot has ever been a good solution (unless you also run one or both of the OSes under the other in a VM).
Like, if you are unsure about Linux, trying it out, learning, whatever, you can just boot a live “CD”, or maybe install it on an external (flash) drive.
If you are fairly sure you want to switch, just nuke Windows; it’s easier to switch that way than to keep everything on two systems and constantly hop between them.
This means that it is impossible for them to make a patch or PR, because it would conflict with the project’s licence and the fact that it’s open source.
That’s not how it works. It just means the company owns the code for all intents and purposes, which also means that if they tell you that you can release it under a FOSS license / contribute to someone else’s project, you can absolutely do that (they effectively grant you the license to use “their” code that you wrote under a FOSS license somewhere else).
That’s never going to happen, and the reasons are twofold:
1. Brands want to push their own style on people, to make themselves recognizable, and to push their ideas about UX onto their users (because they obviously know better than the OS/DE/compositor/whatever people).
2. It’s easier and cheaper to build a web app, because there are so many web developers. It also usually lets you give an “app” to the people who want one, while giving a (perhaps somewhat limited) browser version to everyone else, reaching the maximum number of users while maintaining only a single codebase and keeping everything more or less cohesive and looking the same.
AGPL, to prevent it from being streamed without the code being shared.
That sounds very illegal, yeah. You can’t advertise one price and then charge something different. It doesn’t matter that the person didn’t notice it. At that point you might as well not have price tags at all (which is also illegal, just FYI).
That’s only true in theory, and only if you are actually capable of doing that.
The reality is that most software was barely working even when it was written; it’s poorly documented, and if you try to work on it without any help you might as well write it yourself from scratch.
You will also encounter incompatibilities, missing dependencies, etc.
Don’t get me wrong, I love FOSS, I know all the advantages and it’s definitely better than the alternative. But it’s also not a silver bullet. Though this case is pretty cut and dry.
…as opposed to open source software, which will be maintained and updated forever, and there will always be people to work on it for free. /s
It generates code, and then you can call some runtime execution API to run that code, completely separately from the neural network.
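A minimal sketch of that split, assuming the model just hands back source text (generate_code() here is a hypothetical stand-in, not any particular API):

    # The model only produces text; running it is a completely separate step.
    def generate_code(prompt: str) -> str:
        # Hypothetical stand-in: pretend this string came back from the model.
        return "result = sum(range(1, 11))"

    source = generate_code("sum the numbers 1 through 10")

    # Execution happens entirely outside the network: compile and exec the text.
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    print(namespace["result"])  # 55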
Yes, that’s one option. Then you only have to distribute the certificates and keys.
Or you allow remote access to that DNS server (Bind has a secure protocol for this: RFC 2136 dynamic updates authenticated with TSIG keys) and do the challenge requests and cert generation on some other machine. It depends on what is more convenient for you (the latter is better if you have lots of machines/certs).
Worst case, if someone compromises that DNS server, they can only get certificates issued; they can’t change your actual valuable records, because those are not delegated there.
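For illustration, a sketch of that remote-access path using dnspython (the zone, key name, secret, and server address below are placeholders): the machine doing cert generation pushes the ACME TXT record to the dedicated Bind server with an RFC 2136 dynamic update signed by a TSIG key.

    # Push the ACME challenge record to the cert-only Bind server over
    # RFC 2136 dynamic update, authenticated with a TSIG key.
    import dns.query
    import dns.tsigkeyring
    import dns.update

    # Placeholder key; in practice, use the name/secret generated by tsig-keygen.
    keyring = dns.tsigkeyring.from_text({
        "acme-key.": "c2VjcmV0LWZyb20tdHNpZy1rZXlnZW4=",
    })

    update = dns.update.Update("acme.example.org", keyring=keyring,
                               keyname="acme-key.")
    # Replace (or create) the TXT record the CA will look up during DNS-01.
    update.replace("_acme-challenge.myhost", 60, "TXT", "token-from-the-CA")

    dns.query.tcp(update, "192.0.2.53")  # the delegated, cert-only DNS server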
Life isn’t a zero-sum game where you have to optimize material wealth. Some people do things for others just because they like doing it, because they have the means to do so, or because they simply want to help others.
Sure, there are costs involved, but that’s true for literally everything if you account for opportunity cost. The vast majority of people choose to waste time completely unproductively, with no objective benefit to their lives (often with objective disadvantages), so is it hard to imagine that some people aren’t like that and instead choose to help or provide for others, while perhaps getting some other non-material benefit like learning something or just becoming liked within a community?
What you can (and absolutely should) do is DNS delegation. On your main domain you delegate the _acme-challenge subdomains with NS records to a DNS server that does cert generation (and cert generation only). You probably want to run Bind there (since it has decent and fast remote access for changing records, and there is plenty of existing tooling around it). You can still split it into different zones with separate keys (I would suggest one key per certificate, and splitting certificates by where/how they will be used).
You don’t even need to allow remote access beyond the DNS responses if you don’t want to, and that server doesn’t have anything to do with anything else in your infrastructure.
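If you want to sanity-check a setup like that, a quick dnspython sketch (all names below are placeholders) shows that the challenge name is delegated to the cert-only server and that the TXT lookup, which is all the CA does for DNS-01, resolves through it:

    # Verify the delegation from the outside, the same way the CA would see it.
    import dns.resolver

    name = "_acme-challenge.www.example.org"

    # The NS answer should point at the cert-only DNS server, not your main one.
    for rr in dns.resolver.resolve(name, "NS"):
        print("delegated to:", rr.target)

    # The actual DNS-01 validation is just this TXT lookup.
    for rr in dns.resolver.resolve(name, "TXT"):
        print("challenge token:", rr.strings)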
Have been for a long time. You just have to use DNS validation (the DNS-01 challenge). But you should do that anyway (and it’s easy) if you want to manage “internal” domains.
…which shouldn’t be an issue in any way. For extra obscurity (and convenience) you can use wildcard certs, too.
…and that’s how it still works.