• 0 Posts
  • 91 Comments
Joined 1 year ago
Cake day: June 6th, 2023




  • In my very limited experience with my 5400rpm SMR WD disk, it’s perfectly capable of writing at over 100 MB/s until its cache runs out, then it pretty much dies until it has time to properly write the data, rinse and repeat.

    40 MB/s sustained is weird (but maybe it’s just different firmware? I think my disk was able to sustain 60 MB/s for a few hours when I limited the write speed; 40 could be a conservative setting that doesn’t even slowly fill the cache)



  • Then what’s the meaning of this whole part?

    On non-corpo linux syslog can be disabled if you want, though I’d prefer to just symlink/mount /var/log to a memory filesystem instead.

    Is it just a random tidbit that could be replaced with a blueberry muffin recipe without changing the meaning of the whole comment? Because it sure won’t help OP at all with their Arch-specific question, so it’s either that, or it provides contrast to the “corpo Linux”, which is how I interpreted it.

    And here’s the remaining part of your comment that I left out, just to make sure people won’t lose the context between two three-sentence-long comments (for those without any attention span, it comes before the previously quoted part):

    If you’re on arch you use redhat’s garbage.





  • How is it open source?

    How is it not? Open source doesn’t mean you have to accept other people’s code. And it’s perfectly valid to only publish a code dump for each release - even some GNU projects (like GCC) used to work that way. Hell, there’s even a book about the two different approaches in open source.

    So whatever benefit you were hoping to get from Nvidia’s kernel modules being open source probably is not there.

    It allowed the actual in-tree nouveau kernel module to take the code for interacting with the GSP firmware, which makes it possible to change the GPU clock speed - in other words, no more being stuck at the lowest possible frequency like with the GTX 10 series cards. Seems like a pretty decent benefit to me.


  • Vista’s problem was just the terrible third-party drivers and the fact that it was preinstalled on machines it had no business running on. 7 didn’t improve much on it (except fixing the UAC prompt so that it no longer made you feel like you were using Linux with a misconfigured sudo timeout), but it had the benefit of already having working drivers from Vista and proper hardware capable of running Vista/7.


  • Zig didn’t come to my mind when I was writing my comment and I agree that it’s probably a decent option (the only issue I can think of is its somewhat small community, but that’s not a technical issue with the language).

    My argument against Go and Java is garbage collection - even if Java’s infamous GC pauses can apparently be worked around with a specialized JVM, I’m pretty sure that still comes at the cost of higher memory usage and wasted CPU cycles compared to some kind of reference counting or Rust’s ownership mechanism (not sure about the proper term for that). And higher memory usage is definitely not something I want to see in my browser - they’re hungry enough as is.




  • Probably a bit of a TL;DR of the other answer, but the short answer is: the execute bit has a different meaning for directories - it allows you to keep going down the filesystem tree (open a file or another directory inside the directory). The read bit only allows you to see the names of the files in the directory (and maybe some other metadata), but you cannot open them without the x bit.

    Fun fact: it makes sense to have a directory with --x or -wx permissions - you can access the files inside if you already know their names.
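    A quick sketch of that fun fact in Python (assuming a non-root user on a Unix-like system, since root bypasses permission checks; the directory and file names here are made up):

    import os
    import tempfile

    base = tempfile.mkdtemp()
    d = os.path.join(base, "execute_only")
    os.mkdir(d)
    with open(os.path.join(d, "known.txt"), "w") as f:
        f.write("hello\n")

    os.chmod(d, 0o100)  # --x------ : execute bit only, no read bit

    try:
        os.listdir(d)  # listing names requires the read bit -> PermissionError
    except PermissionError:
        print("cannot list the directory")

    # Opening a file whose name you already know only needs the execute bit.
    with open(os.path.join(d, "known.txt")) as f:
        print("but can still read known.txt:", f.read().strip())

    os.chmod(d, 0o700)  # restore permissions so the temp dir can be removed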

    Edit: not a short answer, apparently


  • You can now turn on the “autoscrolling” feature of the Libinput driver, which lets you scroll on any scrollable view by holding down the middle button of your mouse and moving the whole mouse

    Am I crazy, or did this use to be a feature? And not just in Firefox

    It’s a Windows feature that never really made it to Linux. I used to miss it, but honestly, middle-click paste feels way more useful to me now




  • import time
    from Crypto.Protocol.KDF import scrypt  # PyCryptodome

    # Assumed values - the original script defines these constants elsewhere.
    SCRYPT_N = 2**14
    SCRYPT_R = 8
    SCRYPT_P = 1
    SCRYPT_KEY_LEN = 32


    def generate_proof_of_work_key(initial_key, time_seconds):
        # Encryption side: keep re-deriving the key until the wall-clock deadline passes.
        proof_key = initial_key
        end_time = time.time() + time_seconds
        iterations = 0
        while time.time() < end_time:
            proof_key = scrypt(proof_key, salt=b'', N=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P, key_len=SCRYPT_KEY_LEN)
            iterations += 1
        print(f"Proof-of-work iterations (save this): {iterations}")
        return proof_key


    def generate_proof_of_work_key_decrypt(initial_key, iterations):
        # Decryption side: replay the same derivation chain a fixed number of times.
        proof_key = initial_key
        for _ in range(iterations):
            proof_key = scrypt(proof_key, salt=b'', N=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P, key_len=SCRYPT_KEY_LEN)
        return proof_key

    The first function is used during the encryption process, and the while loop clearly runs until the specified time duration has elapsed. So encryption takes 5 days no matter how fast your computer is, and to decrypt, you have to redo the same number of iterations your computer managed in that time. If you do the decryption on the same computer, it should take a similar amount of time, but a different computer that’s faster at these operations will decrypt it faster.
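    A minimal round-trip sketch of how the two functions fit together (the 10-second duration and the iteration count 37 are made up for illustration, not from the script):

    initial_key = b"key material derived from the password"

    # Encryption side: burn 10 seconds of scrypt chaining (5 days in the real script).
    proof_key = generate_proof_of_work_key(initial_key, time_seconds=10)
    # Suppose it printed: Proof-of-work iterations (save this): 37

    # Decryption side: replaying the saved count reproduces the identical key,
    # so a faster computer finishes the same 37 iterations sooner.
    same_key = generate_proof_of_work_key_decrypt(initial_key, iterations=37)
    assert same_key == proof_key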


  • It’s a very short Python script and I’m confident I get the general idea - there’s absolutely nothing related to the current time in the decryption process. What they refer to as a “time lock” is just encrypting the key in a loop (so the encrypted key from one iteration becomes the plaintext for the next) for the specified duration and then telling you how many iterations were done. That number then becomes the second part of the password - to decrypt, you simply provide the password and the number of iterations; nothing else matters.