• 4 Posts
  • 300 Comments
Joined 4 years ago
Cake day: January 21st, 2021

  • require a separate device that looks like a calculator to use online banking

    To be fair, this actually provides a very high level of security? At least in my experience with AIB (in Ireland) you needed to enter the amount of the transaction and some other core details (maybe part of the recipient’s account number? can’t quite recall). Then you entered your PIN. This signed the transaction, which provides very strong verification that you (via the PIN) authorized that specific transaction on a trusted device that is very unlikely to be compromised (unless you give someone physical access to it).

    It is obviously quite inconvenient, but it provides a huge level of security, unlike this SafetyNet crap, which is currently quite easy to bypass.
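
    Conceptually, the device is doing something like the following sketch: a per-card secret key signs the transaction details you keyed in, so the resulting code only verifies for that exact amount and recipient. (This is a hypothetical illustration, not the bank's actual scheme; real card readers like EMV CAP differ in detail, and the key handling and code format here are assumptions.)

    ```python
    # Hypothetical sketch of a transaction-signing reader. Real schemes (e.g. EMV CAP)
    # differ in detail; this only shows why the code is bound to one transaction.
    import hmac, hashlib

    def sign_transaction(device_secret: bytes, amount: str, recipient_fragment: str, pin_ok: bool) -> str:
        """Return a short code that only verifies for this exact amount/recipient."""
        if not pin_ok:
            raise ValueError("PIN required to unlock the device")
        message = f"{amount}|{recipient_fragment}".encode()
        digest = hmac.new(device_secret, message, hashlib.sha256).hexdigest()
        return digest[:8]  # the device displays a short code; the bank recomputes and compares

    print(sign_transaction(b"per-card-secret", "150.00", "1234", pin_ok=True))
    ```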


  • which is supposed to enforce to run apps in secured phones

    The point of the Google Play Integrity API is to ensure that the user is not in control of their phone, but that one of a small number of megacorps is.

    Can the user pull their data out of apps? Not acceptable. Can the user access the app file itself? Not acceptable. Can the user modify apps? Not acceptable.

    Basically it ensures that the user has no control over their own computing.




  • Just to be clear, it is probably a good thing that YouTube re-encodes all videos. Video formats are highly complex and decoders are prone to security vulnerabilities. By transcoding everything (in a controlled sandbox) YouTube takes most of this risk onto itself and makes it highly unlikely that the resulting video it serves to the general public can exploit any bugs in decoders.

    Plus YouTube serves videos in a variety of formats and resolutions (and now different bitrates within a resolution). So even if they did try to preserve the original encoding where possible, you wouldn’t get it most of the time because there would be a better match for your device.
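
    As a rough illustration of what that re-encode step looks like (the tool, codecs and settings here are assumptions for the sketch, not YouTube’s actual pipeline): the served file is built from freshly decoded frames rather than the uploader’s hand-crafted bytes, and the risky decode runs inside a sandbox.

    ```python
    # Hypothetical re-encode step: decode the untrusted upload and emit a fresh bitstream.
    # Running this inside a sandbox (container/VM/seccomp) contains any decoder exploit
    # hidden in the upload. Codec choices are purely illustrative.
    import subprocess

    def transcode(src: str, dst: str) -> None:
        subprocess.run(
            [
                "ffmpeg", "-i", src,              # decode the untrusted upload
                "-c:v", "libx264", "-crf", "23",  # re-encode video from the decoded frames
                "-c:a", "aac", "-b:a", "128k",    # re-encode audio
                dst,
            ],
            check=True,
        )

    transcode("upload.mkv", "served.mp4")
    ```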


  • In my experience it doesn’t matter whether there is an “Enhanced Bitrate” option or not. My assumption is that around the time they added this option they dropped the regular 1080p bitrate for all videos. However, they likely didn’t eagerly re-encode old videos. So old videos still look OK at “1080p”, but newer videos look like trash whether or not the “1080p Enhanced Bitrate” option is available.



  • I’m pretty sure that YouTube has been compressing videos harder in general. This loosely correlates with the release of their “1080p Enhanced Bitrate” option. But even 4K videos seem to have gotten worse to my eyes.

    Watching at a higher resolution is definitely a valid strategy. Optimal video compression is very complicated, and while compressing at the native resolution is more efficient, you can only go so far with fewer bits. Since the higher resolution versions have higher bitrates they just fundamentally have more data available and will give an overall better picture. If you are worried about possible fuzziness you can try using 4K rather than 1440p, as it is a clean doubling of 1080p, so you won’t lose any crisp edges.
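
    To make the “clean doubling” point concrete (using the standard 16:9 resolutions):

    ```python
    # Scale factors between 1080p and common higher resolutions. 4K is an exact 2x
    # per dimension, so scaling between it and 1080p preserves crisp edges; 1440p
    # is a fractional 1.33x, which forces interpolation and can look soft.
    native = (1920, 1080)
    targets = {"1440p": (2560, 1440), "4K": (3840, 2160)}

    for name, (w, h) in targets.items():
        print(f"{name}: {w / native[0]:.2f}x horizontal, {h / native[1]:.2f}x vertical")
    # 1440p: 1.33x horizontal, 1.33x vertical
    # 4K: 2.00x horizontal, 2.00x vertical
    ```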




  • To put it another way, you want to be using all of your RAM and swap. It becomes a problem if you are frequently reading from swap. (Writing isn’t usually as much of an issue, as those may be proactive writes done in case more memory needs to be freed up.)

    Basically, a perfect OS would use RAM + swap such that the fewest disk reads need to be issued. This can mean swapping out some idle anonymous memory so that the space can be used as disk cache for some hotter data.

    In this screenshot the OS decided that it was better to swap out 3 GiB of something and use that space for the disk cache (“Cached”). It is likely right about this decision (but it isn’t always).

    3 GiB does seem a bit high. But if you have lots of processes running that are using memory but are mostly idle, it could definitely happen. For example, in my case I often have lots of language servers running in my IDE, but many of them are for projects that I am not actively looking at, so they are just waiting for something to happen. These often take lots of memory and it may make sense to swap them out until they are used again.
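
    If you want to check whether swap is actually hurting, the thing to watch is the rate of swap-ins (pages read back from swap) rather than just how much swap is used. A minimal sketch on Linux, reading the pswpin/pswpout counters from /proc/vmstat:

    ```python
    # Print swap-in/out rates on Linux. Sustained swap-ins are the symptom that hurts;
    # swap-outs alone are often just the kernel proactively writing out idle pages.
    import time

    def read_vmstat() -> dict:
        with open("/proc/vmstat") as f:
            return {key: int(value) for key, value in (line.split() for line in f)}

    prev = read_vmstat()
    while True:
        time.sleep(5)
        cur = read_vmstat()
        print(
            f"swap-in: {(cur['pswpin'] - prev['pswpin']) / 5:.1f} pages/s, "
            f"swap-out: {(cur['pswpout'] - prev['pswpout']) / 5:.1f} pages/s"
        )
        prev = cur
    ```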



  • I switched to Immich recently and am very happy.

    1. Immich’s face detection is much better and very rarely fails, especially for non-white faces. But even for white faces PhotoPrism regularly needed me to review the unmatched faces. I also needed to really turn up the “what is a face” threshold because otherwise it would miss a ton of clear faces. (Then it only missed some, but also had tons of false positives.) On the other hand Immich just works.
    2. Immich’s UI is much nicer overall, with lots of small affordances. For example the “view in timeline” menu item is worth the switch alone. Also, good riddance to PhotoPrism’s persistent and buggy selection. Someone must have worked really hard on implementing it, but it was just a bad idea.
    3. Immich has an app with uploading, and it lets you view local and uploaded photos in one interface, which is a huge UX win. I couldn’t find a good Android app for uploading to PhotoPrism. You could set up import delays and such, but you would still regularly get partially uploaded files imported and have to clean them up manually.
    4. Immich’s search by content is much better. For example searching for “cat with red and yellow ball” was useless on PhotoPrism, but I found tons of the results I was looking for on Immich.

    The bad:

    1. There is currently terrible jank in the Immich app which makes videos unusable and everything else painful. Apparently this is due to some album sync process running on the main thread. They are working on it. I can’t fathom how a few hundred albums causes this much lag, but 🤷 There is also even worse lag on the location view page, but at least that is just one page.
    2. The Immich app has a lot fewer features than the website. But the website works very well on mobile, so even just using the website (and the app for uploading) is better than PhotoPrism here. The fundamentals are good; it just needs more work.
    3. I liked PhotoPrism’s advanced filters. They were very limited but at least they were there.
    4. Not being able to sort search results by date is a huge usability issue. I often know roughly when the photo I want to find was taken and being able to order by date would be hugely helpful.
    5. You have to eagerly transcode all videos. There is no way to clean up old transcodes and re-transcode on the fly. To be fair, the PhotoPrism story also wasn’t great, because you had to wait for the full video to be transcoded before playback started, leading to a huge delay for videos longer than a few seconds, but at least I could save a few hundred gigs of disk space.

    Honestly a lot of stuff in PhotoPrism feels like one developer has a weird workflow and optimized it for that. Most of it runs counter to what I actually want to do (like automatic title and description generation, the review stuff, or the auto quality rating). Immich is very clearly inspired by Google Photos and takes a lot of things directly from it, but that matches my use case way better. (I was pretty happy with Google Photos until they started refusing to give access to the originals.)


  • Most Intel GPUs are great at transcoding: reliable, widely supported, and quite a bit of transcoding power for very little electrical power.

    I think the main thing I would check is what formats are supported. If the other GPU supports newer formats like AV1, it may be worth it (if you want to store your videos in these more efficient formats, or you have clients who can consume them and will appreciate the reduced bandwidth).

    But overall I would say that if you aren’t having any problems, there’s no need to bother. The onboard graphics are simple and efficient.
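
    One quick way to check what the hardware can actually encode and decode on Linux is to look at the VAAPI profiles the driver exposes. A rough sketch using the vainfo tool (the output format varies a bit by driver, so treat this as an illustration rather than a robust detector):

    ```python
    # Rough check of hardware codec support via VAAPI on Linux. Requires `vainfo`.
    # "VLD" entrypoints indicate decode support, "Enc" entrypoints indicate encode.
    import subprocess

    out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout
    lines = out.splitlines()

    for codec in ("H264", "HEVC", "AV1"):
        decode = any(codec in line and "VLD" in line for line in lines)
        encode = any(codec in line and "Enc" in line for line in lines)
        print(f"{codec}: decode={decode}, encode={encode}")
    ```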



  • There are three parts to the whole push system.

    1. A push protocol. You get a URL and post a message to it. That message is E2EE and gets delivered to the application.
    2. A way to acquire that URL.
    3. A way to respond to those notifications.

    My point is that part 1 is the core: it is already available across devices (including over Google’s push notification system), and making custom push servers is very easy (see the sketch below). It would make sense to keep that interface but provide alternatives to 2 and 3. This way browsers can use the JS API for 2 and 3, while other apps can use a different API. The push server and the app server can remain identical across browsers, apps and anything else. This provides compatibility with the currently reigning system, the ability to provide tiny shims for people who don’t want to self-host, and still maintains the option to fully self-host as desired.
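
    For illustration, here is roughly what posting to such a push URL looks like from the app server side, using the pywebpush library. The endpoint and keys are placeholders that would come from whatever mechanism implements part 2 (the browser’s JS API, a UnifiedPush-style app, etc.):

    ```python
    # Minimal sketch of part 1: the app server posts an encrypted message to the push
    # URL it was handed. The endpoint and keys below are placeholders obtained via
    # part 2; the payload is encrypted so the push server can't read it (E2EE).
    from pywebpush import webpush

    subscription_info = {
        "endpoint": "https://push.example.com/push/abc123",  # the per-client push URL
        "keys": {
            "p256dh": "<client public key>",   # used to encrypt the payload
            "auth": "<client auth secret>",
        },
    }

    webpush(
        subscription_info,
        data="new message in your inbox",
        vapid_private_key="<app server VAPID private key>",
        vapid_claims={"sub": "mailto:admin@example.com"},
    )
    ```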



  • IMHO UnifiedPush is just a poor re-implementation of WebPush, which is an open and distributed standard that supports E2EE (and in the browser requires it, so support is universal).

    UnifiedPush would be better as a framework for WebPush providers and a client API, but using the same protocol and backends as WebPush (how you get a WebPush endpoint is defined as a JS API in browsers, so that part would need to be adapted).