Just this guy, you know?

  • 0 Posts
  • 72 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • zaphod@lemmy.ca to Lemmy Shitpost@lemmy.world · Please Stop · 8 months ago

    You didn’t actually read the page you linked to, did you?

    Let’s just jump to the conclusion:

    This author believes it is technologically indefensible to call Fossil a “blockchain” in any sense likely to be understood by a majority of those you’re communicating with. Using a term in a nonstandard way just because you can defend it means you’ve failed any goal that requires clear communication. The people you’re communicating your ideas to must have the same concept of the terms you use.

    (Emphasis mine)

    Hint: a blockchain is always a Merkle tree, but a Merkle tree is not always a blockchain.
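    To make that distinction concrete, here’s a rough sketch of my own (plain Python with hashlib; the names merkle_root and add_block are just illustrative, not anything from Fossil or Bitcoin): a Merkle tree is nothing more than a hash tree over some data, while a blockchain additionally hash-links blocks together, each block committing to a Merkle root of its contents plus the previous block’s hash.

    ```python
    import hashlib

    def sha(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves: list[bytes]) -> bytes:
        """A plain Merkle tree: hash the leaves pairwise until a single root remains."""
        level = [sha(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:              # duplicate the last node on odd-sized levels
                level.append(level[-1])
            level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def add_block(chain: list[dict], txs: list[bytes]) -> None:
        """A blockchain block commits to a Merkle root AND the previous block's hash."""
        prev_hash = chain[-1]["hash"] if chain else b"\x00" * 32
        root = merkle_root(txs)
        chain.append({"prev": prev_hash, "root": root, "hash": sha(prev_hash + root)})

    # A Merkle tree on its own: just an integrity structure, no ordering, no chain.
    print(merkle_root([b"a", b"b", b"c"]).hex())

    # A blockchain: Merkle trees inside blocks, plus the hash link between blocks.
    chain: list[dict] = []
    add_block(chain, [b"tx1", b"tx2"])
    add_block(chain, [b"tx3"])
    print(chain[1]["prev"] == chain[0]["hash"])  # True: the link is what makes it a chain
    ```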


  • zaphod@lemmy.ca to Lemmy Shitpost@lemmy.world · Please Stop · 8 months ago

    the technology itself has its use cases.

    Cool.

    Name one successful example.

    I mean, it’s been, what, 15 years of hype? Surely there must be a successful deployment of a commercially viable and useful blockchain that isn’t just a speculative cryptocurrency or derivative thereof, right?

    Right?


  • zaphod@lemmy.ca to Programmer Humor@lemmy.ml · This is painfully true · 9 months ago

    There are more beginners than there are experts, so in the absence of research a beginner UI is a safer bet.

    If you’re in the business of creating high-quality UX, and you’re building a UI without even the most basic research (understanding your target user), you’ve already failed.

    And yes, if you define “beginner” to be someone with expert training and experience, then an expert UI would be better for that “beginner”. What a strange way to define “beginner”, though.

    If I’m building a product that’s targeting software developers, a “beginner” has a very different definition than if I’m targeting grade school children, and the UX considerations will be vastly different.

    This is, like, first principles of product development stuff, here.


  • zaphod@lemmy.ca to Programmer Humor@lemmy.ml · This is painfully true · 9 months ago

    Unless you’ve actually done the user research, you have no idea whether a “beginner friendly UX” is a safer bet. It’s just a guess. Sometimes it’s a good guess. Sometimes it’s not. The correct answer is always “it depends”.

    Hell, whether or not a form full of fields is or isn’t “beginner” friendly is even debatable, given the word “beginner” is context-specific. Without knowing who that user is, their background, their training, and the work context, you have no way of knowing for sure. You just have a bunch of assumptions you’re making.

    As for the rest, human data entry that cannot be automated is incredibly common, regardless of your personal feelings about it. If you’ve walked into a government office, healthcare setting, legal setting, etc., and had someone ask you a bunch of questions, you might be surprised to hear that the odds are very good that person was punching your answers into a computer.



  • zaphod@lemmy.ca to Programmer Humor@lemmy.ml · This is painfully true · 9 months ago

    That third screenshot, assuming good keyboard navigation, would likely be a godsend for anyone actually using it every day for regular data entry (well, okay, not without fixes, e.g. splitting the SSN and telephone number into separate text boxes is terrible).

    This same mindset is what led Tesla to replace all their driver-friendly indicators and controls with a giant shiny touchscreen that is an unmitigated disaster for actual usability.


  • Hah I… think we’re on the same side?

    The original comment was justifying unregulated and unmitigated research into AI on the premise that it’s so dangerous that we can’t allow adversaries to have the tech unless we have it too.

    My claim is AI is not so existentially risky that holding back its development in our part of the world will somehow put us at risk if an adversarial nation charges ahead.

    So no, it’s not harmless, but it’s also not “shit this is basically like nukes” harmful either. It’s just the usual, shitty SV kind of harmful: it will eliminate jobs, increase wealth inequality, destroy the livelihoods of artists, and make the internet a generally worse place to be. And it’s more important for us to mitigate those harms, now, than to worry about some future nation state threat that I don’t believe actually exists.

    (It’ll also have plenty of positive impact, but that’s not what we’re talking about here.)




    You don’t need AI for any of that. Determined state actors have been fabricating information and propagandizing the public, Mechanical Turk style, for a long, long time now. When you can recruit thousands of people as cheap labour to make shit up online, you don’t need an LLM.

    So no, I don’t believe AI represents a new or unique risk at the hands of state actors, and therefore no, I’m not so worried about these technologies landing in the hands of adversaries that I think we should abandon our values or beliefs Just In Case. We’ve had enough of that already, thank you very much.

    And that’s ignoring the fact that an adversarial state actor having access to advanced LLMs isn’t somehow negated or offset by us having them, too. There’s no MAD for generative AI.