• UnderpantsWeevil@lemmy.world · 2 months ago

    LLMs have been foundational to search engines going back to the 90s. Sam Altman is simply doing a clever job of marketing them as something new and magical.

    • voracitude@lemmy.world · 2 months ago

      You’re thinking of Machine Learning and neural networks. The first “L” in LLM stands for “Large”; what’s new about these particular neural networks is the scale at which they operate. It’s like saying a modern APU from 2024 is equivalent to a Celeron from the late 90s; technically they’re in the same class, but one is much more complicated and powerful than the other.
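      To put rough numbers on that scale difference: a 90s-style neural net might have tens of thousands of parameters, while GPT-3 is publicly documented at roughly 175 billion. Here’s a back-of-the-envelope sketch; the small-network shape is an illustrative assumption, not a specific historical model.

      ```python
      # Back-of-the-envelope parameter counts: the "Large" in LLM.
      # The small-network shape below is an illustrative assumption,
      # not a specific published 90s model; the GPT-3 figure is its
      # publicly documented parameter count.

      def mlp_params(layer_sizes):
          """Total weights + biases for a fully connected network."""
          return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

      small_net = mlp_params([256, 64, 10])  # 90s-style classifier: ~17k params
      gpt3 = 175_000_000_000                 # GPT-3 (2020): ~175B params

      print(f"90s-style net: {small_net:,} parameters")
      print(f"GPT-3:         {gpt3:,} parameters")
      print(f"Ratio:         ~{gpt3 // small_net:,}x")
      ```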

      • UnderpantsWeevil@lemmy.world · 2 months ago

        “what’s new about these particular neural networks is the scale at which they operate.”

        Sure, they’re larger language models, although they also (ostensibly) have better parsing and graphing algorithms around them.

        It’s the marriage of sophistication and scale that makes these things valuable. But it’s like talking about skyscrapers. Whether it’s the Eiffel Tower, the WTC, or the Burj Khalifa, we’re still talking about concrete and steel.

        “It’s like saying a modern APU from 2024 is equivalent to a Celeron from the late 90s; technically they’re in the same class, but one is much more complicated and powerful than the other.”

        I’d sooner compare it to a Cray from the 90s than to a budget chip like the Celeron.

        But imagine someone insisting we didn’t have supercomputers until 2020 because that’s when TSMC started cranking out 5nm chips in earnest.