• earmuff@lemmy.dbzer0.com · 5 months ago

    So what alternatives to ChatGPT exist? I'm currently a premium ChatGPT user and would like to switch to another service. I don't care all that much about privacy, but I will obviously not use OpenAI products anymore.

      • Uriel238 [all pronouns]@lemmy.blahaj.zone · 5 months ago

        LLMs are less magical than upper management wants them to be, which is to say they won't replace the creative staff that makes art, copy, and movie scripts, but they are useful as a tool for those creatives to do their thing. The scary thing was not that LLMs can take tons of examples and create a Simpsons version of Cortana, but that our business leaders are so eager to replace their staff at the slightest promise of automation.

        But yes, LLMs are figuring in advances in science and engineering, including research on treatments for Alzheimer's and diabetes. So it's not just a parlor trick; it simply has different useful applications than the ones originally sold to us.

        The power problem remains an issue, too: LLMs consume a lot of electricity.

        • TheOubliette@lemmy.ml · 5 months ago

          I'm unaware of any substantial research on Alzheimer's or diabetes that has been done using LLMs. As generative models, they're basically just souped-up Markov chains. I think the best you could hope for is something like a meta-study, and probably a bit worse than the usual kind.
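To make the "souped-up Markov chain" analogy concrete, here is a minimal sketch of a first-order word-level Markov chain on a toy corpus. The corpus and function names are made up for illustration; real LLMs condition on far longer contexts through learned parameters, but the sampling idea is the same:

```python
import random
from collections import defaultdict, Counter

def train(corpus):
    """Count word-to-word transitions in a toy corpus."""
    transitions = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        transitions[cur][nxt] += 1
    return transitions

def generate(transitions, start, length=10):
    """Sample a chain: each next word depends only on the current word."""
    word, out = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:
            break  # dead end: no observed continuation
        # weighted pick, proportional to how often each follower was seen
        word = random.choices(list(followers), weights=followers.values())[0]
        out.append(word)
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

The chain can only recombine transitions it has already counted, which is the gist of the objection above.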

          • earmuff@lemmy.dbzer0.com · 5 months ago

            I agree. Things that occur the most in the training data set will have the highest weights/probabilities in the Markov chain. So it is useless for finding the one tiny relation that humans would not see.
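The frequency point can be shown with toy numbers (the token names and counts below are purely hypothetical): in a plain Markov chain, a transition's probability is just its relative count, so a relation seen once in a while gets a vanishingly small weight.

```python
from collections import Counter

# Hypothetical counts of what follows the token "symptom" in a training set
followers = Counter({"common_condition": 9900, "rare_condition": 100})

# Maximum-likelihood estimate: probability = count / total count
total = sum(followers.values())
probs = {word: count / total for word, count in followers.items()}
print(probs)  # the rare relation ends up with probability 0.01
```

Sampling from these probabilities overwhelmingly reproduces the common pattern, which is why such a model surfaces the frequent, not the subtle.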

      • earmuff@lemmy.dbzer0.com · 5 months ago

        I understand the science behind those LLMs, and yes, for my use cases it has been very useful. I use it to cope with emotional difficulties: depression, anxiety, loss. I know it does not help me the way a professional would, but it helps me get another perspective on situations, which then helps me understand myself and others better.

        • TheOubliette@lemmy.ml · 5 months ago

          Oh that’s totally valid. Sometimes we just need to talk and receive the validation we deserve. I’m sorry we don’t have a society where you have people you can talk to like this instead.

          I haven't personally used any of the offline open-source models, but if I were you, that's where I'd start looking. If they can be run inside a virtual machine, you can even use a firewall to ensure nothing leaks.

          • Ilandar@aussie.zone · 5 months ago

            Totally valid? Getting mental health advice from an AI chatbot is one of the least valid use cases. Speak to a real human @earmuff@lemmy.dbzer0.com, preferably someone close to you or who is professionally trained to assist with your condition. There are billions of English speakers in the world, so don’t pretend we live in a society where there’s “no one to talk to”.

            • TheOubliette@lemmy.ml · 5 months ago

              They have already stated that they think they should be speaking to someone but are clearly having a hard time. If a chatbot is helping them right now, I'm not going to lecture them about "pretending". I recommend polite and empathetic nudging when someone is, or may be, in crisis.

              • Ilandar@aussie.zone · 5 months ago

                You literally just encouraged them to continue using a chatbot for mental health support. You didn’t nudge them anywhere.

                • TheOubliette@lemmy.ml · 5 months ago

                  I was going to let them reply first. You are being rude and dismissive of them, however. Please show your fellow humans a bit more empathy.

                  • Ilandar@aussie.zone · 5 months ago

                    There is nothing "dismissive" about offering advice to people who clearly need it. In fact, you are the one who was dismissive of the issue here, by offering a cowardly "feel good" reply instead of opening up and sharing your honest thoughts. Stop tiptoeing around issues and enabling harmful behaviours. Relying on AI chatbots for mental health advice is very dangerous, and it's absolute madness to encourage it as a primary form of treatment when you are seemingly aware of the dangers yourself.

            • earmuff@lemmy.dbzer0.com · 5 months ago (edited)

              I think you need to chill. Please don't be triggered by me having an option that makes me feel better at the end of the day.

              Instead of assuming, you could also just ask. I am using ChatGPT as a complement to a mental health professional. ChatGPT is there 24/7 and helps me with difficult situations immediately; the mental health professional is then there to work through the problem therapeutically.

              Both help me.

              • Ilandar@aussie.zone · 5 months ago

                That's good. I'm glad to hear you're getting professional treatment, since your original statement indicated the opposite:

                I know it is not helping me the same as a professional would.

    • oldfart@lemm.ee · 5 months ago

      I've been testing Claude over the last month. It's good for 90% of tasks, but for the remaining 10% I couldn't convince it to give a proper answer, so I used ChatGPT instead. Technical questions and coding are what I use LLMs for.