• thebestaquaman@lemmy.world
    4 days ago

    Meh, they work well enough if you treat them as a rubber duck that responds. I’ve had an actual rubber duck on my desk for some years, but I’ve found LLMs taking over its role lately.

    I don’t use them to actually generate code. I use them as a place where I can write down my thoughts. When the LLM responds, it has likely “misunderstood” some aspect of my idea, and by reformulating my idea and explaining how it works, I help myself think through what I’m doing. Previously I would argue with the rubber duck, but I have to admit the LLM is actually slightly better for the same purpose.

      • thebestaquaman@lemmy.world
        4 days ago

        You’re absolutely right. I mostly run a pretty simple local model though, so it’s not like it’s very expensive either.
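
        For anyone curious, here’s roughly what I mean. A minimal sketch, assuming an Ollama-style local server; the endpoint, model name, and prompt wording are placeholders for whatever you run locally, not a recommendation:

        ```python
        # Minimal "rubber duck" loop against a local model.
        # Assumes an Ollama-style server on localhost:11434 (placeholder setup).
        import requests

        SYSTEM = (
            "You are a rubber duck. Restate what I tell you in your own words "
            "and point out anything that seems unclear or contradictory. "
            "Do not write code and do not propose solutions."
        )

        history = [{"role": "system", "content": SYSTEM}]

        while True:
            thought = input("you> ")
            if not thought:
                break
            history.append({"role": "user", "content": thought})
            resp = requests.post(
                "http://localhost:11434/api/chat",
                json={"model": "llama3.2", "messages": history, "stream": False},
                timeout=120,
            ).json()
            reply = resp["message"]["content"]
            history.append({"role": "assistant", "content": reply})
            print("duck>", reply)
        ```

        The system prompt is doing the real work here: it forbids solutions, so the model can only reflect your own explanation back at you.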

      • thebestaquaman@lemmy.world
        4 days ago

        I think you’ve misunderstood the purpose of a rubber duck: the point is that by formulating your problems and ideas, either out loud or in writing, you can better activate your own problem-solving skills. This is a very well-established method for reflecting on and solving problems when you’re stuck, and it’s far older than chatbots, because the point isn’t the response you get but the process of formulating your own thoughts in the first place.

        • prole@lemmy.blahaj.zone
          4 days ago

          Right, but a rubber duck isn’t a sycophantic chatbot that’s incapable of conceptualizing anything yet responds to you anyway.

          • thebestaquaman@lemmy.world
            4 days ago

            That is correct. However, an LLM and a rubber duck have in common that they are inanimate objects I can use as targets when formulating my thoughts and ideas. The LLM can also respond to things like “what part of that was unclear”, which helps keep my thoughts flowing. NOTE: the point of asking an LLM “what part of that was unclear” is NOT that it has a qualified answer, but that its reply, qualified or not, prompts me to explain part of the process more thoroughly.

            This is a very well-established process, whether you use an actual rubber duck, your dog, a blog post or personal memo (I do the last quite often), or a friend who isn’t in the field at all. The point is to have some kind of process that keeps your thoughts flowing and touches on topics you might not think are crucial, thus helping you find a solution. The toddler that answers every explanation with “why?” can be ideal for this, and an LLM can emulate it quite well in a workplace environment, as in the sketch below.
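
            If anyone wants the toddler version, it’s basically just a system prompt. A rough sketch, again assuming an Ollama-style local server; the prompt wording and model name are mine, nothing canonical:

            ```python
            # "Toddler mode": the model only ever asks short clarifying questions.
            # Same Ollama-style endpoint assumption as the sketch further up.
            import requests

            SYSTEM = (
                "Respond to every explanation with exactly one short clarifying "
                "question, like a curious child asking 'why?'. Never give "
                "answers, advice, or code."
            )

            def ask_why(explanation: str, model: str = "llama3.2") -> str:
                resp = requests.post(
                    "http://localhost:11434/api/chat",
                    json={
                        "model": model,
                        "messages": [
                            {"role": "system", "content": SYSTEM},
                            {"role": "user", "content": explanation},
                        ],
                        "stream": False,
                    },
                    timeout=120,
                ).json()
                return resp["message"]["content"]

            print(ask_why("I cache the results because recomputing them is slow."))
            ```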

                • prole@lemmy.blahaj.zone
                  4 days ago

                  No, I understand that you believe it’s the same thing as using a rubber duck as a sounding board. I don’t agree.

                  • thebestaquaman@lemmy.world
                    4 days ago

                    Saying that it can serve the same purpose does not mean I think the two are equivalent in every respect.

                    Just based on how you’ve responded so far, it seems like you’re wilfully misinterpreting how I actually use an LLM for this purpose, especially with responses referring to LLMs driving people to suicide and to offloading decision-making or the thought process itself to an LLM.