• queermunist she/her@lemmy.ml · ↑12 · 12 hours ago

      More a problem with the marketing, right? Imagine if guns were marketed as safe and helpful back scratchers, and then someone shoots themselves because they used the gun to scratch their back.

        • queermunist she/her@lemmy.ml · ↑3 · 11 hours ago · edited

          Courts generally agree that a reasonable person could believe claims made in official promotional material. That’s why it’s not legal to outright lie in marketing, and why companies go to so much trouble to word their statements so that they’re technically true. In this case, they’re just lying: they’re saying the AI is safe to use for these tasks, and it is not.

    • surewhynotlem@lemmy.world · ↑6 · 12 hours ago

      So you’re saying it’s a tool designed to be used by anyone, including idiots, and is dangerous in the hands of idiots. And we as a society should do better to make sure this potentially dangerous tool isn’t used by idiots.

      Yep, agree.

    • artyom@piefed.social · ↑2 ↓15 · 12 hours ago · edited

      Uhhh not really. Guns don’t just go off by themselves.

      ITT: nerds who have never held a gun in their life.

      • KiloGex@lemmy.world · ↑12 ↓2 · 12 hours ago

        I mean, they sometimes do without the proper safety protocols in place, but you still blame the user in the end.

          • thebestaquaman@lemmy.world · ↑16 · 12 hours ago

            I mean, there’s a good reason the first rules of firearm safety are to always treat a weapon as loaded, and to never direct the weapon at something you aren’t prepared to destroy. The key point being that you never know when some freak accident can happen with a loose pin, bad ammo, a broken spring, or just a person tripping and shaking the gun a bit too hard.

            A gun should never go off by itself. You still treat it as if it can, because in the real world freak accidents happen.

            • artyom@piefed.social · ↑4 ↓4 · 12 hours ago

              Sure. The point is it’s entirely possible to use a firearm safely. There is no safe use for LLMs because they “make decisions”, for lack of a better phrase, for themselves, without any user input.

              • etchinghillside@reddthat.com · ↑9 ↓1 · 11 hours ago

                That is not at all how LLMs work. It’s the software written around the LLM that aids it in constructing and running commands and “making decisions”. That same software can also prompt the user to confirm an action, or sandbox the actions in some way.
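To make the point concrete: the confirmation gate lives in ordinary code outside the model. A minimal sketch, assuming a hypothetical wrapper (`run_agent_command` and the allowlist are made up for illustration, not any real product’s API):

```python
# Hypothetical sketch: the wrapper program around the model, not the model
# itself, decides whether a proposed command needs user confirmation.
import subprocess

SAFE_COMMANDS = {"ls", "cat", "git"}  # commands the wrapper trusts outright

def run_agent_command(cmd: list[str], auto_approve: bool = False) -> bool:
    """Run a command proposed by the model, gated by a hard guardrail.

    Returns True if the command was executed, False if the user denied it.
    """
    if cmd[0] not in SAFE_COMMANDS and not auto_approve:
        answer = input(f"Agent wants to run {' '.join(cmd)!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            print("Denied.")
            return False
    subprocess.run(cmd, check=False)
    return True
```

The model never sees this code path; it only proposes `cmd`, and plain deterministic software decides whether that proposal ever reaches a shell.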

                  • suicidaleggroll@lemmy.world · ↑5 ↓1 · 11 hours ago · edited

                    Only if the user has configured it to bypass those authorizations.

                    With an agentic coding assistant, the LLM does not decide when it does and doesn’t prompt for authorization to proceed. The surrounding software is the one that makes that call, which is a normal program with hard guardrails in place. The only way to bypass the authorization prompts is to configure that software to bypass them. Many do allow that option, but of course you should only do so when operating in a sandbox.

                    The person in this article was a moron, that’s all there is to it. They ran the LLM on their live system, with no sandbox, went out of their way to remove all guardrails, and had no backup. The fallout is 100% on them.
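“Operating in a sandbox” doesn’t have to be elaborate, either. A toy sketch of the idea, assuming a hypothetical helper (`run_in_sandbox` is made up for illustration, not part of any real assistant):

```python
# Hypothetical sketch: confine agent actions to a throwaway copy of the
# project, so even a destructive command can't touch the live tree.
import shutil
import subprocess
import tempfile
from pathlib import Path

def run_in_sandbox(project: str, cmd: list[str]) -> Path:
    """Copy the project into a temp dir and run the agent's command there."""
    sandbox = Path(tempfile.mkdtemp(prefix="agent-sandbox-"))
    dest = sandbox / Path(project).name
    shutil.copytree(project, dest)
    subprocess.run(cmd, cwd=dest, check=False)
    return dest  # diff against the original before merging anything back
```

Pair that with a real backup and the worst case is losing a disposable copy, not your live system.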

          • 4am@lemmy.zip · ↑5 ↓2 · 10 hours ago

            “Guns are foolproof”

            You should have yours taken away.

            • artyom@piefed.social · ↑2 ↓3 · 10 hours ago

              They are not foolproof. They will absolutely cause problems in the hands of a fool. But they will not cause problems all on their lonesome. They’re inanimate objects. They cannot do anything at all without interaction from the user. If you can’t understand this, you should never be allowed to own one.

              • Bluescluestoothpaste@sh.itjust.works · ↑2 ↓1 · 9 hours ago · edited

                And neither can Anthropic’s Claude. Claude isn’t randomly deleting people’s websites; the kid gave Anthropic bad instructions, it didn’t spontaneously decide anything. This is like an idiot pointing a gun at something he didn’t want destroyed, sneezing hard enough to squeeze the trigger, and then trying to blame the gun manufacturer.

                • artyom@piefed.social · ↑1 ↓1 · 8 hours ago · edited

                  “the kid gave Anthropic bad instructions”

                  LOL, and you know this how?

                  “This is like an idiot pointing a gun at something he didn’t want destroyed”

                  No, this is more like pointing a gun downrange, and then the gun fires itself and the bullet decides to do a U-turn and shoots the user.

                  • wonderingwanderer@sopuli.xyz · ↑1 · 7 hours ago

                    Not really.

                    If you have the agent installed, it’s like having your gun assembled.

                    If you have your agent enabled, it’s like having your gun loaded.

                    If you give your agent permissions, it’s like taking your gun off safety.

                    If you don’t have your agent properly sandboxed, it’s like having bad muzzle control.

                    And if your agent is actively running, it’s like having your finger on the trigger.

                    This breaks every weapon safety rule. That’s how you get a negligent discharge.

                    Hence, it’s like scratching your back with a loaded weapon.