…without informed consent.

  • brsrklf@jlai.lu · 8 months ago

    Every now and then I see a guy barging into a topic bringing nothing other than “I asked [some AI service] and here’s what it said”, followed by 3 paragraphs of AI-gened gibberish. And then, when it’s not well received, they just don’t seem to understand why.

    It’s baffling to me. Anyone can ask an AI. A lot of people specifically don’t, because they don’t want to battle with its output for an hour trying to sort out where it got its information, whether it represented that information well, or whether it just hallucinated half of it.

    And those guys come posting a wall of text they may or may not have read themselves, and then they have the gall to go “What’s the problem, is any of that wrong?”… Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up, and have only brought automated noise to the conversation.

    • floofloof@lemmy.ca · 6 days ago

      That’s my boss. He isn’t a programmer, and I have done it professionally for 25 years, but he has taken to sending me not only feature requests but also pages of AI-generated code, and now he expects me to do the work instantly since I can just paste in what he sent. He thinks he’s being helpful. I’ve asked him to leave the implementation to my team, but he can’t help himself. I don’t know how you explain it to someone so bad at reading the room.

    • tias@discuss.tchncs.de · edited · 8 months ago

      Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up

      That’s not true. For starters, you can evaluate it on its own merits to see if it makes logical sense: the AI can help solve a maths equation for you, and you can see that it checks out without needing anything else to back it up.

      Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It’s just a smarter search engine with no ads and better focus on the question asked.

      • SparroHawc@lemmy.zip · 8 months ago

        with no ads

        For now.

        Eventually it becomes a search engine that replaces the ads on the source material with its own ads, choking off the source’s funding and taking it for itself.

      • Barrymore@sh.itjust.works · 8 months ago

        And what happens when the next version of Grok (“mechahitler”), or whatever AI hosted by a large corporation that only has the interest of capital gains, comes out with unannounced injected prompt poisoning and stops producing the quality output you’ve been conditioned to expect?

        These AIs are good if you have a general grasp of whatever you are trying to find, because you can easily pick out what you know to be true from what is obviously an AI hallucination (a ridiculous mess of computer-generated text no smarter than your phone keyboard’s word suggestions).

        Trying to soak up all the information generated by AI on a topic without prior knowledge may easily leave you understanding nothing more than you did before, and may give you unwarranted confidence in what is essentially misinformation. And just because an AI pulls up references, unless you do your due diligence and read those references for accuracy or authority on the subject, the AI may be hallucinating where it got the wrong information it’s giving you.

        • Passerby6497@lemmy.world · 8 months ago

          And just because an AI pulls up references, unless you do your due diligence and read those references for accuracy or authority on the subject, the AI may be hallucinating where it got the wrong information it’s giving you.

          This. I’ve had an AI cite vendor documentation that said the opposite of what the AI claimed the doc said.

      • setVeryLoud(true);@lemmy.ca · edited · 8 months ago

        Ok, I didn’t need you to act as a middleman and tell me what the LLM just hallucinated; I can do that myself.

        The point is that raw AI output provides absolutely no value to a conversation, and is thus noisy and rude.

        When we ask questions on a public forum, we’re looking to talk to people about their own experience and research, through the lens of their own being and expertise. We’re all capable of prompting an AI agent. If we wanted AI answers, we’d prompt an AI agent.