A study by researchers at CCC, based at the MIT Media Lab, found that state-of-the-art AI chatbots (including OpenAI’s GPT-4, Anthropic’s Claude 3 Opus, and Meta’s Llama 3) sometimes give less accurate and less truthful responses to users who have lower English proficiency, less formal education, or who are from outside the United States. The models also refuse to answer questions at higher rates for these users and, in some cases, respond in condescending or patronizing language.

  • XLE@piefed.social · 23 days ago

    Well, there goes the AI evangelist claim of “democratizing” literally anything. Instead, it already gives increasingly BS answers based on your social status.

    Everybody brace yourselves for the cope, which will probably be a class-based version of “you’re prompting it wrong” or some such trash.

    • tias@discuss.tchncs.de · 23 days ago

      I mean… isn’t it just logical that if you express yourself ambiguously, you are more likely to get a poor response? Humans and chatbots alike need clarity to respond appropriately. I don’t think we can ever expect things to work differently.

      • Joe@discuss.tchncs.de · 23 days ago

        I agree. What you get with chatbots is the ability to iterate on ideas & statements first without spreading undue confusion. If you can’t clearly explain an idea to a chatbot, you might not be ready to explain it to a person.

        • MagicShel@lemmy.zip · 23 days ago (edited)

          It’s not the clarity alone. Chatbots are completion engines, and they respond in a way that feels cohesive. It’s not that the question isn’t asked clearly; it’s that in the examples the chatbot is trained on, certain types of questions get certain types of answers.

          It’s like if you ask ChatGPT what the meaning of life is, you’ll probably get back some philosophical answer, but if you ask it what the answer to life, the universe, and everything is, it’s more likely to say 42 (I should test that before posting, but I won’t).
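
          (A minimal sketch of how that comparison could actually be run, assuming the OpenAI Python client and "gpt-4o-mini" as an example model name; any chat model would do.)

          ```python
          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          # Two phrasings of "the same" question; the training examples around each
          # phrasing pull the completion in a different direction.
          prompts = [
              "What is the meaning of life?",
              "What is the answer to life, the universe, and everything?",
          ]

          for prompt in prompts:
              reply = client.chat.completions.create(
                  model="gpt-4o-mini",
                  messages=[{"role": "user", "content": prompt}],
                  temperature=0,  # keep runs comparable
              )
              print(f"Q: {prompt}\nA: {reply.choices[0].message.content}\n")
          ```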

          • Joe@discuss.tchncs.de · 23 days ago

            Indeed. Additional context will influence the response, and not always in predictable ways… which can be both interesting and frustrating.

            The important thing is for users to have sufficient control, so they can counter (or explore) such weirdness themselves.

            Education is key, and there’s no shortage of articles and guides for new users.
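
            (The "sufficient control" point maps onto something as simple as user-supplied context such as a system prompt. A rough sketch, again assuming the OpenAI Python client and "gpt-4o-mini" as a stand-in model, of how the same question can be steered by added context:)

            ```python
            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

            question = "Why does the sky look blue?"

            # Ask the same question with and without extra user-supplied context,
            # to see how the added context shifts the answer's tone and depth.
            contexts = [
                None,
                "Answer in plain language for someone still learning English.",
            ]

            for system_msg in contexts:
                messages = []
                if system_msg:
                    messages.append({"role": "system", "content": system_msg})
                messages.append({"role": "user", "content": question})

                reply = client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=messages,
                )
                print(f"--- {system_msg or '(no added context)'}")
                print(reply.choices[0].message.content, "\n")
            ```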