• NottaLottaOcelot@lemmy.ca · ↑11 · 6 hours ago (edited)

    I’m flabbergasted that they admit that ChatGPT said it, rather than copy-pasting it and pretending it’s their own work and hoping you don’t read it closely.

    Even plagiarism has become lazy these days. At least do me the respect of concocting a lie.

    • HereIAm@lemmy.world · ↑3 · 3 hours ago

      I have a work colleague who does the copy-pasting. He asks how I can tell when he’s used AI to write his git commit messages, when there’s a sudden spike in capitalised words, correct grammar, emojis, and bullet points (and, on top of that, the message sometimes has nothing to do with what’s in the changes). It’s infuriating when he uses it in a discussion. I thought his lack of skill at making himself understood was bad, but arguing with what is essentially a chatbot is so much worse.
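
      (For fun, here’s that tell as a toy Python heuristic. It’s a sketch only: the thresholds, the emoji range, and the example message are all made up, not a real detector.)

      ```python
      # Toy heuristic for "suspiciously polished" commit messages.
      # Purely illustrative: thresholds and emoji range are made up.
      import re

      def looks_ai_generated(msg: str) -> bool:
          lines = [l for l in msg.splitlines() if l.strip()]
          bullets = sum(1 for l in lines if l.lstrip().startswith(("-", "*", "•")))
          emojis = len(re.findall(r"[\U0001F300-\U0001FAFF]", msg))
          words = msg.split()
          title_case = sum(1 for w in words if w[:1].isupper())
          # The tell: a sudden spike in bullets, emojis, and Capitalised Words.
          return bullets >= 3 or emojis >= 2 or (bool(words) and title_case / len(words) > 0.5)

      print(looks_ai_generated("🚀 Refactor: Improve Performance\n- Optimize loop\n- Add caching\n- Update docs"))  # True
      ```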

    • Eranziel@lemmy.world · ↑11 · 5 hours ago

      Some people seem to use it as an appeal to authority. This only works if you think ChatGPT is an authority on anything, though.

      • NottaLottaOcelot@lemmy.ca · ↑4 · 4 hours ago

        I suppose you’re right, which is odd to me, as the phrase “ChatGPT says…” automatically makes me question the validity of the information.

  • GaMEChld@lemmy.world · ↑2 · 4 hours ago

    A simulation is only as accurate as the person’s ability to rationalize. It should only be used by people who can already outthink it, because you need to be able to challenge and correct it.

    • PhoenixDog@lemmy.world · ↑2 ↓2 · 6 hours ago

      I’ve had a Google Home mini in my house for about 7 years now. I love it for quick answers when my partner and I are talking, especially sports. Asking a quick “Hey Google, how many goals does Alex Tuch have” and it just says it quickly and we continue our conversation without really stopping.

      But to actually get complex answers? Both my partner and I are highly intelligent people. We can find anything we need to. The last thing we’ll ever fucking do is even trust AI to get it right, let alone be the source of our information.

      Shit, even my Google home has fucked up sports stats because AI is dumb as shit.

  • bthest@lemmy.world · ↑14 · 9 hours ago (edited)

    When you let AI do your talking for you, you are voluntarily making yourself redundant.

    BTW, your chatbot is no Cyrano de Bergerac. It does not fool others nearly as much as you think it does. And the more you use it, the more “smell blind” you become to it, just like someone who has no idea they reek because their brain filtered it out long ago. Your use of AI becomes more and more obvious and cringe.

  • leriotdelac@lemmy.zip · ↑8 ↓1 · 11 hours ago

    It’s the same as “Google said this”. Before AI, Google couldn’t “say” anything; it’s a search engine. Same with GPT: it’s a tool for accessing information from different sources.

    Just having information out on the internet / in a search index / surfaced by an LLM doesn’t make it relevant or credible…

    And what baffles me: it’s pretty easy to set GPT up to cite sources and provide links, filtered through sources the user trusts. Why do none of my friends do it? Why is “GPT said” even an argument in a discussion?
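
    For example, here’s a minimal sketch of what I mean, using the official OpenAI Python client. The model name, the trusted-domain list, and the question are placeholders, and the prompt alone doesn’t guarantee real links; you still have to click through and check.

    ```python
    # Minimal sketch: ask for cited answers, steered toward trusted domains.
    # Placeholders: model name, TRUSTED list, and the question itself.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TRUSTED = ["who.int", "nature.com", "arxiv.org"]  # whatever the user trusts

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Cite a source URL for every factual claim, preferring these "
                    f"domains: {', '.join(TRUSTED)}. If you cannot find a source, "
                    "say so instead of guessing."
                ),
            },
            {"role": "user", "content": "How effective are flu vaccines?"},
        ],
    )
    print(resp.choices[0].message.content)
    ```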

      • Dozzi92@lemmy.world · ↑6 · 8 hours ago

      Except people just straight-up copy-paste GPT output. At the very least, people would say “I googled it and got this result and that result.” We’ve taken what was minimal work and made it minimaler.

  • neclimdul@lemmy.world · ↑43 · 18 hours ago

    A lot of the time I feel like it’s more than lazy; it’s rude.

    Either it’s something I’m supposed to know, and you think I’m dumber than ChatGPT or too dumb to look it up myself.

    Or it’s something you’re supposed to know, and you don’t think I’m worth the time it takes to give me your opinion.

    Either way, it feels like a fuck you.

    • brotato@slrpnk.net · ↑3 · 4 hours ago

      I 100% agree. To me it sort of feels like that old “Let me Google that for you” website. Like I wouldn’t have asked you something if I wanted you to just prompt ChatGPT. I want your informed opinion. But I guess informed opinions are hard to come by these days.

    • Zeddex@sh.itjust.works · ↑3 · 6 hours ago (edited)

      Yep. Someone on another team at my work does this constantly.

      Them: I’m having a problem with x

      Me: Ok, do this

      Them: But Copilot said…

      Then why are you even asking me? Stop bothering me and wasting my time.

  • TryingSomethingNew@sopuli.xyz · ↑115 ↓2 · 23 hours ago

    I’m getting that more and more: “I asked ChatGPT and it said…”. Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it; that’s why I asked you.

    Make sure they know they just lost input rights the next time. “No, I don’t ask Harry; he just quoted GPT last time, and I’d already asked it this time, so there was no reason to involve him.” Nothing is worse for a lead than people not wanting them to lead because they’ve abdicated the job to spicy autocorrect.

    • AliasAKA@lemmy.world · ↑15 · 18 hours ago

      I think this is the way. After enough rounds of “[coworker] wasn’t asked because they only respond with LLMs, so I just ask the LLMs directly; I’m not sure what [coworker]’s expertise is anymore, so I don’t consult them”, I suspect the coworker may in fact stop responding with LLMs.

        • AliasAKA@lemmy.world · ↑3 · 5 hours ago

          In my experience it is obvious. Calling people on it also makes them feel embarrassed usually. I put something like “I can just ask an LLM myself if I wanted this output. Please provide your own commentary.” If I were a manager and I had an employee just copy pasting that kind of output, I’d probably wonder if that employee actually contributes anything.

    • Zos_Kia@jlai.lu · ↑35 · 22 hours ago

      Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it, that’s why I asked you.

      To me it’s like sending the “let me google that for you” link to answer a question. It’s just bad form. I don’t want your whole reasoning trace, man; I just want to know what you understand of it, and maybe you’ll catch some detail I’m missing, or whatever. It’s simple: I won’t read LLM output. My colleagues know it, and I get shit for it, but no, I am not digesting this material for you. Give me a three-bullet-point version in your own words; the point is not just the data exchange, it’s also to make sure you are aware of the answer and that we have a common truth.

      Or, failing that, just give me the fucking prompt, and at least I’ll know whether you understood the question.

  • d00ery@lemmy.world · ↑42 ↓1 · 21 hours ago (edited)

    Someone literally copy and pasted a whole ChatGPT comment in an email reply to some questions I’d asked them. I was somewhat insulted.

    • NekoKoneko@lemmy.world · ↑30 · 18 hours ago

      You’re right to feel insulted. LLMs are verbose and unreliable often enough that you have to check any work that comes out (or be negligent).

      So what’s usually happening is someone is saving their time by spending yours. They saved the time normally needed to write a thoughtful reply by shifting the time and cognitive cost of reading and verifying to you, with AI as an excuse (often not without condescension, which is a type of “virtue signaling” driven by c-suite AI boosting). The slop output looks like “work product,” but is neither - it took no work and is a facade of a “product” because it’s unverified.

      They are being selfish, and it is objectively an insulting act.

    • Armok_the_bunny@lemmy.world · ↑5 ↓1 · 17 hours ago

      Put them on a list where any and every email they send you gets fed into GPT and replied to without you ever reading it. Then, to make sure they know, explain what’s happening in your signature.
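
      Something like this, tongue in cheek (the mail plumbing is stubbed out and entirely hypothetical):

      ```python
      # Tongue-in-cheek sketch of the auto-reply list. The mail plumbing
      # (fetch_unread / send_reply / llm_reply) is stubbed and hypothetical.
      from dataclasses import dataclass

      @dataclass
      class Mail:
          sender: str
          body: str

      LLM_PASTERS = {"coworker@example.com"}
      SIGNATURE = ("\n--\nThis reply was generated by GPT without human review, "
                   "because your last message was too.")

      def fetch_unread():  # stub: stands in for your real mail client
          return [Mail("coworker@example.com", "As ChatGPT explains...")]

      def llm_reply(body):  # stub: stands in for a real LLM call
          return "Thank you for your detailed message."

      def send_reply(mail, text):  # stub: would actually send mail
          print(f"To {mail.sender}: {text}")

      for mail in fetch_unread():
          if mail.sender in LLM_PASTERS:
              send_reply(mail, llm_reply(mail.body) + SIGNATURE)  # never read it
      ```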

    • Joe@discuss.tchncs.de · ↑3 ↓20 · 20 hours ago (edited)

      It’s only a problem if they claimed it as their own or it didn’t add value, AND it wasted your time as a result.

      Sometimes the experts just know how to search more effectively in their domain (which nowadays increasingly means using the right context/prompt with some AI, and was formerly known as Google-fu, before Google search turned to shit).

      To be genuinely helpful and polite, they’ll do a little legwork to respond personally and accurately… others might be super busy, or just dicks who don’t respect you or your time.

      Try not to be that dick yourself, though. If you are asking someone for help, show your work and provide relevant info so they don’t waste their time.

  • Encrypt-Keeper@lemmy.world · ↑9 · 20 hours ago

    I got this response from a 70+ Catholic Priest. Quite literally nothing in this world is sacred or real anymore.

    • ulterno@programming.dev · ↑7 · 20 hours ago

        Considering that, despite going over lvl 70, he decided on Catholic Priest instead of Saint, Warlock, or Archmage, you should already be questioning his decision-making ability.

  • RegularJoe@lemmy.world · ↑25 ↓3 · 24 hours ago

    ChatGPT isn’t on the team.

    Except that when someone pastes “ChatGPT thinks that {wall of AI-generated text}”,

    that person has put ChatGPT on the team. And if there was no human input, the competition is free to use it, and mock it, word for word. Use fear, uncertainty, and doubt to convince your team: once it’s published, anyone can use it, including your competition.

    The U.S. Copyright Office’s January 2025 report on AI and copyrightability reaffirms the longstanding principle that copyright protection is reserved for works of human authorship. Outputs created entirely by generative artificial intelligence (AI), with no human creative input, are not eligible for copyright protection.

    https://natlawreview.com/article/copyright-offices-latest-guidance-ai-and-copyrightability

  • Joe@discuss.tchncs.de · ↑22 ↓25 · 23 hours ago

    Sure… copy & paste is copy & paste.

    However, LLMs can help shape a scattered braindump of thoughts and opinions into a coherent argument or position, fact-check claims, and highlight faulty thinking.

    I am happy if someone uses AI first to come up with a coherent message, bug report, or question.

    I am annoyed if it’s ill-researched/understood nonsense, AI assisted or not.

    Within my company, I am contributing to an AI-tailored knowledge base, so that people (and AI) can efficiently learn just-in-time.

    • wpb@lemmy.world · ↑2 · 4 hours ago

      I didn’t read your comment, but deepseek said this:

      Well said. You’ve nailed the key distinction: AI as a thought amplifier vs. thought substitute. The value depends entirely on the user’s foundation of knowledge. Your approach—building a curated knowledge base so people (and AI) can learn just-in-time—is exactly right. It sets everyone up for success by grounding the AI in truth. Smart strategy.

      I haven’t read this either but I hope it helps.

      • Joe@discuss.tchncs.de · ↑1 · 35 minutes ago

        The funny thing is, you rarely notice those who actually use it effectively in formulating comms, or writing code, or solving real world problems. It’s the bad examples (as you demonstrate) that stick out and are highlighted for criticism.

        Meanwhile, power users are learning how to be more effective with AI (as it is clearly not a given), embracing opportunities as they come, and sometimes even reaping the rewards themselves.

    • Domi@lemmy.secnd.me · ↑7 · 11 hours ago

      I am happy if someone uses AI first to come up with a coherent message, bug report, or question.

      LLMs do not add anything of value to bug reports; they add unnecessary padding that forces me to filter out the marketing speak to get down to the issue. I would much rather have the raw brain dump.

      If somebody sends me their ChatGPT text, I now ask them to send me their prompt instead, so I don’t have to waste my time on a lengthy text that carries the same amount of information as the original.

      I am annoyed if it’s ill-researched/understood nonsense, AI assisted or not.

      Being coherent is rarely the problem in bug reports; the problem is the user not properly spelling out what the actual issue is.

      I have gotten bullet-point bug reports that read like they were written by an insane person, yet were more useful than a nicely written ChatGPT message with zero information in it.

      • Joe@discuss.tchncs.de · ↑1 · 8 hours ago

        Heh. I often use LLMs to strip out the unnecessary and help tighten my own points. I fully agree that most people are terrible at writing bug reports (or asking for meaningful help), and LLMs are often GIGO.

        I think the rule applies that if you cannot do it yourself, then you can’t expect an LLM to do it better, simply because you cannot judge the result. In this case, you are more likely to waste other people’s time.

        On the other side, it is possible to have agents give useful feedback on bug reports, request tickets, etc. and guide people (and their personal AI) to provide all the needed info and even automatically resolve issues. So long as the agent isn’t gatekeeping and a human is able to be pulled in easily. And honestly, if someone really wants to speak to a person, that is OK and shouldn’t require jumping through hoops.

      • frongt@lemmy.zip · ↑3 · 20 hours ago

        It’s a feature of text prediction, not a bug. They could fix it, but that would mean drastically increasing the size of the context of each piece of information (no idea what it’s called).

        • Truscape@lemmy.blahaj.zone · ↑6 ↓1 · 20 hours ago (edited)

          I believe it’s just complexity and token/compute usage.

          You end up chasing diminishing returns as well (100% or even 95% accuracy is just not possible for certain areas of study, especially for niche topics).

          It’s also 100% unfixable as a premise for the technology. I can enjoy an upscaling algorithm for my retro games to look more detailed at the cost of an odd artifact, but I sure as shit am not taking that risk for information gathering and general study.

        • magnetosphere@fedia.io · ↑2 · 20 hours ago

          I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.

      • ulterno@programming.dev · ↑3 ↓3 · 19 hours ago

        That doesn’t seem like a solvable thingy.
        People tend to make stuff up, too; the difference is that with people, the bluff is revealed in non-verbal communication.

        • magnetosphere@fedia.io · ↑2 ↓1 · 19 hours ago
          19 hours ago

          Yeah, but we’ve known that about people since forever. Computers are expected to be reliable.

          If hallucinations aren’t a solvable problem, then either AI is impossible, or we’re going about it the wrong way.

          • ulterno@programming.dev · ↑2 ↓2 · 18 hours ago

            AI is pretty much possible; we are just thinking about it the wrong way.

            We are expecting AI to give us the three bests of both worlds:

            • High I/O ability: we have that from computers.
            • Determinism and correctness: computers have always had a high level of determinism, but never correctness, because a computer does not know what is correct.[1]
            • Intelligence and thought: intelligence is a perception. AI will always have a lower depth of thought than us as long as it is dependent upon us.

            So we only get one best from the other (human) world, and in exchange we take on one worst of the computer world: we lose determinism, because we rely on the model being a higher level of fuzzy.

            Of course, I don’t mean “determinism” in the exact and full sense. The LLM still runs on a computer, so for the same internal saved state and the same external input (including any randomising functions that might be used), the output will be the same. But you can’t get the kind of logical determinism that you expect from normal computer operations.
            A dumbed-down example to get my thoughts across: you can use any of a + b, ADD(A,B), or SUM(A:B) and still get the same result.
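
            Here is that contrast as a quick Python sketch (the seeds and numbers are arbitrary, just to make the point):

            ```python
            # Logically equivalent programs agree exactly, every time:
            import operator, random

            a, b = 3, 4
            assert a + b == operator.add(a, b) == sum([a, b])

            # A sampler is bit-reproducible only for the exact same seed/state...
            assert random.Random(42).random() == random.Random(42).random()

            # ...but change the input even trivially and the output changes, with
            # no guarantee of logical equivalence to what came before.
            print(random.Random(42).random() == random.Random(43).random())  # False
            ```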


            1. this boils down to the same thing that one person once said to some computer guy - ‘If I enter the wrong numbers, will I still get the correct answer?’ ↩︎

    • dustycups@aussie.zone · ↑14 ↓1 · 21 hours ago

      …fact check claims

      Risky use case. Besides, why bother when you have to fact-check the fact-checker?

      • Joe@discuss.tchncs.de · ↑5 ↓12 · 20 hours ago (edited)

        It is about respecting everyone’s time…

        For example, if an executive were to claim “We don’t have any solution to X in the company” in an email as justification for investing in a vendor, it might cost other people hours as they dig into it. However, if AI had fact-checked it first by searching code repos, wikis, and tickets, and found it wasn’t true, then maybe that email wouldn’t have been sent at all, or would have acknowledged the existing product and led to a crisper discussion.
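
        As a sketch of what I mean (all three search helpers are hypothetical stubs; a real version would call your repo, wiki, and ticket search APIs):

        ```python
        # Hypothetical sketch: sanity-check a "we don't have X" claim before it
        # goes out. All three search helpers are stubs standing in for real APIs.
        def search_repos(term: str) -> list[str]:
            return ["payments-service/README.md mentions X"]  # stub

        def search_wiki(term: str) -> list[str]:
            return []  # stub

        def search_tickets(term: str) -> list[str]:
            return ["JIRA-1234 (made up): shipped a solution to X in 2022"]  # stub

        def fact_check_claim(term: str) -> None:
            hits = search_repos(term) + search_wiki(term) + search_tickets(term)
            if hits:
                print(f"Hold that email: existing work on '{term}':")
                for h in hits:
                    print("  -", h)
            else:
                print(f"No internal trace of '{term}'; the claim may stand.")

        fact_check_claim("solution to X")
        ```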

        AI responses often only need a quick sniff test from a human (e.g. clicking the provided link to confirm)… whereas BS can derail your day.

        We should share our knowledge and intelligence with AIs and people alike, not our ignorance. Use the tools at our disposal to avoid wasting others’ valuable time, and encourage others to do the same.