Just a regular Joe.

  • 0 Posts
  • 15 Comments
Joined 3 years ago
Cake day: July 7th, 2023

  • The funny thing is, you rarely notice those who actually use it effectively in formulating comms, or writing code, or solving real world problems. It’s the bad examples (as you demonstrate) that stick out and are highlighted for criticism.

    Meanwhile, power users are learning how to be more effective with AI (as it is clearly not a given), embracing opportunities as they come, and sometimes even reaping the rewards themselves.


  • Heh. I often use LLMs to strip out the unnecessary and help tighten my own points. I fully agree that most people are terrible at writing bug reports (or asking for meaningful help), and LLMs are often GIGO.

    I think the rule applies that if you cannot do it yourself, then you can’t expect an LLM to do it better, simply because you cannot judge the result. In this case, you are more likely to waste other people’s time.

    On the other hand, it is possible to have agents give useful feedback on bug reports, request tickets, etc., guide people (and their personal AI) to provide all the needed info, and even automatically resolve issues — as long as the agent isn’t gatekeeping and a human can be pulled in easily. And honestly, if someone really wants to speak to a person, that is OK and shouldn’t require jumping through hoops.


  • Joe@discuss.tchncs.de to Technology@lemmy.world · “ChatGPT said this” Is Lazy
    2 days ago

    It’s only a problem if they claimed it as their own or it didn’t add value, AND it wasted your time as a result.

    Sometimes the experts just know how to search more effectively in their domain (which nowadays increasingly means using the right context/prompt with some AI, and which was known as Google-fu before Google search turned to shit).

    To be genuinely helpful and polite, they’ll do a little legwork to respond personally and accurately. Others might be super busy, or just dicks who don’t respect you or your time.

    Try not to be that dick yourself, though. If you are asking someone for help, show your work and provide relevant info so they don’t waste their time.



  • Joe@discuss.tchncs.de to Technology@lemmy.world · “ChatGPT said this” Is Lazy
    2 days ago

    It is about respecting everyone’s time…

    For example, if an executive were to claim “We don’t have any solution to X in the company” in an email as justification for investing in a vendor, it might cost other people hours as they dig into it. However, if AI had fact-checked it first by searching code repos, wikis, and tickets, and found it wasn’t true, then maybe that email wouldn’t have been sent at all — or it would have acknowledged the existing product and led to a crisper discussion.

    AI responses often only need a quick sniff test by a human (e.g. clicking the provided link to confirm), whereas BS can derail your day.

    We should share our knowledge and intelligence with AIs and people alike, and not ignorance. Use the tools at our disposal to avoid wasting others’ valuable time, and encourage others to do the same.


  • Joe@discuss.tchncs.de to Technology@lemmy.world · “ChatGPT said this” Is Lazy
    2 days ago

    Sure… copy & paste is copy & paste.

    However, LLMs can help formulate a scattered braindump of thoughts and opinions into a coherent argument or position, fact-check claims, and highlight faulty thinking.

    I am happy if someone uses AI first to come up with a coherent message, bug report, or question.

    I am annoyed if it’s ill-researched or poorly understood nonsense, AI-assisted or not.

    Within my company, I am contributing to an AI-tailored knowledge base, so that people (and AI) can efficiently learn just-in-time.






  • It’ll be temporary — a gut reaction to put more experienced engineers in the loop. These folks will try to codify and then push better checks/guardrails into CI/CD and tooling to save themselves time. Given how new this all is, though, it’s almost the blind leading the blind.

    Amazon might also have some poor system boundaries, letting non-critical systems/code impact critical systems. Or they just let junior devs with their AI tools run wild on critical components without adequate guardrails… also likely. :-P