25+ yr Java/JS dev
Linux novice - running Ubuntu (no Windows/Mac)

  • 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: October 14th, 2024

  • Who said I rely on it? I accept suggestions when they are good, even if the source of the suggestions is a slop generator. I accept what it is right about and reject what is wrong. And why not? It costs nothing.

    And, at 52, I write the way I write. I enjoy the process, I enjoy playing with language. I enjoy the juxtaposition of literary flourishes with a crude fuck you thrown in as punctuation and counterpoint to what might otherwise seem inaccessible or deliberately obtuse.

    But do you know what I’ve found? I can be a little overly self-indulgent. For example, you didn’t want all this, you just wanted to throw your glib little “lrn2write” and garner a few upvotes from the vehement AI haters and give yourself a self-righteous pat on the back.

    Sometimes I need another perspective to suggest restraint. As you can see, this, like 98% of my writing, is mine alone, else I’d’ve taken what would undoubtedly be good advice and held back on the more acerbic bits, and made sure I wasn’t posting some knee-jerk defensive self-indulgent 100% man-made slop.

    But here we are.

  • Seems to me libel would require AI to have credibility, which it does not.

    It’s a tool. Like most useful tools it can do harmful things. We know almost nothing about the provenance of this output. It could have been poisoned either accidentally or deliberately.

    But above all, the problem is ignorant people believing the output of AI is truth. It’s pretty good at some things, but the more esoteric the knowledge, the less reliable it is. It’s best to treat AI as a storyteller. Yeah there are a lot of facts in there but when they don’t serve the story they can be embellished. I don’t see the harm in just acknowledging that and moving on.


  • It’s AI. There’s nothing to delete but the erroneous response. There is no database of facts to edit. It doesn’t know fact from fiction, and the response is also very much skewed by the context of the query. I could easily get it to say the same about nearly any random name just by asking it about a bunch of family murders and then asking about a name it doesn’t recognize. It is more likely to assume that person is in the same category as the others, especially if one or more of the names have any association (real or fictional) with murder.