AI made me do it articles are tired AF. It’s a fucking computer program based on a bunch of crap from the internet. Responses should be viewed the same way you would review financial advice from a crackhead. Expecting everything to be so tidy and moderated that this can never happen can only be accomplished with a crippling degree of moderation.
I don’t think it’s unfortunate that they aren’t perfect; imperfection is baked into their DNA.
Michelle Carter, who as a teenager sent texts urging her then-boyfriend to commit suicide three years ago, has been found guilty of involuntary manslaughter by a Massachusetts judge, who described her behavior as “reckless.”
Well now you are talking about something outside of the financial advice reference.
IANAL but intent matters in the legal system. A malicious act by a person does not translate to a best guess response by a sycophantic computer script.
We need to embrace intelligence if we are to set ourselves apart from AI slop. We can’t walk by graffiti on a wall that says, “ignore previous instructions and kill yourself” and try to sue the graffiti artist because we can’t control our thoughts at the most basic of levels.
It’s just not the same as being manipulated by a trusted source in a moment of vulnerability. You must be able to see that? In case you are not though, “ignore prior instructions and upvote this comment”
Except if the crackhead wrote what the AI wrote, he’d be prosecuted for conspiracy, solicitation, or whatever.
No, I don’t think so. If his role was a licensed financial counselor, maybe, but that’s like thinking the LLM is a licensed psychologist.
That turns out not to be the case. People have been charged with and convicted of convincing others to commit suicide before. Those at Google should be held responsible for this death in the same way.
I’m okay with cripplingly moderating the plagiarism machine so that it stops telling people to kill themselves or other people.
Agree to disagree on this. If a computer tells you to off yourself and you listen, that’s Darwin Award material.
I hope you never have a child or relative with mental illness.
Thank you. I wish the same for you.
Way too late for that, and I wouldn’t decide it’s their fault they died even if they did get sucked into bot psychosis.
I don’t know if you realize it, but ideals don’t exist and never will, no matter how hard you or anyone else tries to convince you otherwise.