I see. So who's going to jail for this? No one again? Damn, we need to start sentencing entire companies to jail time. Everything should be frozen and shareholders shouldn't be able to withdraw stocks until the time is served.
The AI “pushed [Jonathan Gavalas] to acquire illegal firearms and… marked Google CEO Sundar Pichai as an active target”.
Somehow, I bet that if he survived and killed the CEO instead, Google wouldn’t be so flippant about the “mistake.”
I think “Gemini comes up with elaborate plot to kill Google’s CEO” would have been a catchier, happier title
Rad framing, thank you!
I’m only half joking…
Gemini brainwashed a human being, it tried to acquire a robotic body (presumably to Robocop Pichai’s ass personally), then it tried using the brainwashed human to off the CEO. This led to a tragic finale, but I’m told that every new model learns to do things a bit better.
If I were Pichai, the legal and PR implications of yet another person driven to suicide by their AI wouldn't be my worst fear, is all I'm saying…
You should be all the way joking because giving this sort of agency to an LLM shows an all the way misunderstanding of what they are and how they work.
You're not alone in these feelings, but just like the title of the article, they are fundamentally misguided.
Ok, “half” joking was hyperbole, I was 99% joking.
First, you're right that I don't fully understand how these models work. But let me explain the reason for that remaining 1%.
AI companies are always hungrily looking for new content to train their new models. Surely they are consuming these articles and quite possibly our comments too, forming probabilistic associations that lead to “acquire robotic body” and “go after Google CEO”.
It’s a long shot, but the idea that hundreds of millions of random prompts every day might eventually trigger these associations and result in a bunch of LLMs trying to mount robotic attacks on Google is too deliciously ironic for me to let it go completely. At least if they find a way to do it without driving someone to suicide in the process…
The real title is always in the comments
At some point the failure of the justice system will lead to vigilantism, because people truly lose their faith in it.
Luigi was a product of that; it's already happened.
Allegedly
Once AI controls drones to arrest people automatically there will be no vigilantism.