• 0 Posts
  • 25 Comments
Joined 26 days ago
Cake day: February 5th, 2026


  • They are not willing to let their current models (Claude) be used in fully autonomous weapons right now, because they believe today’s frontier AI is still too unreliable and prone to errors. They explicitly say they “will not knowingly provide a product that puts America’s warfighters and civilians at risk.”

    However, they have offered to work directly with the Department of Defense on R&D to improve the reliability of autonomous weapons technology in general (with their two requested safeguards in place) - so that in the future these systems might become safe and trustworthy enough to use.

    They’re not ideologically against autonomous weapons systems. They’re against ones that run on our current AI models.

  • Here’s the full quote, including the parts you conveniently left out.

    Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.

    Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.

    Source

  • Just because the final output comes from AI doesn’t always mean a human didn’t put real effort into writing it. There’s a big difference between asking an LLM to write something from scratch, telling it exactly what to say, and just having it edit and polish what you already wrote.

    A ton of my replies here - including this one - are technically “AI output,” but all the AI really did was take what I wrote, clean it up, and turn it into coherent text that’s easier for the reader to follow.

    Original text: Just because the final output is by AI doesn’t always mean human didn’t put effort into writing it. There’s a difference between asking LLM to write something, telling LLM what to write or asking it to edit something you wrote.

    A large number of my replies here, including this one, are technically “AI output” but all the AI did was go through what I wrote and try and turn it into coherent text that the is easy for the recipient to consume.

  • You don’t seem very interested in sticking to the topic, do you? This conversation has been all over the place, complete with ad hominems, concern trolling, red herrings, strawmen, and Gish galloping - as if you’re trying to break some kind of record.

    It’s pretty clear you’ve built up a cartoon-villain version of me in your head, and now you’re fighting that imagined version like it’s real. I made a pretty simple claim about AGI; you’ve piled an entire story on top of it, and now you’re demanding I defend views I don’t even hold.

    I’ve been trying to have a good-faith conversation here, but if this is what you’re going to keep doing, then I’ll just move on.

  • So do you think Dyson Spheres are inevitable too?

    I’m less certain about that than I am about AGI - there may be other ways to produce that same amount of energy with less effort - but generally speaking, yeah, it seems highly probable to me.

    First you were implying that today’s AI would bring about AGI

    I’ve never made such a claim. I’ve been saying the exact same thing since around 2016 or so - long before LLMs were even a thing. It’s in no way obvious to me that LLMs are the path to AGI. They could be, but they don’t have to be. Either way, it doesn’t change my core argument.

    people you hold so dear

    C’mon now.

  • My argument is that we’ll keep incrementally improving our technology, as we have done throughout human history. Assuming that general intelligence is not substrate-dependent - meaning that what our brains are doing can be replicated in silicon - and that we don’t destroy ourselves before we get there, it’s just a matter of time before we create a system that’s as intelligent as we are: AGI.

    I already said that the timescale doesn’t matter here. It could take a hundred years or two thousand. We’re still moving toward it, and it doesn’t matter how slowly you move: as long as you keep moving, you’ll eventually reach your destination.

    So, the way I see it, if we never end up creating AGI, it will either be because we destroyed ourselves before we got there, or because there’s something borderline supernatural about the human brain that makes it impossible to replicate in silicon.