

They are not willing to let their current models (Claude) be used in fully autonomous weapons right now, because they believe today’s frontier AI is still too unreliable and prone to errors. They explicitly say they “will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
However, they have offered to work directly with the Department of Defense on R&D to improve the reliability of autonomous weapons technology in general (with the two safeguards they requested in place), so that in the future these systems might become safe and trustworthy enough to use.
They’re not ideologically against autonomous weapons systems. They’re against ones that run on today’s AI models.
Nobody’s paying them specifically to spread disinformation. They’ve been paid for driving engagement as content creators. The whole point of the article is that the platform is stopping payments to these people precisely because they’re spreading disinformation.
Platforms letting creators in on ad revenue generated by engagement with their content isn’t exactly a new thing. But if you then switch to spreading lies for profit, of course they should get kicked out of the program.