This post provides arguments, asks questions, and documents some examples of Anthropic’s leadership being misleading and deceptive, holding contradictory positions that consistently shift in OpenAI’s direction, lobbying to kill and water down regulation so helpful that employees of all major AI companies speak out to support it, and violating the fundamental promise the company was founded on. It also shares a few previously unreported details on Anthropic leadership’s promises and efforts.
Anthropic has a strong internal culture that has broadly EA views and values, and the company has strong pressures to appear to follow these views and values as it wants to retain talent and the loyalty of staff, but it’s very unclear what they would do when it matters most. Their staff should demand answers.


Losers fighting other losers, you love to see it:
They mean “effective altruism” in a positive way, but their movement is better defined by two-faced ghouls. I guess Dario Amodei is the latest example, but Sam Bankman-Fried will always be #1 in my book.
Mikhail Samin doth protest too much.
The whole website is self-promo for a suspiciously rich, creepily public person. It’s two clicks to his sad dating profile, but only one click to discover he wants to disseminate a weird book by a man accused of sexual abuse in his cultlike compound.
What do the other Yudkowsky Rationalists think? Mostly they sniff their own farts, but one of them breaks out the racism too:
Dr Calipers also starts sharing DMs, which is also funny… Did I mention this is the top response on the website for people who pride themselves on being Rational with a capital R?