

I think a lot of it is still done by hand, and of course there is also synthetic data distilled from larger models.


That’s an excellent point! On that topic, I recently listened to an interview with the founder of EleutherAI, who focuses on training small language models. She said they were able to train a 1B-parameter reasoning model on 50K Wikipedia articles and carefully curated RL traces. The thing can run on your smartphone and is at parity with much larger models trained on trillions of tokens.
She also scoffed at Common Crawl and said it contained mostly cookies and porn. Her attitude was basically “no wonder the big labs need to slurp trillions of tokens when the tokens are such low quality”. Very interesting approach; if you understand French, I highly recommend the interview.


To clarify: model collapse is a hypothetical phenomenon that has only been observed in toy models under extreme circumstances. This is not related in any way to what is happening at OpenAI.
OpenAI made a bunch of choices in their product design which basically boil down to “what if we used a cheaper, dumber model to reply to you once in a while”.
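To be concrete, here’s the kind of toy setting those observations come from (a minimal sketch of my own; the 1-D Gaussian setup and the sample sizes are arbitrary illustrative choices, not anyone’s actual experiment): fit a distribution to data, then keep refitting it to finite samples drawn from its own previous fit.

    import numpy as np

    # Toy "model collapse": fit a 1-D Gaussian, then repeatedly refit it
    # to finite samples drawn from the previous generation's own fit.
    # With no fresh real data, log(sigma) follows a random walk with a
    # slightly negative drift, so the fitted distribution slowly
    # collapses toward a point mass. Parameters are illustrative only.
    rng = np.random.default_rng(0)
    n = 100                                  # samples per generation
    data = rng.normal(0.0, 1.0, size=n)      # generation 0: real data
    mu, sigma = data.mean(), data.std()

    for gen in range(1, 1001):
        data = rng.normal(mu, sigma, size=n)  # train only on own outputs
        mu, sigma = data.mean(), data.std()
        if gen % 100 == 0:
            print(f"gen {gen:4d}: mu={mu:+.4f} sigma={sigma:.4f}")

Note how much the collapse depends on the extreme setup: every generation trains exclusively on a finite sample of the previous generation’s outputs, with no real data ever mixed back in. That’s part of why it doesn’t map onto what production labs actually do, where fresh human data keeps entering the mix.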
Honestly, Claude is not that sycophantic. It often tells me I’m flat-out wrong, and it generally challenges a lot of my decisions on projects. One thing I’ve also noticed with 4.6 is how often it will tell me “I don’t have the answer in my training data” and offer to do a web search rather than hallucinate an answer.