Altman’s remarks in his tweet drew an overwhelmingly negative reaction.

“You’re welcome,” one user responded. “Nice to know that our reward is our jobs being taken away.”

Others called him a “f***ing psychopath” and “scum.”

“Nothing says ‘you’re being replaced’ quite like a heartfelt thank you from the guy doing the replacing,” one user wrote.

  • AnarchistArtificer@slrpnk.net

    When people complain about AI, it's often the scale of it they have a beef with: the fact that it's being shoved in their face everywhere they look, and mandated for use in their jobs by management even when it doesn't make them more productive. That ubiquity is also what drives the larger problems people are angry about, such as the excessive resource use of AI data centres.

    I agree that LLMs are here to stay; I understand enough about how the tech works to know there is tremendous potential in it. (I originally got into machine learning because I wanted to better understand AlphaFold, Google DeepMind's protein structure prediction model. I'm not sure I'd count it as an LLM, but under the hood it works on similar principles.) However, the problem with AI is less a purely technological one than a question of how the technology functions at a societal level.

    I believe that the current societal impact of the AI boom far exceeds the actual technological impact of LLMs. Whilst I get your point about the dotcom bubble analogy, I think that in that case the ratio of harms caused by the bubble to the genuine societal impact of the technology once it popped was much smaller. I grant that we have the benefit of hindsight with the internet, because that technology has had so much time to mature and become integrated into society, whereas we're still in the middle of the AI hype bubble. Even so, I don't believe that LLMs/AI are capable of being anywhere near as transformative to society as the internet. There may be niche fields that are overturned or even functionally destroyed, but there are few genuine use cases for LLMs. They'll still exist after the bubble has popped, and they'll have their uses, but I don't believe they'll be anywhere near as ubiquitous as they are now.

    Regardless of whether you agree with me on this, one thing we're in accord on is that the bubble is bullshit and harmful. Personally, what frustrates me is that I'm genuinely curious to see real progress on the legitimate use cases for LLMs; I'm open to the possibility that in 10-20 years' time the predictions in my previous paragraph will have been proven wrong. However, the bubble is just delaying that kind of meaningful integration into society, as well as hindering areas of research that could improve LLMs.

    (The bubble is also crowding out other areas of AI research based on different architectures and methods, ones which may get us much closer to the sci-fi sense of AI than LLMs ever could. Song-Chun Zhu is an example of a researcher who used to work in this field of AI but got burnt out by economic pressures that made it hard to do research not built on the one dominant method. He's one of many who are nowadays more interested in researching AI under a "small-data for big tasks" paradigm.)