- 1 Post
- 12 Comments
just_another_person@lemmy.world to Technology@lemmy.world • Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes (English)
376 · 2 days ago
Ars is owned by Condé Nast, which has multiple whistleblowers saying AI is being forced on them. I think that’s kind of relevant.
just_another_person@lemmy.world to Technology@lemmy.world • Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes (English)
83 · 1 day ago
Then maybe they shouldn’t be using these tools in the first place. Other Condé Nast employees have already been blowing the whistle about this, which is ironic because the company sued all the AI firms for stealing content.
Whether there’s a news article about it or not, these shitty tools are being shoved down everyone’s throats, from developers to authors.
just_another_person@lemmy.world to Technology@lemmy.world • Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes (English)
4410 · 2 days ago
The problem with your attitude toward this is that these companies are forcing “AI” down everyone’s throat. It’s a requirement now to churn out more bullshit than is humanly possible.
This person was simply fired because they didn’t catch the false information, not because they used the tools forced upon them.
just_another_person@lemmy.world to Technology@lemmy.world • OpenAI strikes a deal with the Defense Department to deploy its AI models (English)
1 · 5 days ago
WHAT IN THE ACTUAL FUCK IS HAPPENING: https://www.opb.org/article/2026/02/27/openais-sam-altman-weighs-in-on-pentagon-anthropic-dispute/
just_another_person@lemmy.world to Technology@lemmy.world • Meta’s star AI scientist Yann LeCun plans to leave for own startup (English)
0 · 4 months ago
It most certainly did not…because it can’t.
You find me a model that can take multiple disparate pieces of information and combine them into a new idea that wasn’t fed to it as a pre-selected pattern, and I’ll eat my hat. The very basis of how these models operate is in complete opposition to the idea that they can spontaneously have a new and novel idea. New…that’s what novel means.
I could pointlessly link you to papers, to blogs from researchers explaining this, or you could just ask one of these things yourself, but you’re not going to listen, which is on you for intentionally deciding to remain ignorant of how they function.
Here’s Terrence Kim describing how they set it up using GRPO: https://www.terrencekim.net/2025/10/scaling-llms-for-next-generation-single.html
And then another researcher describing what actually took place: https://joshuaberkowitz.us/blog/news-1/googles-cell2sentence-c2s-scale-27b-ai-is-accelerating-cancer-therapy-discovery-1498
So you can obviously see…not novel ideation. They fed it a bunch of training data, and it correctly used pattern alignment across that data to say, “If it works this way elsewhere, it should work this way with this example.”
Sure, it’s not something humans had gotten to yet, but that’s the entire point of the tool. Good for the progress, certainly, but that’s its job. It didn’t come up with some new idea about anything, because it works from the data it’s given and the logic boundaries of the tasks it’s set to run. It’s not doing anything super special here, just doing it very efficiently.
just_another_person@lemmy.world to Technology@lemmy.world • Meta’s star AI scientist Yann LeCun plans to leave for own startup (English)
0 · 4 months ago
Nah, I’m just not going to write a novel on Lemmy, ma dude.
I’m not spouting anything that isn’t readily available information anyway. This is all well known, hence everybody calling out the bubble.
just_another_person@lemmy.world to Technology@lemmy.world • Meta’s star AI scientist Yann LeCun plans to leave for own startup (English)
0 · 4 months ago
🤦🤦🤦 No…it really isn’t:
> Teams at Yale are now exploring the mechanism uncovered here and testing additional AI-generated predictions in other immune contexts.
Not only is there no validation, they have only just begun looking at it.
Again: LLMs can’t make novel ideas. This is PR, and because you’re unfamiliar with how any of it works, you assume MAGIC.
Like every other bullshit PR release of its kind, this is simply a model being fed a ton of data and running through millions of iterative segments, testing outcomes of various combinations of things that would take humans years to do. It’s not that it’s intelligent or making “discoveries”; it’s just moving really fast.
You feed it 102 combinations of amino acids, and it’s eventually going to find new chains needed for protein folding. The things you’re missing there are:
- All the logic programmed by humans
- The data collected and sanitized by humans
- The task groups set by humans
- The output validated by humans

It’s a tool for moving fast through data, a.k.a. A REALLY FAST SORTING MECHANISM.
Nothing at any stage of development is novel output, or validated by any model, because…they can’t do that.
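That division of labor can be sketched as a toy brute-force search: the candidate space, the scoring logic, and the length cutoff are all supplied by a human, and the program just enumerates and sorts quickly. Everything here (the hypothetical `score_stability` function, the “good” residue set) is invented for illustration, not a real biophysical model.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def score_stability(peptide: str) -> int:
    """Human-written scoring logic (a made-up stand-in): count
    residues from an arbitrary 'good' set chosen by a human."""
    return sum(peptide.count(aa) for aa in "AILV")

def best_peptides(length: int, top_k: int = 3) -> list[str]:
    """Enumerate every candidate of the given length, then sort by score.
    No new ideas anywhere: exhaustive generation plus fast sorting."""
    candidates = ("".join(p) for p in product(AMINO_ACIDS, repeat=length))
    return sorted(candidates, key=score_stability, reverse=True)[:top_k]

print(best_peptides(2))  # → ['AA', 'AI', 'AL']
```

The “discovery” is whichever candidate the human-chosen scoring function ranks highest; the machine’s only contribution is speed.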
just_another_person@lemmy.world to Technology@lemmy.world • Meta’s star AI scientist Yann LeCun plans to leave for own startup (English)
0 · 4 months ago
I sure do. Knowledge, and being in the space for a decade.
Here’s a fun one: go ask your LLM why it can’t create novel ideas; it’ll tell you right away 🤣🤣🤣🤣
LLMs have ZERO intentional logic that allows them to even comprehend an idea, let alone craft a new one and create relationships between ideas.
I can already tell from your tone that you’re mostly driven by bullshit PR hype from people like Sam Altman, and are an “AI” fanboy, so I won’t waste my time arguing with you. You’re in love with human-made logic loops and datasets, bruh. There is not now, nor was there ever, a way for any of it to become the supreme being of ideas and knowledge you’ve been pitched. It’s super-fast sorting over static data. That’s it.
You’re drunk on Kool-Aid, kiddo.
just_another_person@lemmy.world to Technology@lemmy.world • Meta’s star AI scientist Yann LeCun plans to leave for own startup (English)
0 · 4 months ago
Lol 🤣 I’m SO EMBARRASSED. You’re totally right and understand these things better than I do after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.
I’ll never speak to this topic again, since I’ve clearly been bested by your knowledge from a Google blog.
just_another_person@lemmy.world to Technology@lemmy.world • Meta’s star AI scientist Yann LeCun plans to leave for own startup (English)
0 · 4 months ago
LLMs are just fast sorting and probability; they have no way to ever develop novel ideas or comprehension.
The system he’s talking about is more about using NNL, which builds new relationships to things that persist. It’s differential relationship learning and data-path building. It doesn’t exist yet, so if he has some ideas, it may be interesting. It’s also more likely to be the thing that kills all humans.
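The “fast sorting and probability” claim can be illustrated with a minimal sketch of next-token selection: turn raw scores into a probability distribution, sort, and emit the most likely token. The vocabulary and logit values are made up for illustration; a real model scores tens of thousands of tokens the same way.

```python
import math

# Toy vocabulary with invented logits for the context "The cat sat on the ..."
logits = {"mat": 3.1, "roof": 2.4, "moon": 0.2, "idea": -1.5}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Convert raw scores into a probability distribution (sums to 1)."""
    m = max(scores.values())  # subtract max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
# Probability, then sorting: rank the tokens and take the most likely.
ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0][0])  # → mat
```

Greedy decoding shown here is the simplest case; sampling strategies just draw from the same distribution instead of taking the top entry.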
just_another_person@lemmy.world to Technology@lemmy.world • Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech. (English)
1 · 7 months ago
Copy themselves to what? Are you aware of the basic requirements a fully loaded model needs just to be loaded, let alone run?
This is not how any of this works…
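A rough sense of those loading requirements comes from the standard back-of-envelope estimate: parameter count times bytes per parameter, before you even count KV cache or activations. The model sizes and precisions below are illustrative examples, not figures from the thread.

```python
def weights_gib(n_params: float, bytes_per_param: float) -> float:
    """Memory just to hold the weights, in GiB — ignores KV cache,
    activations, and runtime overhead, so real usage is higher."""
    return n_params * bytes_per_param / 2**30

# Illustrative sizes: a 7B-parameter model and a 70B one at fp16 (2 bytes).
print(round(weights_gib(7e9, 2), 1))   # → 13.0 (GiB)
print(round(weights_gib(70e9, 2), 1))  # → 130.4 (GiB)
```

Even aggressive 4-bit quantization only divides these numbers by four, which is the point: a multi-gigabyte weight file plus matching accelerator memory is not something that silently “copies itself” around.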

A fucking moron who runs around calling everything a bot when you disagree with whatever the topic is.
It’s the new CyberTruck of online insecurity.
Hope that’s “good” enough for you.