• Doomsider@lemmy.world · 4 days ago

    Naw, completely disagree. If you had a calculator you knew was defective, you would ban doctors and lawyers from using it.

    You also seem to think that an LLM is going to be inherently more accurate than an expert human. We can see with GrokAI how easy it is to manipulate an AI into saying racist white-nationalist garbage. So we are not just trusting the technology but also a layer of unpredictable corporate meddling.

    Why does the LLM recommend this drug but not that one? We can quickly see how a corporation could favor a certain medication due to behind-the-scenes deals, or even push a medication outright.

    You can’t trust a black box you are not allowed to look into. Trust in an LLM at this point is pure folly.

      • raldone01@lemmy.world · edited · 3 days ago

        Some of the SOTA models, like Gemini 3 Pro, are getting quite good at ballpark estimations. I have fed it multiple complex formulas from my studies along with some values. The end result is often quite close, and similar in accuracy to an estimation I would do myself. (It is usually more accurate than my own.)
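        To make "quite close" concrete: for ballpark work you usually only care that the answer lands within some factor of the reference value, not that it matches exactly. A minimal sketch of that check (the `within_ballpark` helper and the numbers are made up for illustration, not taken from any model output):

```python
def within_ballpark(estimate: float, reference: float, factor: float = 2.0) -> bool:
    """Return True if `estimate` is within `factor`x of `reference`
    in either direction (a loose, order-of-magnitude style check)."""
    if reference == 0:
        return estimate == 0
    ratio = estimate / reference
    return 1.0 / factor <= ratio <= factor

# Hypothetical values: an LLM's ballpark answer vs. a hand-worked result.
llm_estimate = 4.2e6
hand_result = 3.8e6
print(within_ballpark(llm_estimate, hand_result))  # True: within 2x
```

        With factor=2 this accepts anything between half and double the reference, which is roughly the tolerance a quick hand estimation aims for anyway.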

        Now, I don’t argue there is any consciousness or magic going on. But the generalization that is happening is quite something! I have trained AI models for various robot-control and computer-vision tasks. Compared to older machine-learning approaches, transformers are very impressive, computationally accessible, and easy to use (in my limited experience).