• Not_mikey@lemmy.dbzer0.com · 4 days ago

    Ignore the “containment” framing: they made a hacking bot, and it seems to actually be good at finding and exploiting vulnerabilities:

    The AI model “found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world,” the company wrote.

    Dismiss this as marketing drivel all you want, but hacking is just the sort of needle-in-a-haystack problem that AI is very good at. It requires broad knowledge, a lot of cycles trying and failing, and is easily verifiable, i.e. can you execute arbitrary scripts or not. Even if this release is BS, good hacking agents are bound to come eventually, and we should be discussing the implications of that instead of burying our heads in the sand, pretending AI is useless and that this is all hype.
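    The try/fail/verify loop being described can be sketched in a few lines. This is a toy illustration only: the payload list, the target callable, and the success check are all invented here, not anything from the article.

```python
# Hypothetical sketch of an automated try-and-verify loop.
# Every name and payload below is made up for illustration.

PAYLOADS = [
    "; id",     # classic shell command injection
    "$(id)",    # subshell expansion
    "`id`",     # backtick expansion
]

def is_exploited(output: str) -> bool:
    # The "easily verifiable" part: did we get command execution?
    return "uid=" in output

def try_payloads(run_target):
    """run_target stands in for the program under test; it takes a
    payload string and returns whatever output the target produced."""
    for payload in PAYLOADS:
        if is_exploited(run_target(payload)):
            return payload   # first payload that demonstrably worked
    return None              # nothing landed
```

    The point of the sketch is the shape, not the payloads: cheap attempts, an unambiguous success signal, repeat until something lands. That shape is exactly what tolerates an unreliable model.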

      • wonderingwanderer@sopuli.xyz · 3 days ago

        It’s an arms race like any other; cybersecurity always has been. You can’t stop developing security patches, because adversaries will keep developing new exploits.

        If AI enables your adversaries to develop exploits faster than human developers can keep up with, then yeah, AI will have to be part of the solution. That doesn’t mean vibe-coding security patches, but it could mean AI-driven pen-testing.

        Just like quantum computing. You can call it useless and impractical all you want, but some day someone is going to use it to break conventional encryption. So it would behoove you to develop quantum capabilities now, so that you have quantum-safe encryption in place before quantum-based exploits arise, as they inevitably will…

    • redsand@infosec.pub · 4 days ago

      AI exploit mining is one of the only things it’s good for. It doesn’t have to be accurate; it just has to keep trying variations of common flaws, and it has tons of training data on how systems are interconnected. We’re going to see a lot of RCEs and LPEs over the next few years, but people are also going to burn 100k in tokens to find exploits worth 3k, so the efficiency question will be interesting.
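      The efficiency point is easy to put in numbers. A quick sketch, assuming the commenter’s “100k” and “3k” are both dollar figures (they could just as well be rough guesses):

```python
# Back-of-the-envelope on the commenter's numbers, read as US dollars.
# Both figures are the commenter's rough guesses, not market data.
SPEND_USD = 100_000   # tokens burned to find one exploit
BOUNTY_USD = 3_000    # what that exploit is worth

payout_per_dollar = BOUNTY_USD / SPEND_USD     # 0.03: three cents back per dollar
breakeven_factor = SPEND_USD / BOUNTY_USD      # ~33.3: how far token costs must fall

print(payout_per_dollar)
print(round(breakeven_factor, 1))
```

      At those figures you recover three cents per dollar burned, so either token prices fall roughly 33x, the agents get much more sample-efficient, or the economics only work for attackers whose exploits are worth far more than a bug bounty.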

    • VeloRama@feddit.org · 4 days ago

      I agree. Selling an AI that can find vulnerabilities in software is probably the second-best thing after achieving AGI.

      “Nice software you’re selling there. Would be a shame if it was suddenly very unsafe to use, don’t you think?”

    • technocrit@lemmy.dbzer0.com · edited · 4 days ago

      I wrote an incredibly powerful “AI”. I call it the “Super Intelligent brute force password hacker”… It’s so smart that it knows almost every password. Humanity stands no chance.