• lemmydividebyzero@reddthat.com · 6 days ago

    They released a version recently that fixed over 60 security vulnerabilities. All of them were high or critical.

    How many more are there to find? Thousands?

    Whoever uses this on a PC with anything useful on it is absolutely insane.

  • Echo Dot@feddit.uk · 7 days ago

    Yep that’s about the level of intelligence I would expect from Meta’s AI safety director.

    Doing the one thing that you’re never supposed to do: letting an AI loose on anything sensitive.

    For her next trick she’s going to run while holding scissors in one hand and a bottle of boiling acid in the other. What could go wrong?

  • hansolo@lemmy.today · 7 days ago

    I love so much that there are real, hilarious consequences for overzealous early adoption. You can’t make this shit up.

    • Echo Dot@feddit.uk · 7 days ago

      These people aren’t early adopters. These people are doing the equivalent of putting a lump of uranium in a bucket, and calling it a nuclear reactor.

      AI is our version of the demon core, and these idiots are dicking around with it with zero safety precautions.

      Meanwhile the rest of us are just smart enough to not go in that room.

    • sp3ctr4l@lemmy.dbzer0.com · 7 days ago (edited)

      Problem:

      This is the exact same kind of shit being used to automate, prioritize, and execute military kill-chains.

      Basically: find a target, tell others about the target, assess nearby firepower capable of neutralizing the target, determine the best course of action.

      … all we have to do is cross that last step over into ‘and then execute that course of action’.

      All the drone warfare in Ukraine?

      EM jamming and literally hacking the things or their CnC systems is an effective counter, in certain situations.

      So, how do you counter that?

      One solution is to keep an actual thin wire, like a TOW missile’s, connecting the operator and the drone. Gotta be a real long wire though.

      Other solution?

      Make the drone fully autonomous once it’s been locked in to a specific plan.

      Don’t worry though, I’m sure Pete Hegseth will navigate this tightrope about as well as a traffic-stop line-walk test.

  • RedstoneValley@sh.itjust.works · 7 days ago

    Can someone explain to me why these people are buying Mac Minis to run this in a “safe” environment, and then they go on and connect it to the internet and give the AI credentials to all their cloud accounts? This seems excessively moronic to me. Am I missing something?

    • sp3ctr4l@lemmy.dbzer0.com · 7 days ago

      No, you’re not missing anything.

      They’re morons.

      That’s our ruling elite: a bunch of fucking morons with egos and low self-awareness at best, literally child-raping and murdering pedophiles at worst.

  • BrianTheeBiscuiteer@lemmy.world · 7 days ago

    AI: I’m so sorry. You’re correct I violated protocol. I’ll make a note of this so it won’t happen again.

    Nurse: You gave my 5 year old patient 5000cc of morphine!

  • PointyFluff@lemmy.ml · 6 days ago

    First of all: BULLSHIT. Second: why would you give a bot write access to your filesystem?

    • rumba@lemmy.zip · 6 days ago

      The idea is you give it shell access. Say: use super-coder agent Bob Johnson to write a thing that does X using this [framework], separating files by best practice for the X, Y, and Z features; ask security agent OSO to look over the code and suggest changes; ask agent U.N.I.T to make unit tests; when the code looks good, run through the unit tests. If anything fails, keep fixing and iterating until everything passes. Create a README.MD for everything that was done, and create a TODO.MD for any future suggestions.

      I’m simplifying, but this actually works to an extent. Each of the agents keeps its context window small, the whole thing stays sane, and eventually it nets some project that works. The downside is you either end up giving it quite a bit of leeway to get the job done, or you sit over it watching and authorizing its every move.
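      The fix-and-iterate part of that workflow can be sketched as a plain loop. This is a minimal toy in Python, assuming nothing about any real agent framework: `write_code` and `run_tests` are hypothetical callables standing in for the coding agent and the unit-test agent.

      ```python
      def run_agent_loop(write_code, run_tests, max_rounds=5):
          """Ask the coding agent to (re)write the project, run the tests,
          and feed failures back until everything passes or we give up.
          The loop itself stays agent- and model-agnostic."""
          feedback = None
          for attempt in range(1, max_rounds + 1):
              code = write_code(feedback)        # agent proposes or fixes code
              passed, feedback = run_tests(code) # test agent reports failures
              if passed:
                  return code, attempt
          raise RuntimeError(f"tests still failing after {max_rounds} rounds")

      # Toy stand-ins: the "agent" gets it right on the third try.
      def fake_agent(feedback, _state={"n": 0}):
          _state["n"] += 1
          return "good" if _state["n"] >= 3 else "buggy"

      def fake_tests(code):
          return (code == "good", None if code == "good" else "assertion failed")
      ```

      The “leeway vs. babysitting” trade-off the comment mentions lives in who supplies those callables: a fully trusted shell runner, or a wrapper that asks you before every command.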

      Kinda strange to see a safety director do that…

      • BJW@lemmus.org · 5 days ago

        You should avoid the FuckAI community - they hate hearing that this application of the technology is wholly viable. To them, it’s only capable of creating crap, and to suggest otherwise is to be buried in a mountain of down votes. I was actually surprised you had a positive reaction, until I realized this is the Technology community.

  • renzhexiangjiao@piefed.blahaj.zone · 7 days ago

    you can like… enforce this rule programmatically? you don’t have to say “pretty please” to ai? basically, when the AI requests some potentially unwanted thing (like deleting an email), the request goes through a proxy that asks the human for confirmation. Also, you can have a safe word set up in the chat interface to act as a killswitch. I thought these were the ABCs of AI safety, but apparently they’re foreign concepts to this “safety director”
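    The proxy-plus-killswitch idea can be shown in a few lines of Python. This is only a sketch of the pattern, not any real product’s API: the tool names and the safe word are hypothetical, and the point is that the gate lives in ordinary code outside the model’s control, so the model cannot talk its way past it.

    ```python
    DESTRUCTIVE = {"delete_email", "delete_file"}  # hypothetical tool names
    KILLSWITCH = "full stop"                       # hypothetical safe word

    class SafetyProxy:
        """Sits between the model and its tools; every tool call passes
        through execute(), which cannot be bypassed in-band."""

        def __init__(self, confirm):
            self.confirm = confirm  # callback that asks the human yes/no
            self.halted = False

        def on_user_message(self, text):
            # The safe word halts everything, regardless of model output.
            if KILLSWITCH in text.lower():
                self.halted = True

        def execute(self, action, **args):
            if self.halted:
                return "halted"
            if action in DESTRUCTIVE and not self.confirm(action, args):
                return "denied"
            return "executed"
    ```

    With a `confirm` callback wired to a UI prompt, “delete my inbox” becomes a stack of yes/no dialogs instead of a speedrun.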

    • zqps@sh.itjust.works · 5 days ago (edited)

      The people who internalize this would never engage with a chatbot in this way in the first place. To them this is another intelligence they’re conversing with, where you get what you need by following social decorum, and enforcing your will amounts to abuse.

  • Phoenixz@lemmy.ca · 7 days ago

    How come some 25yo person is a director at Facebook?

    I mean, even if she is a child prodigy genius, which she obviously is not as she is face first, fist deep into AI, how the frack do you even have enough life experience to become a director of any large organization at that age unless you somehow cheated your way in?

    Then reading what she’s doing and how she resolved it tells me she doesn’t know shit about computers; she just knows how to type commands into AI systems.

    Is this the future? Am I going to end up being one of those long-bearded magicians who still knows the old technology, who can still save the day by using shell commands?

    • Rimu@piefed.social · 7 days ago

      They need to have some kind of AI safety team, as a fig leaf. But they don’t want it to slow them down, so they make sure it’s incompetent and ineffective.

      Just a theory.

    • boonhet@sopuli.xyz · 7 days ago

      Don’t American companies give a loooot of people director or executive director titles just because it sounds impressive? In roles where you gotta talk to corporate customers, at least.

  • panda_abyss@lemmy.ca · 7 days ago

    If I was the director of AI safety, and I used AI to own and delete my inbox, I sure as shit would never tell a soul.

    This is pure unbridled incompetence.

    • criss_cross@lemmy.world · 7 days ago

      If I was a director of AI safety I wouldn’t let openclaw within 100 feet of anything, let alone my work machine.

    • XLE@piefed.social · 7 days ago (edited)

      The whole “AI safety” field is this incompetent. These people will tell you AI is on the verge of creating a bioweapon, and then they run random code in a command line. Completely and totally unserious.

      • panda_abyss@lemmy.ca · 7 days ago

        I don’t know what the hell has happened, but some of these people are basically human jellyfish. Big tech is full of them now.

        No thought enters their mind, but they dodge the layoffs and the PIPs and get promoted like this.

        I don’t fucking get it.

  • XLE@piefed.social · 7 days ago

    If all the qualifications I need to be a security engineer for Facebook are

    • buy a Mac Mini
    • don’t configure remote access
    • install untrusted software
    • leave

    Then Facebook should hire me. I’ll buy so many Mac Minis on their dime. I will run so many crazy things.

  • borth@sh.itjust.works · 7 days ago

    “Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb”

    Nothing humbles you like that?

    • sp3ctr4l@lemmy.dbzer0.com · 7 days ago (edited)

      I’ve got a suggestion for her:

      Burn all your money, IDs, and property, and become homeless.

      That will humble you.

    • andyburke@fedia.io · 7 days ago

      Because we have let the clowns be in charge, and the stock market is full of monopolistic shitshows instead of actual competition.