• Paddy66@lemmy.ml · 2 days ago

    I would assume that Anthropic’s stance is mostly performative. But while people are in a boycotting mood, they could solve the surveillance problem by quitting ALL big tech products. Here’s our site that lists all the ethical, non-spyware alternatives:

    https://www.rebeltechalliance.org/stopusingbigtech.html

    (Please share with your friends and family - we have zero marketing budget - thank you!)

  • scarabic@lemmy.world · 2 days ago

    Yeah instead of arguing over whether Anthropic is actually good, let’s unite around “fuck OpenAI.”

    • architect@thelemmy.club · 24 hours ago

      Actually it’s so they have plausible deniability if they “accidentally” kill a bunch of people who just so happen to be a group they openly despise.

      I think that’s way, way worse.

  • JigglypuffSeenFromAbove@lemmy.world · 2 days ago

    From OpenAI’s statement:

    We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:

    • No use of OpenAI technology for mass domestic surveillance.

    • No use of OpenAI technology to direct autonomous weapons systems.

    • No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).

    It specifically states their AI can’t/won’t be used for surveillance or autonomous weapons. I’m not saying I trust them, of course, but isn’t this the same thing Anthropic says they’re against? What’s the difference here, or what did I miss?

    • flamingleg@lemmy.ml · 2 days ago

      The “no domestic surveillance” line just mirrors some limitations (from their point of view) in the Patriot Act. They’re still willing to surveil people outside the USA, and in fact all they have to do is route domestic traffic through an international part of a network and they can legally spy on Americans, which is what already happens.

    • muusemuuse@sh.itjust.works · 2 days ago

      Anthropic put in clauses that would be legally enforceable against future administrations. OpenAI just says, “yeah, we totally trust you, bro.”

  • Fmstrat@lemmy.world · 2 days ago

    Since this article, Anthropic’s Claude AI app has claimed the #1 top spot over ChatGPT on both Android and iOS.

  • StopTech@lemmy.today · 2 days ago

    The “Cancel ChatGPT movement” doesn’t appear to be mentioned in the article, but other outlets say hashtags like #CancelChatGPT are trending on X.

  • lumettaria@sopuli.xyz · 2 days ago

    Now imagine my shock: I had swapped from ChatGPT to Claude the day before the news about Anthropic’s (now backpedalled) deal broke. Anyway, I deleted my ChatGPT and Gemini accounts and de-Googled my life while I was at it.

  • pnelego@lemmy.world · 2 days ago

    I’m wondering if this is a play for a future bailout. OpenAI knows they’re fucked, and instead of just going away like most companies do when they fail, they’re embedding themselves in the government to secure a bailout under the guise of being a critical defence vendor.

    Furthermore, I’m not convinced researchers and critical personnel will work for a company that does this. I think we’re about to see the biggest jumping of ship the industry has seen so far.

  • m3t00🌎🇺🇦@lemmy.world · 2 days ago

    hmm, not to sound smug, but you people need a hashtag to tell which way the wind blows. “going viral” is such “breaking news” clickbait. windows users need to get a clue, that part is true. i kicked my brother out of my house because he kept trying to show me how easy ai made art. is your OS political now? you can guess his other issues. Rambling now

  • raskal@sh.itjust.works · 3 days ago

    Canada recently had its 2nd-worst school shooting ever. The killer had many interactions with ChatGPT that warranted banning her account. A whistleblower has claimed that they wanted to inform Canada’s police force of these comments but were denied by ChatGPT’s management.

    They had a chance to stop the deaths of 8 people, most of whom were young children, but failed to do anything.

    FUCK CHATGPT AND THOSE BASTARDS THAT RUN IT

    • jagungal@aussie.zone · 3 days ago

      Why would you not contact police? I understand that this is a systemic failure and the blame does not lie with that employee, but if it were me, I’d rather be out of a job than have those deaths on my conscience for the rest of my life.

      • Takios@discuss.tchncs.de · 3 days ago

        It’s probabilities. If you report it, you’re 100% out of a job but have only maybe prevented something bad from happening. If you don’t report, you keep your job, but maybe something bad happens. Relying on a job for survival shifts the decision even further toward the course of action that keeps you your job.

      • Kissaki@feddit.org · 3 days ago (edited)

        In my eyes, some blame does lie with them. A systemic failure is a failure of many parts, and an employee taking notice and then following bad instructions is one of them.

        I don’t know what information they had, but if they were at the point of intending to share it, it seems like whistleblowing would have been the just and moral thing to do, even if it meant going around the immediate chain of command.

  • perishthethought@piefed.social · 3 days ago

    mainstream

    I’ll believe that when my sisters start saying this. Till then, it’s just us privacy fans screaming in a dark cave, enjoying the echo.

    • criscodisco@lemmy.world · 2 days ago

      I had a coworker tell me how cool Copilot was because he asked it a question and it found the answer in an email in his Outlook mailbox. I thought, “you needed AI to search your email?”

      We are probably cooked.

  • trackball_fetish@lemmy.wtf · 3 days ago

    Anyone stockpiling AI prompt vulnerabilities for when we’ll eventually need them to fight off some deathbots?

    • ILikeBoobies@lemmy.ca · 3 days ago

      A machine is more expensive and less expendable than a human. You don’t need to worry about killbots.

      • bearboiblake@pawb.social · 3 days ago

        Sorry, but this is a stupid take. Humans can refuse to fire on a crowd of innocent people. Killbots cannot. The unquestioning loyalty is worth more than money can buy.

        • ArmchairAce1944@discuss.online · 2 days ago (edited)

          The reason shooting people proved too difficult is that many Einsatzgruppen members broke down psychologically, and some became so murderous that they might not have been fit to re-enter civilian society. They used gas chambers because the method was sufficiently distanced from the actual act of killing (it just involved rounding people up into a room and having some guy with a canister dump the stuff into a vent; none of the actual killers even had to see the results of their actions, as the cleanup was done by another group) that they could do it without creating the same problem.

    • qualia@lemmy.world · 2 days ago (edited)

      Personally, mass surveillance for advertising seems marginally more benign than mass surveillance by one’s own government. Though admittedly both are bad.

      Edit: I can find alternatives for most of Google’s ecosystem but mapping out accurate bus routes is terrible via OSM/OsmAnd or Organic Maps. Anyone have any tips there?

    • Chaotic Entropy@feddit.uk · 3 days ago

      Sam Altman is just some fail-upward money guy; he’s eventually been removed from basically every prior position he has held.

      • scarabic@lemmy.world · 2 days ago

        The more I learn about this guy, the more amazed I am that his staffers stood up for him when he got fired. I guess they just hated the board more.