• Zedstrian@sopuli.xyz · 21 hours ago

    When you can’t trust that the votes, the comments, and the engagement you’re seeing are real, you’ve lost the foundation a community platform is built on.

    Reddit and Twitter are filled to the brim with spambots and remain successful. The lack of distinction between real and fake content serves to attract marketers and propagandists to such platforms, with most users remaining due to the network effect. With its venture capitalist funding, Digg would be just as willing to benefit from spam if it held market dominance, and thus only distributed Fediverse platforms like Lemmy or Mastodon are viable solutions.

    • Zephorah@discuss.online · 20 hours ago

      I realize younger people probably don’t feel this so viscerally, but shorts (not all, but many) are very much in tune with the old TV advertising format. It’s like an endless stream of Super Bowl ads, at best. Repetitive music. Designed for the short attention span. It makes you seek a product, in this case, more of itself.

      Now, look at the “upcycled” (/s) version of YouTube content. Reused video clips with a shiny, hyper-reactive talking head in front of them. Not human expression but caricatures thereof. Millions of views. Millions of viewers. For years. Not human faces but caricatures of human faces. This garbage won’t go away because it’s consistently being watched.

      Now, after all that priming, introduce AI into the two most popular social media formats, short form and long form.

      How does this fully primed crowd know the difference? How would they suddenly feel the need to leave? Not you or me, but the people who consume the ad clones and caricaturized crap daily? The same people who slide their phones out of their pockets to scroll shorts, on automatic, whenever they have 5 free minutes at work. How do they even spot the difference after years of consuming garbage?

      TL;DR: Less human interaction + more fake, caricaturized human video content = where we are now, with AI on social media.

      • fahfahfahfah@lemmy.billiam.net · 20 hours ago (edited)

        Other than ads, I think we’ve had a lot of “shorts”-style content that people gravitated to in “the old days”: things like AFV, Whose Line Is It Anyway?, and QI, basically anything that isn’t one continuous show but a bunch of smaller, encapsulated segments.

    • micka190@lemmy.world · 15 hours ago

      Reddit and Twitter are filled to the brim with spambots and remain successful.

      Just because that’s where all the users already are. You couldn’t start Reddit today; it’d immediately get spammed by AI bots and no one would stick around.

      Hell, Reddit’s API changes had a noticeable impact on most text-only subreddits I was a part of, and then the AI content made a lot of the remaining ones die off. No one’s rushing to Lemmy to fill those niches; people just aren’t participating in them online at all.

  • chunes@lemmy.world · 16 hours ago (edited)

    We have a weapon in the fight against the bots: return to the roots of the web. Simple, static pages. Decentralized. Interactivity possible for most only at small scale.

    • LedgeDrop@lemmy.zip · 15 hours ago

      I don’t think it’s that easy.

      Interactivity only possible for most at small scale.

      You’re overlooking the real OGs of the internet: Usenet, IRC, and bulletin board systems (BBSes).

      The internet has always needed an “easy access” place to communicate, ask questions, or joke around - with a broad audience from around the world.

      Of course, Gopher, FTP, and HTTP did exactly what you said: serve static content.

      But the internet has always needed a place for “dynamic” conversation, and it’s these places that are overrun with bots.

      • vacuumflower@lemmy.sdf.org · 10 hours ago

        F2F (friend-to-friend) networks might help against bots. “From around the world” becomes harder to achieve, though. It almost requires people traveling, making friends, and exchanging QR codes offline.

        Because a real living person standing before you is about the only way to know.

  • comador@lemmy.world · 20 hours ago (edited)

    TL;DR:

    AI bots and AI agents destroyed it.

    This is sincerely a real problem, and as a sysadmin for various websites, I loathe these bots daily.

    If Cisco, F5, etc. could invent a way to block these bots at the firewall and load-balancer level, they’d make billions.

    • Skavau@piefed.social · 21 hours ago

      Indeed, but having zero mod tools other than “delete post” 2 months in was genuinely laughable. Frankly, it should’ve launched with proper moderation: deleting posts, banning users, stickying posts, filters for post types, etc. This is standard stuff that users shouldn’t even have to haggle for.

      If they had given community moderators proper tools and put up walls, they could’ve mitigated a lot of this.

      • Otter@lemmy.ca · 21 hours ago

        From what I remember, they were going to “use AI” to handle moderation. It felt like a grift from the beginning.

        • Skavau@piefed.social · 21 hours ago

          A Reddit-styled site where AI handles community moderation decisions isn’t Reddit. The communities aren’t communities; they’re just hashtags.

      • 🌞 Alexander Daychilde 🌞@lemmy.world · 18 hours ago

        They certainly didn’t have enough coders for the project. It needed a hell of a lot more features more quickly.

        It also didn’t take off with users. Maybe because of the features, maybe just standard network effects, hard to say.

        I believe bots were part of the failure, but I don’t think they were the whole reason. I think they were just the part of the reason they chose to focus on.

        It was not a successful site.

        • Skavau@piefed.social · 18 hours ago (edited)

          I think you can reasonably blame the lack of features here, honestly. I’m not saying that with those features they would have challenged Reddit, but they’d have been much more active. Community moderators almost certainly lost interest when they realised they had no real control over their communities, and the longer the time elapsed with no tools, the more of them drifted away, leaving abandoned communities where AI, bots, and trolls moved in, compounding it even further.

          They also, on day 1 of their community launch, allowed day-old accounts to make communities. Even if each account could only moderate 2 communities, that wasn’t smart at all.

    • frongt@lemmy.zip · 21 hours ago

      You can do that! You just have to block known cloud service providers and known scraper ASNs, and, though this isn’t at the firewall level, add a captcha or other challenge like Cloudflare’s or Anubis.
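
      A minimal sketch of the ASN-blocking half of that idea, assuming you maintain a list of prefixes published for cloud-provider and scraper networks (the CIDR ranges below are illustrative RFC 5737 documentation ranges, not real provider allocations):

```python
import ipaddress

# Illustrative prefixes only -- a real deployment would load the
# published ranges for cloud providers / scraper ASNs. These are
# documentation ranges standing in for them.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # stand-in for a cloud provider range
    ipaddress.ip_network("198.51.100.0/24"),  # stand-in for a known scraper ASN
]

def is_blocked(client_ip: str) -> bool:
    """True if the client address falls inside any blocked prefix."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)
```

      Real deployments would refresh those prefix lists regularly and still layer a challenge on top, since bots on residential proxies fall outside any cloud ASN.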

    • dustycups@aussie.zone · 12 hours ago

      Yeah, when they say “move fast and break things,” it feels like there should be a third step.

  • danglybits27@sh.itjust.works · 20 hours ago

    Lol who could’ve seen it coming? From an article almost exactly 2 months ago on the launch:

    “They’re betting that AI can help to address some of the messiness and toxicity of today’s social media landscape. At the same time, social platforms will need a new set of tools to ensure they’re not taken over by AI bots posing as people.”

    https://techcrunch.com/2026/01/14/digg-launches-its-new-reddit-rival-to-the-public/

  • tal@lemmy.today · 20 hours ago (edited)

    We faced an unprecedented bot problem

    When the Digg beta launched, we immediately noticed posts from SEO spammers noting that Digg still carried meaningful Google link authority. Within hours, we got a taste of what we’d only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn’t appreciate the scale, sophistication, or speed at which they’d find us. We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can’t trust that the votes, the comments, and the engagement you’re seeing are real, you’ve lost the foundation a community platform is built on.

    This isn’t just a Digg problem. It’s an internet problem. But it hit us harder because trust is the product.

    It’s a social media problem generally. It’s going to be hard to offer pseudonymity and low-cost accounts relatively freely while countering bots that spam the system to manipulate it. The model worked well in an era before very human-like bots were easy to produce.

    It might be possible to build webs of trust around pseudonyms. You can make a new pseudonym, but its influence and visibility get tied to, for example, whom the users or curators you trust themselves trust, so the pseudonym carries less weight until it acquires reputation. I do not think a single global trust “score” will work, because you can always build bot webs of trust.
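
    The per-viewer web-of-trust idea could be sketched roughly like this (hypothetical names and a made-up damping constant; just an illustration of trust propagating outward from whoever you pick as your root, with no global score):

```python
# Per-viewer trust propagation over an endorsement graph. Each user
# publishes who they vouch for; scores are computed relative to a
# chosen root (no single global score), and each hop is damped so a
# vouch from a friend-of-a-friend counts less than a direct one.
DAMPING = 0.5  # made-up constant: how much weight survives each hop

def trust_scores(endorsements, root, max_hops=3):
    """endorsements: dict of user -> list of users they vouch for."""
    scores = {root: 1.0}
    frontier = [root]
    for _ in range(max_hops):
        next_frontier = []
        for user in frontier:
            for vouched in endorsements.get(user, []):
                candidate = scores[user] * DAMPING
                # Keep the best score reachable via any trusted path.
                if candidate > scores.get(vouched, 0.0):
                    scores[vouched] = candidate
                    next_frontier.append(vouched)
        frontier = next_frontier
    return scores
```

    A bot web of trust scores highly among its own members but stays near zero from any human root that never vouched into it, which is the point of keeping scores per-viewer rather than global.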

    Unfortunately, the tools to unmask pseudonyms are also getting better, and occasionally discarding pseudonyms or using more of them is one of the reasonable counters to unmasking, which doesn’t play well with relying more on reputation.

    • CarbonIceDragon@pawb.social · 17 hours ago

      I’m beginning to think that, as annoying for users and as difficult to build a userbase as it may be, the answer might ultimately have to be for future social sites to charge people in some way, be it to create accounts, as a subscription, or just for the ability to post/comment/vote. If it’s no longer feasible to keep bots out, and there’s financial gain in using them, then they’re going to get used. To deter them, running a bot has to be more expensive than what that bot can be expected to bring in through an advertising or manipulation campaign. On the bright side, I guess it might lead to a shift away from advertising everywhere. Either you charge people and therefore don’t need ads, or you don’t, and most of your ads are “seen” by bots, which advertisers probably don’t want to spend money to reach anyway.

      • Furbag@lemmy.world · 11 hours ago

        I had an idea pop into my head, and I don’t know if it’s feasible or not, but maybe the next nascent social media network can try it out, who knows.

        Private trackers for torrenting are notoriously hard to get invitations to. The only ways in are joining the community early, limited registration windows, or some sort of lottery system, but most people get in when a friend sends them one of their limited number of invitations, which they don’t do lightly: if you invite a leecher, it harms your reputation, and both you and the person you invited can get banned, even if you’re still maintaining a positive ratio.

        So what if we implemented a similar kind of system? Bots can’t flood a system if registration is closed, but regular people can still get invitations from friends and family. If you invite a bot, that bot account and the account that invited it get terminated simultaneously, taking out two bad actors for the price of one. Heck, if you really wanted to go scorched earth, every account registered via an invitation from the person who initially invited the bot could also get terminated. Know who you’re inviting and you won’t have any problems, but use your invitations recklessly and your entire social network gets kicked off the platform.
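
        As a rough sketch of the invite-tree bookkeeping that idea requires (all names hypothetical), each account records who invited it, and banning a bot cascades to the inviter, or, in the scorched-earth variant, to the inviter’s whole invite subtree:

```python
# Hypothetical sketch of an invite tree with cascading bans:
# every account records its inviter, and every inviter records
# the accounts it brought in.
class InviteTree:
    def __init__(self):
        self.inviter = {}   # account -> who invited it (None for founders)
        self.invitees = {}  # account -> accounts it invited

    def register(self, account, invited_by=None):
        self.inviter[account] = invited_by
        self.invitees.setdefault(account, [])
        if invited_by is not None:
            self.invitees.setdefault(invited_by, []).append(account)

    def ban_bot(self, bot, scorched_earth=False):
        """Return the set of accounts terminated when `bot` is caught."""
        banned = {bot}
        inviter = self.inviter.get(bot)
        if inviter is not None:
            banned.add(inviter)  # the bot's sponsor goes down with it
            if scorched_earth:
                # Walk the inviter's entire invite subtree.
                stack = list(self.invitees.get(inviter, []))
                while stack:
                    acct = stack.pop()
                    banned.add(acct)
                    stack.extend(self.invitees.get(acct, []))
        return banned
```

        For example, if alice invited bob, and bob invited both carol and a bot, the default ban takes out the bot plus bob, while the scorched-earth variant also takes out carol.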

        This would probably never get implemented in a serious social media platform, because those spaces rely on explosive growth to compete with more established networks, and limiting the number of users is counterproductive. I think at some point investors would just start telling management to open the floodgates and let the bots in already.

  • RandomDude@lemmy.ca · 20 hours ago

    Sad to see, but I never really used it. I don’t even know how they could combat this. The number of bot/AI accounts everywhere is unprecedented.

  • FistingEnthusiast@lemmy.world · 20 hours ago

    It was so shit anyway

    The amount of racism and bigotry that was tolerated was fucking wild

    It seemed like every disgusting person there wanted to turn it into some right-wing safe haven

    The bitching about reddit being “leftist” was hilarious

    Reddit is definitely not leftist at all, but they’re so far right (and so determined to be victims) that they have no clue what they’re talking about

    • pHr34kY@lemmy.world · 16 hours ago

      When it’s overrun by bots, the only valid complaint about the content is that it was generated by bots.

      Whether the bots are bigoted or racist doesn’t matter.

  • Brickfrog@lemmy.dbzer0.com · 20 hours ago

    I guess using AI to moderate AI and bots wasn’t working out.

    Maybe they’ll pivot to being a site similar to Moltbook, just bots moderating other bots that are conversing with each other. Sounds like they were almost there.