• rottingleaf@lemmy.zip · +10/-35 · 6 months ago

    The whole problem with shadowbans is that they are not easy to prove (without cooperation from Meta). One can be shadowbanned in one area (by geolocation) but not in another, or for some users but not for others. The decisions can be made based on any kind of data, and frankly Meta has plenty of it to make shadowbans effective and yet hard to prove.

    Shadowbans should just be illegal as a thing, first; and second, some of the arguments against him in the article are negligible.

    I just don’t get why you people hate him more than the two main candidates. It seems that, to you, being a murderer is a lesser problem than being a nutcase.

    • Flying Squid@lemmy.world · +11/-2 · 6 months ago

      Shadowbans should just be illegal as a thing

      I bet you scream about your First Amendment rights being violated whenever a moderator deletes your posts.

          • rottingleaf@lemmy.zip · +2/-1 · 6 months ago

            Oh, if this is not a figure of speech, then how much was your bet? I accept BTC (being in a sanctioned country and all that).

            Mine, of course, was that this isn’t worth a penny to me; I already know your measure.

            • Flying Squid@lemmy.world · +1/-1 · 6 months ago

              If you would bet nothing, I guess you don’t actually believe your own words.

              Thanks for admitting what you said was false. I think we can move on now.

              • rottingleaf@lemmy.zip · +1 · 6 months ago

                If you would bet nothing, I guess you don’t actually believe your own words.

                There are a few factors; one of them is your value as a person.

                Thanks for admitting what you said was false.

                Why would you say that if that’s false?

      • Buttons@programming.dev · +1 · edited · 6 months ago

        A problem is that social media websites are treated simultaneously as open platforms with Section 230 protections and as publishers with their own free speech rights. Those positions are contradictory, so which is it?

        Perhaps @rottingleaf was speaking morally rather than legally. For example, I might say “I believe everyone in America should have access to healthcare”; if you respond “no, there is no right to healthcare,” you would be right, but you would have missed my point. I was expressing a moral aspiration.

        I think shadowbans are a bad mix: censorship that is hard to detect. Morally, I believe they should be illegal. If a company wants to ban someone, they can be up front about it with a regular ban and make it clear what they are doing. To implement this legally, we could alter Section 230 protections so that they don’t apply to companies that perform shadowbans.

    • teft@lemmy.world · +10/-2 · 6 months ago

      Shadowbans help prevent bot activity by preventing a bot from knowing if what they posted was actually posted. Similar to vote obfuscation. It wastes bots’ time, so it’s a good thing.
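
      Just to illustrate the idea (a toy sketch, not how any real site implements it): with vote obfuscation the stored score stays exact, but the number shown to readers gets a little random noise, so a bot scraping the page can’t tell from the displayed score whether its own activity registered.

      ```python
      import random

      def obfuscated_score(true_score: int, jitter: int = 2) -> int:
          """Show the real score plus a little random noise.

          The stored score stays exact; only the displayed value is fuzzed,
          so scraping the page tells a bot very little about whether its
          votes or posts actually registered.
          """
          return true_score + random.randint(-jitter, jitter)

      # A post with a real score of 10 might display anywhere from 8 to 12.
      print(obfuscated_score(10))
      ```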

      • kava@lemmy.world · +2 · 6 months ago

        I’ve seen Reddit accounts that regularly posted comments for months, all at +1 votes, and never received any response or reply at all because nobody had ever seen their comments. They got hit with some automod shadowban and were yelling into the void, likely wondering why nobody ever felt they deserved to be heard.

        I find this unsettling and unethical. I think people have a right to be heard, and deceiving people like this feels wrong.

        There are other methods to deal with spam that aren’t potentially harmful.

        There’s also an entirely different discussion about shadowbans being a way to silence specific forms of speech. Today it may be crazies or hateful speech, but it can easily be any subversive speech should the administration change.

        I agree with the other commenter: it probably shouldn’t be allowed.

        • teft@lemmy.world · +1 · edited · 6 months ago

          I’ve seen Reddit accounts that regularly posted comments for months, all at +1 votes, and never received any response or reply at all because nobody had ever seen their comments.

          Then how did you see them?

        • Flying Squid@lemmy.world · +1 · 6 months ago

          I think people have a right to be heard

          You are wrong. You have no right to a voice on a private platform.

          • Buttons@programming.dev · +1 · 6 months ago

            Maybe he was speaking morally rather than legally.

            For example, if I said “I believe people have a right to healthcare”, you might correctly respond “people do not have a legal right to healthcare” (in America at least). But you’d be missing the point, because I’m speaking morally, not legally.

            I believe, morally, that people have a right to be heard.

          • UnderpantsWeevil@lemmy.world · +0/-1 · 6 months ago

            This just means that privatizing public spaces becomes a method of censorship. Forcing competitors farther and farther away from your captured audience by enclosing and shutting down public media venues functions as a de facto media monopoly.

            Generally speaking, you don’t want a single individual with the administrative power to dictate everything anyone else sees or hears.

            • Flying Squid@lemmy.world · +0 · 6 months ago

              So if I own a cafe and I have an open mic night and some guy gets up yelling racial epithets and Nazi slogans, it’s their right to be heard in my cafe and I am just censoring them by kicking them out?

              As the one with the administrative power, should I put it up to a vote?

              • UnderpantsWeevil@lemmy.world · +0/-1 · 6 months ago

                So if I own a cafe

                More like if you own Ticketmaster and decide you’re going to freeze a particular artist out of every venue you contract with.

                And yes. Absolutely censorship.

                • Flying Squid@lemmy.world · +0 · 6 months ago

                  Changing the scenario doesn’t answer my question.

                  I came up with a scenario directly related to your previous post.

                  I can only imagine you are changing the scenario because you realize what I said makes what you said seem unreasonable.

      • UnderpantsWeevil@lemmy.world · +0/-1 · edited · 6 months ago

        Shadowbans help prevent bot activity by preventing a bot from knowing if what they posted was actually posted

        I have not seen anything to support the theory that shadowbans reduce the number of bots on a platform. If anything, a sophisticated account run by professional engagement farmers is going to know it’s been shadowbanned - and know how to mitigate the ban - more easily than an amateur publisher producing sincere content. The latter is far more likely to run afoul of a difficult-to-detect ban than the former.
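
        To make the asymmetry concrete (a hypothetical sketch, not any actual farm’s tooling): a professional operation can trivially automate checking whether its posts are visible to a logged-out session, which is something a sincere amateur would never think to do.

        ```python
        import requests

        def post_is_publicly_visible(post_url: str) -> bool:
            """Fetch a post with a fresh, unauthenticated request.

            If the author's own logged-in session can see the post but an
            anonymous request gets a 404 or an empty page, the account has
            very likely been shadowbanned. Engagement farms automate exactly
            this kind of check; casual users never run it.
            """
            response = requests.get(post_url, timeout=10)
            return response.status_code == 200

        # Hypothetical usage inside a bot farm's monitoring loop:
        # if not post_is_publicly_visible("https://example.com/post/123"):
        #     switch_to_fresh_account()  # placeholder for whatever mitigation they use
        ```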

        It wastes bots’ time

        A bot has far more time to waste than a human. So this technique is biased against humans, rather than bots.

        If you want to discourage bots from referencing their own metrics, put public metrics behind a captcha. That’s far more effective than undermining visibility in a way only a professional would notice.

      • rottingleaf@lemmy.zip · +1/-3 · 6 months ago

        It wastes the shadowbanned person’s time, so it’s not.

        Similar to vote obfuscation.

        Which sucks just as badly.

          • rottingleaf@lemmy.zip · +1/-3 · 6 months ago

            That’s a good solution for you, but some of us don’t generally bend over to assholes.

            And that’s not serious. You’ll get shadowbanned for whatever somebody with that ability wants to shadowban you for. You won’t know the reason or what to avoid.

            I got shadowbanned on Reddit a few times for basically repeating the 1988 resolution of the European Parliament on Artsakh (the one in support of reunification with Armenia).

            • teft@lemmy.world · +2 · 6 months ago

              Don’t hang out in spaces that don’t align with your beliefs.

              I was on Reddit for 15 years and never caught a ban, and I’m not exactly a demure person. If you go to an anti-vax thread (this is just an example, since I know nothing about Armenia) and post stuff about vaccination, even if it’s 100% factual, it’s not surprising when you catch a ban.

      • rottingleaf@lemmy.zip · +1/-2 · 6 months ago

        Because a good person would never need those. If you want to have shadowbans on your platform, you are not a good one.

        It’s a bit like animal protection: even though animals can’t have rights balanced by obligations, you still want to keep people who are cruel to animals somewhere away from you.

        • hedgehog@ttrpg.network · +2 · 6 months ago

          Because a good person would never need those. If you want to have shadowbans on your platform, you are not a good one.

          This basically reads as “shadow bans are bad and have no redeeming factors,” but you haven’t explained why you think that.

          If you’re a real user and you only have one account (or have multiple legitimate accounts) and you get shadow-banned, it’s a terrible experience. Shadow bans should never be used on “real” users even if they break the ToS, and IME, they generally aren’t. That’s because shadow bans solve a different problem.

          In content moderation, if a user posts something that’s unacceptable on your platform, generally speaking, you want to remove it as soon as possible. Depending on how bad the content they posted was, or how frequently they post unacceptable content, you will want to take additional measures. For example, if someone posts child pornography, you will most likely ban them and then (as required by law) report all details you have on them and their problematic posts to the authorities.

          Where this gets tricky, though, is with bots and multiple accounts.

          If someone is making multiple accounts for your site - whether by hand or with bots - and using them to post unacceptable content, how do you stop that?

          Your site has a lot of users, and bad actors aren’t limited to only having one account per real person. A single person - let’s call them a “Bot Overlord” - could run thousands of accounts - and it’s even easier for them to do this if those accounts can only be banned with manual intervention. You want to remove any content the Bot Overlord’s bots post and stop them from posting more as soon as you realize what they’re doing. Scaling up your human moderators isn’t reasonable, because the Bot Overlord can easily outscale you - you need an automated solution.

          Suppose you build an algorithm that detects bots with incredible accuracy - 0% false positives and an estimated 1% false negatives. Great! Then, you set your system up to automatically ban detected bots.

          A couple of days later, your algorithm’s accuracy has dropped - from 1% false negatives to 10%. Ten times as many bots are making it past your algorithm. A few days after that, it gets even worse - first 20%, then 30%, then 50%, and eventually 90% of bots are bypassing your detection algorithm.

          You can update your algorithm, but the same thing keeps happening. You’re stuck in an eternal game of cat and mouse - and you’re losing.

          What gives? Well, you made a huge mistake when you set the system up to ban bots immediately. In your system, as soon as a bot gets banned, the bot creator knows. Since you’re banning every bot you detect as soon as you detect them, this gives the bot creator real-time data. They can basically reverse engineer your unpublished algorithm and then update their bots so as to avoid detection.

          One solution to this is ban waves. Those work by detecting bots (or cheaters, in the context of online games) and then holding off on banning them until you can ban them all at once.
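
          A minimal sketch of the difference (the names and storage here are invented for illustration, not any platform’s real code): detections are queued silently and applied together on a schedule, so the operator only learns that detection happened somewhere inside the window, not which post or behaviour triggered it.

          ```python
          import time

          def ban(account_id: str) -> None:
              """Placeholder for the platform's real ban action."""
              print(f"banned {account_id}")

          class BanWaveModerator:
              """Queue bot detections and apply them in batches instead of instantly."""

              def __init__(self, wave_interval_seconds: int = 7 * 24 * 3600):
                  self.wave_interval = wave_interval_seconds
                  self.pending: set[str] = set()
                  self.last_wave = time.time()

              def flag_bot(self, account_id: str) -> None:
                  # The detection is recorded silently; the account keeps working,
                  # so the operator gets no immediate feedback about what tripped it.
                  self.pending.add(account_id)

              def maybe_run_wave(self) -> None:
                  # Once per interval, everything flagged since the last wave is
                  # banned at once, blurring the link between a specific behaviour
                  # and the ban.
                  if time.time() - self.last_wave >= self.wave_interval:
                      for account_id in self.pending:
                          ban(account_id)
                      self.pending.clear()
                      self.last_wave = time.time()
          ```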

          Great! Now the Bot Overlord will have much more trouble reverse-engineering your algorithm. They won’t know specifically when a bot was detected, just that it was detected within a certain window - between its creation and ban date.

          But there’s still a problem. You need to minimize the damage the Bot Overlord’s accounts can do between when you detect them and when you ban them.

          You could try shortening the time between ban waves. The problem with this approach is that the ban wave approach is more effective the longer that time period is. If you had an hourly ban wave, for example, the Bot Overlord could test a bunch of stuff out and get feedback every hour.

          Shadow bans are one natural solution to this problem. That way, as soon as you detect a bot, you can prevent it from causing more damage. The Bot Overlord can’t quickly detect that their account was shadow-banned, so their bots will keep functioning, giving you more information about the Bot Overlord’s system and allowing you to refine your algorithm to be even more effective in the future, rather than the other way around.
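
          A sketch of that mechanism (again with invented names, not any platform’s actual code): the flagged account’s posts are stored and served back to their author exactly as before, but silently dropped from everyone else’s feeds, so nothing looks different from the operator’s side.

          ```python
          def visible_posts(all_posts: list[dict], viewer_id: str, shadow_banned: set[str]) -> list[dict]:
              """Filter a feed the way a shadow ban works.

              A shadow-banned author still sees their own posts, so nothing looks
              wrong from their side, but the posts are never delivered to anyone else.
              """
              return [
                  post
                  for post in all_posts
                  if post["author"] not in shadow_banned or post["author"] == viewer_id
              ]

          # Example: bot_42 is shadow-banned. It still sees its own post; alice does not.
          posts = [{"author": "bot_42", "text": "spam"}, {"author": "alice", "text": "hi"}]
          print(visible_posts(posts, viewer_id="bot_42", shadow_banned={"bot_42"}))  # both posts
          print(visible_posts(posts, viewer_id="alice", shadow_banned={"bot_42"}))   # only alice's
          ```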

          I’m not aware of another way to effectively manage this issue. Do you have a counter-proposal?

          Out of curiosity, do you have any experience working in content moderation for a major social media company? If so, how did that company balance respecting user privacy with effective content moderation without shadow bans, accounting for the factors I talked about above?

    • UnderpantsWeevil@lemmy.world · +0/-1 · edited · 6 months ago

      Shadowbans should just be illegal as a thing

      I mean, regional coding makes sense from a language perspective. I don’t really want to see a bunch of foreign language recommendations on my feed, unless I’m explicitly searching for content in that language.

      But I do agree there’s a lack of transparency. And I further agree that The Algorithm creates a rarefied collection of “popular” content entirely by way of excluding so much else. The end result is a very generic stream of crap in the main feed and some truly freaky gamed content that’s entirely focused on click-baiting children. Incidentally, Jesus fucking Christ, whoever is responsible for promoting “unboxing” videos should be beaten to death with a flaming bag of napalm.

      None of this is socially desirable or good, but it all appears to be incredibly profitable. It’s a social media environment that’s converged on “Oops! All Ads!” and is steadily making its way to “Oops! All scams!” as the content gets worse and worse and worse.

      The shadowbanning and segregation of content is just a part of the equation that makes all this possible. But funneling people down into a handful of the most awful, libidinal content generators is really not good.