• Ilandar · 5 months ago

    I don’t understand how this censorship is any different from what we look down on authoritarian countries for.

    The scope and nature of the content being censored, I guess. But you’re right that there is the potential to set a dangerous precedent when taking this approach to online safety regulation. I think in general the saga has highlighted how problematic it is that social media has become so intertwined with society. There is a real risk of this stuff being viewed unintentionally, or because it was recommended through an algorithmic feed, and being served to a considerably larger number of people than if it were only available on LiveLeak or something back in the day. It’s so difficult to effectively regulate these social media companies now because they have become part of mainstream society and gained so much power as a result. We are essentially just relying on the goodwill of the people running them.

      • Ilandar · 5 months ago

        But in this specific case, what if they blurred out the content and put up a warning: “This post contains graphic content, do you wish to view it?” Or perhaps we could use AI to give a description so people know what they’re getting into. There’s nothing wrong with that, and I don’t know why that isn’t good enough.

        I don’t think warnings are good enough if the content is being delivered automatically into people’s feeds. People are not really thinking rationally when they are doom-scrolling on social media. Not to mention that text descriptions are not always adequate preparation for extreme content, particularly with social media minimum age limits as low, and as poorly enforced, as they are.