The eSafety Commissioner will abandon its legal case against Elon Musk's X, which sought to have graphic footage of a terrorist stabbing removed from the social media platform.
But in this specific case, they could have blurred out the content and put up a warning: “This post contains graphic content, do you wish to view it?” Or perhaps AI could be used to generate a description so people know what they’re getting into. There’s nothing wrong with that, and I don’t know why it isn’t good enough.
I don’t think warnings are good enough when the content is being delivered automatically into people’s feeds. People aren’t really thinking rationally when they’re doom-scrolling on social media. Not to mention that text descriptions aren’t always adequate preparation for extreme content, particularly with social media minimum age limits as low and as poorly enforced as they are.