US immigration enforcement used an AI-powered tool to scan social media posts “derogatory” to the US | “The government should not be using algorithms to scrutinize our social media posts”

  • ChonkyOwlbear@lemmy.world · 59↑ 6↓ · 1 year ago

    At the same time, whenever there is a mass shooting where the killer posted their intent online, people always say “why weren’t the authorities paying attention”.

    • kromem@lemmy.world · 20↑ 1↓ · edited · 1 year ago

      The problem is false positive and negative rates.

      We’re on track for some 600-700 mass shooters this year.

      The US has 300 million social media users.

      So in a given year, 0.00023% of social media users will turn out to be mass shooters.

      So even if we had an algorithm that was 99.99% accurate at identifying a potential mass shooter from social media, someone flagged by it would still have only around a 2% chance of actually being a mass shooter.
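
      A rough back-of-the-envelope sketch of that base-rate math. The shooter count, user count, and 99.99% figure are the approximate assumptions above (not official statistics), and 99.99% is treated as both the sensitivity and the specificity of the hypothetical model:

      ```python
      # Base-rate / false-positive-paradox arithmetic, using the rough
      # assumptions from this comment (not official figures).
      shooters = 700               # assumed mass shooters per year
      users = 300_000_000          # assumed US social media users
      accuracy = 0.9999            # assumed sensitivity *and* specificity

      true_positives = shooters * accuracy                    # shooters correctly flagged
      false_positives = (users - shooters) * (1 - accuracy)   # innocent users flagged

      precision = true_positives / (true_positives + false_positives)
      print(f"people flagged: {true_positives + false_positives:,.0f}")    # ~30,700
      print(f"chance a flagged person is a shooter: {precision:.1%}")      # ~2.3%
      ```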

      So what’s the cost of false positives? Do people flagged by such a system get harassed by law enforcement? If they are sovereign-citizen-type gun nuts or paranoid schizophrenics, does the extra law enforcement attention instigate shootings or standoffs that wouldn’t otherwise have occurred, and does that happen more often than the system successfully prevents mass shootings?

      And what’s the false negative rate? If such an algorithm correctly identifies only a small number of mass shooters while producing a high rate of false positives, and the majority of shooters slip through the cracks as false negatives, then overreliance on the algorithm could also harm progress toward alternative solutions (such as advancing legislation banning firearm possession for people with mental health issues).

      AI analysis of social media combined with other data sources becomes a more appropriate tool in a situation like “based on multiple other factors we have three suspects for who the active shooter is - did any of the three have a recent stressor in their life, such as a job loss?” In that case an 80% accurate model could be quite helpful.
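
      And for comparison, a sketch of that three-suspect case, treating the hypothetical 80% figure as both the model’s hit rate and its correct-rejection rate:

      ```python
      # Same Bayes arithmetic, but with a strong prior:
      # one shooter among three suspects instead of one among 300 million users.
      suspects = 3
      accuracy = 0.80                   # hypothetical model accuracy from above

      prior = 1 / suspects                               # 1-in-3 before looking at posts
      true_positive = prior * accuracy                   # the real shooter gets flagged
      false_positive = (1 - prior) * (1 - accuracy)      # an innocent suspect gets flagged

      posterior = true_positive / (true_positive + false_positive)
      print(f"chance a flagged suspect is the shooter: {posterior:.0%}")   # ~67%
      ```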

    • Dave. · 19↑ 1↓ · 1 year ago

      I kind of feel that trawling social media looking for the words of potential mass shooters isn’t going to be the thing that solves - or even slows down - the mass shooting problem that the USA has.

      • Corkyskog@sh.itjust.works · 6↑ 1↓ · edited · 1 year ago

        I think there is a huge difference between scanning publicly available text posted to social media in general and the immigration-focused scanning here. A lot of these shooters post very public, manifesto-like comments; friends and families have even called the police in some cases, and they took no action. It feels like the police actively ignore this stuff just to be able to shrug and protect 2A.

        A number of these could have easily been stopped.

        • Dave. · 4↑ · 1 year ago

          friends and families have even called the police in some cases, and they took no action. It feels like the police actively ignore this stuff

          I’m going to be a little glib here: Just fix this part and you won’t need to scan social media posts.

          Also, once this is in place you’ll find that the majority of perpetrators - the ones who plan things out - won’t post super incriminating things beforehand, and their generally disturbed posts will be lost in the sea of general discontent flagged by an algorithm trying to sift the wheat from the chaff.