• BlueMonday1984@awful.systems
    3 months ago

    Also, this is just an impromptu addendum to my extended ramble on the AI bubble crippling tech’s image, but I can easily see military involvement in AI building further public resentment/stigma against the industry.

    Any military use of AI is already gonna be seen in a warcrimey light thanks to Israel using it in their Gaza Geneva Checklist Speedrun. Add in the public being fully aware of your average LLM’s, shall we say, tenuous connection to reality, and you have a recipe for people immediately assuming the worst.

    • David Gerard@awful.systemsOPM
      3 months ago

      That was the current example we were thinking of, though we did look up war crimes law thinking on the subject. tl;dr: you risk war crimes if there isn’t a human in the loop. E.g., think of a minefield as the simplest possible stationary autonomous weapon system; the rest is that with computers.

      • BlueMonday1984@awful.systems
        3 months ago

        As a personal sidenote, part of me says the “Self-Aware AI Doomsday” criti-hype might come back to bite OpenAI in the arse if/when one of those DoD tests goes sideways.

        Plenty of time and money’s been spent building up this idea of spicy autocomplete suddenly turning on humanity and trying to kill us all. If and when one of those spectacular disasters you and Amy predicted does happen, I can easily see it leading to wild stories of ChatGPT going full Terminator or some shit like that.