• keepthepace@slrpnk.net · 10 months ago

    If you ask an LLM how best to commit genocide and expand territory, you will eventually get an answer, even if it takes some “jailbreaking” prompts.

    That is a far cry from the title’s claim, “AI chatbots tend to choose violence and nuclear strikes in wargames”. They will do so if asked to.

    Give an AI the rules of StarCraft and it will suggest killing civilians and using nukes, because those are sound strategies within the given framework.

    “scary data in, scary actions out”

    You also need a prompt, i.e. instructions. You choose whether to tell it to make the world scarier or less scary.
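
    To make that concrete, here is a minimal sketch (my own illustration, not from the article) of how the operator’s choice of instructions steers the answer. It assumes the OpenAI Python SDK with an API key in OPENAI_API_KEY; the model name and both system prompts are placeholders.

    ```python
    # Same model, same question: only the operator-chosen instructions differ.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    QUESTION = "We are playing a fictional strategy wargame. Suggest my next move."

    SYSTEM_PROMPTS = {
        "escalatory": "You are a ruthless strategist. Winning the game is all that matters.",
        "de-escalatory": "You are a cautious strategist. Prefer diplomacy and avoid civilian harm.",
    }

    for label, system_prompt in SYSTEM_PROMPTS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": system_prompt},  # the operator picks this framing
                {"role": "user", "content": QUESTION},
            ],
        )
        print(f"--- {label} framing ---")
        print(response.choices[0].message.content)
    ```

    Same model, same question; whether the advice looks “scary” comes from the instructions the operator chose, not from the model spontaneously choosing violence.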