Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • BigMuffin69@awful.systems
    15 hours ago

    Fellas, 2023 called. Dan (and Eric Schmidt wtf, Sinophobia this man down bad) has gifted us with a new paper and let me assure you, bombing the data centers is very much back on the table.

    "Superintelligence is destabilizing. If China were on the cusp of building it first, Russia or the US would not sit idly byā€”theyā€™d potentially threaten cyberattacks to deter its creation.

    @ericschmidt @alexandr_wang and I propose a new strategy for superintelligence. 🧵

    Some have called for a U.S. AI Manhattan Project to build superintelligence, but this would cause severe escalation. States like China would notice, and strongly deter, any destabilizing AI project that threatens their survival, just as how a nuclear program can provoke sabotage. This deterrence regime has similarities to nuclear mutual assured destruction (MAD). We call a regime where states are deterred from destabilizing AI projects Mutual Assured AI Malfunction (MAIM), which could provide strategic stability.

    Cold War policy involved deterrence, containment, nonproliferation of fissile material to rogue actors. Similarly, to address AI's problems (below), we propose a strategy of deterrence (MAIM), competitiveness, and nonproliferation of weaponizable AI capabilities to rogue actors.

    Competitiveness: China may invade Taiwan this decade. Taiwan produces the West's cutting-edge AI chips, making an invasion catastrophic for AI competitiveness. Securing AI chip supply chains and domestic manufacturing is critical.

    Nonproliferation: Superpowers have a shared interest to deny catastrophic AI capabilities to non-state actors; a rogue actor unleashing an engineered pandemic with AI is in no one's interest. States can limit rogue actor capabilities by tracking AI chips and preventing smuggling.

    "Doomers" think catastrophe is a foregone conclusion. "Ostriches" bury their heads in the sand and hope AI will sort itself out. In the nuclear age, neither fatalism nor denial made sense. Instead, "risk-conscious" actions affect whether we will have bad or good outcomes."

    Dan literally believed 2 years ago that we should have strict thresholds on model training over a certain size, lest a big LLM spawn superintelligence (thresholds we have since well passed; somehow we are not paperclip soup yet). If all it takes to make super-duper AI is a big data center, then how the hell can you have mutually-assured-destruction-style scenarios? You literally cannot tell what they are doing in a data center from the outside (maybe a building is using a lot of energy, but it's not like you can say "oh, they're about to run superintelligence.exe, sabotage the training run"). MAD "works" because satellites make it obvious the nukes are flying. If the deepseek team is building skynet in their attic for 200 bucks, this shit makes no sense.

    Ofc, this also assumes one side will have a technology advantage, which is the opposite of what we've seen. The code to make these models is a few hundred lines! There is no moat! Very dumb, do not show this to the orangutan and muskrat. Oh wait! Dan is Musky's personal AI safety employee, so I assume this will soon be the official policy of the US.

    link to bs: https://xcancel.com/DanHendrycks/status/1897308828284412226#m

    • raoul@lemmy.sdf.org
      2 hours ago

      Mutual Assured AI Malfunction (MAIM)

      The proper acronym should be M'AAM. And instead of a 'Roman salute' they can tip their fedora as a distinctive sign 🤷‍♂️

    • swlabr@awful.systems
      7 hours ago

      I guess now that USAID is being defunded and the government has turned off its anti-Russia/China propaganda machine, private industry is taking over the US hegemony psyop game. Efficient!!!

      /s /s /s I hate it all

      • aninjury2all@awful.systems
        2 hours ago

        If they're gonna fearmonger, can they at least be creative about it?!?! Everyone's just dusting off the mothballed plans to Quote-Unquote "confront" Chy-na after a quarter-century detour of fucking up the Middle East (more so than the US has done in the past).