
  • I feel like some of the doomers are already setting things up to pivot when their most recent major prophecy (AI 2027) fails:

    From here:

    (My modal timeline has loss of control of Earth mostly happening in 2028, rather than late 2027, but nitpicking at that scale hardly matters.)

    It starts with some rationalist jargon to say the author agrees, but one year later…

    AI 2027 knows this. Their scenario is unrealistically smooth. If they added a couple weird, impactful events, it would be more realistic in its weirdness, but of course it would be simultaneously less realistic in that those particular events are unlikely to occur. This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control at the end of 2027, but the median narrative is probably around 2030 or 2031.

    Further walking the timeline back, adding qualifiers and exceptions that the authors of AI 2027 somehow didn't explain before. Also, the reason AI 2027 didn't have any mention of Trump blowing up the timeline doing insane shit is because Scott (and maybe some of the other authors, idk) likes glazing Trump.

    I expect the bottlenecks to pinch harder, and for 4x algorithmic progress to be an overestimate…

    No shit, that is what every software engineer blogging about LLMs (even the credulous ones) says, even allowing that LLMs have gotten better at raw code writing! Maybe this author is more in touch with reality than most lesswrongers…

    …but not by much.

    Nope, they still have insane expectations.

    Most of my disagreements are quibbles

    Then why did you bother writing this? Anyway, I feel like this author has set themselves up to claim credit when it's December 2027 and none of AI 2027's predictions are true. They'll exaggerate their "quibbles" into successful predictions of problems in the AI 2027 timeline, while overlooking the extent to which they agreed.

    I'll give this author +10 Bayes points for noticing Trump does unpredictable batshit stuff, and -100 for not realizing the real reason why Scott didn't include any callout of that in AI 2027.



  • Oh lol, yeah I forgot he originally used lesswrong as a pen name for HPMOR (he immediately claimed credit once it actually got popular).

    So the problem is that lesswrong and Eliezer were previously obscure enough that few academic or educated sources bothered debunking them, but still prolific enough to get lots of casual readers. Sneerclub makes fun of their shit as it comes up, but effort posting is tiresome, so our effort posts are scattered among more casual mockery. There is one big essay connecting the dots, written by serious academics (Timnit Gebru and Emile Torres): https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599 . They point out the people common to lesswrong, effective altruists, transhumanists, extropians, etc., and explain how the ideologies are related and how they originated.

    Also, a related irony: Timnit Gebru is interested in and has written serious academic papers about algorithmic bias and AI ethics. But for whatever reason (Because she's an actual academic? Because she wrote a paper accurately calling them out? Because of the racists among them who are actually in favor of algorithmic bias?) the "AI safety" lesswrong people hate her and are absolutely not interested in working with the AI ethics field of academia. In a world where they were saner and less independent-minded cranks, lesswrong and MIRI could have tried to get into the field of AI ethics and used that to sanewash and build reputation/respectability for themselves (and maybe even tested their ideas in a field with immediately demonstrable applications instead of wildly speculating about AI systems that aren't remotely close to existing). Instead, they only sort of obliquely imply AI safety is an extension of AI ethics whenever their ideas are discussed in mainstream news sources, but don't really maintain the facade if actually pressed on it (I'm not sure how much of it is mainstream reporters trying to sanewash them versus deliberate deception on their part).

    For a serious but much gentler rebuttal of Effective Altruism, there is this blog: https://reflectivealtruism.com/ . Note that this blog was written by an Effective Altruist trying to persuade other EAs of the problem, so they often extend too much credit to EA and lesswrong in an effort to get their points across.

    …and I realized you may not have context on the EAs… they are a movement spun off of academic thinking about how to do charity most effectively, and lesswrong was a major early contributor of thinking and members to their movement (they also currently get members from more mainstream recruiting, so it occasionally causes clashes when more mainstream people look around and notice the AI doom-hype and the pseudoscientific racism). So like half of EA's work is how to do charity effectively through mosquito nets for countries with malaria problems, or paying for nutrition supplements for malnourished children, or paying for anti-parasitic drugs to stop… and half their work is funding stuff like "AI safety" research or eugenics think tanks. Oh, and EA's utilitarian "earn to give" concept was a major inspiration for Sam Bankman-Fried trying to make a bunch of money through FTX, so that's another dot connected! (And SBF got a reputation boost from his association with them, and in general there is the issue of billionaire philanthropists laundering their reputations and buying influence through philanthropy, so add that to the pile of problems with EA.)

    Edit: I realized you were actually asking for books about real rationality, not resources deconstructing rationalists… so "Thinking, Fast and Slow" is the book on cognitive biases that Eliezer cribs from. Douglas Hofstadter has a lot of interesting books on philosophical thinking in computer science terms: "Gödel, Escher, Bach" and "I Am a Strange Loop". In some ways GEB is dated, but I think that adds context that makes it better (in that you can immediately see how the book is flawed, so you don't think computer science can replace all other fields). The institute Timnit Gebru is a part of looks like a good source for academic writing on real AI harms: https://www.dair-institute.org/ (but I haven't actually read most of her work yet, just the TESCREAL essay, and skimmed a few of her other writings).






  • scruiser@awful.systems to SneerClub@awful.systems • Moldbug has a sad

    Yeah the genocidal imagery was downright unhinged, much worse than I expected from what little I've previously read of his. I almost wonder how ideologically adjacent allies like Siskind can still stand to be associated with him (but not really, Siskind can normalize any odious insanity if it serves his purposes).


  • scruiser@awful.systems to SneerClub@awful.systems • Moldbug has a sad

    His fears are my hope: that Trump fucking up hard enough will send the pendulum of public opinion the other way (and then the Democrats use that to push some actually leftist policies through… it's a hope, not an actual prediction).

    He cultivated this incompetence and worshiped at the altar of the Silicon Valley CEO, so seeing him confronted with Elon's and Trump's clumsy incompetence is some nice schadenfreude.



  • So… on strategies for explaining to normies, a personal story often grabs people more than dry facts, so you could focus on the narrative of Eliezer trying a big idea, failing or giving up, and moving on to bigger ideas before repeating (stock bot to seed AI to AI programming language to AI safety to shut down all AI)? You'll need the Wayback Machine, but it is a simple narrative with a clear pattern?

    Or you could focus on the narrative arc of someone who previously bought into lesswrong? I don't volunteer, but maybe someone else would be willing to take that kind of attention?

    I took a stab at both approaches here: https://awful.systems/comment/6885617


  • This isn't debate club or men-of-science hour, this is a forum for making fun of idiocy around technology. If you don't like that, you can leave (or post a few more times for us to laugh at before you're banned).

    As to the particular paper that got linked, we've seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising deceptive LLMs, for example) many, many times already, so most of us weren't going to waste time tracking down the actual paper (and not just the marketing release) to pick apart the methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.


  • Big effort post… reading it will still be less effort than listening to the full Behind the Bastards podcast, so I hope you appreciate it…

    To summarize it from a personal angle…

    In 2011, I was a high schooler who liked Harry Potter fanfics. I found Harry Potter and the Methods of Rationality a fun story, so I went to the lesswrong website and was hooked on all the neat pop-science explanations. The AGI stuff and cryonics and transhumanist stuff seemed a bit fanciful but neat (after all, the present would seem strange and exciting to someone from a hundred years ago). Fast forward to 2015: HPMOR was finally finishing, I was finishing my undergraduate degree, and in the course of getting a college education I had actually taken some computer science and machine learning courses. Reconsidering lesswrong with my level of education then… I noticed MIRI (the institute Eliezer founded) wasn't actually doing anything with neural nets, they were playing around with math abstractions, and they hadn't published much formal writing (well, not actually any, but at the time I didn't appreciate peer review vs. self-publishing and preprints), and even the informal lesswrong posts had basically stopped. I had gotten into a related blog, slatestarcodex (written by Scott Alexander), which filled some of the same niche, but in 2016 Scott published a defense of Trump, normalizing him, and I realized Scott had an agenda at cross purposes with the "center-left" perspective he portrayed himself as having. At around that point, I found the reddit version of sneerclub, and it connected a lot of dots I had been missing. Far from the AI expert he presented himself as, Eliezer had basically done nothing but write loose speculation on AGI and pop-science explanations. And Scott Alexander was actually trying to push "human biodiversity" (i.e. racism disguised in pseudoscience) and neoreactionary/libertarian beliefs. From there, it became apparent to me that a lot of Eliezer's claims weren't just a bit fanciful, they were actually really, really ridiculous, and the community he had set up had a deeply embedded racist streak.

    To summarize it focusing on Eliezer…

    In the late 1990s, Eliezer was on various mailing lists, speculating with bright-eyed optimism about nanotech and AGI and genetic engineering and cryonics. He tried his hand at getting in on it, first trying to write a stock trading bot… which didn't work, then trying to write a seed AI (AI that would bootstrap to strong AGI and change the world)… which also didn't work, then trying to develop a new programming language for AI… which he never finished. Then he realized he had been reckless: an actually successful AI might have destroyed mankind, so really it was lucky he didn't succeed, and he needed to figure out how to align an AI first. So from the mid 2000s on he started getting donors (this is where Thiel comes in) to fund his research. People kind of thought he was a crank, or just didn't seem concerned with his ideas, so he concluded they must not be rational enough, and set about, first on Overcoming Bias, then his own blog, lesswrong, writing a sequence of blog posts to fix that (and putting any actual AI research on hold). They got moderate attention, which exploded in the early 2010s when a side project of writing Harry Potter fanfiction took off. He used this fame to get more funding and spread his ideas further. Finally, around the mid 2010s, he pivoted to actually trying to do AI research again… MIRI has a sparse (compared to the number of researchers they hired and how productive good professors in academia are) collection of papers focused on an abstract concept for AI called AIXI, which basically depends on having infinite computing power and isn't remotely implementable in the real world. Last I checked they didn't get any further than that. Eliezer was skeptical of neural network approaches, derisively thinking of them as voodoo science trying to blindly imitate biology with no proper understanding, so he wasn't prepared for NNs taking off in mid 2012 and leading to GPT and LLM approaches. So when ChatGPT started looking impressive, he started panicking, leading to him going on a podcast circuit professing doom (after all, if he and his institute couldn't figure out AI alignment, no one can, and we're likely all doomed, for reasons he has written tens of thousands of words in blog posts about without being refuted at a quality he believes is valid).

    To tie off some side points:

    • Peter Thiel was one of the original funders of Eliezer and his institution. It was probably a relatively cheap attempt to buy reputation, and it worked to some extent. Peter Thiel has cut funding since Eliezer went full doomer (Thiel probably wanted Eliezer as a Silicon Valley hype man, not an apocalypse cult leader).

    • As Scott continued to write posts defending the far-right while posturing as center-left, Slatestarcodex got an increasingly racist audience, culminating in a spin-off forum with full-on 14-words white supremacists. He has played a major role in the alt-right pipeline that produced some of Trump's most loyal supporters.

    • Lesswrong also attracted some of the neoreactionaries (libertarian wackjobs who want a return to monarchy), among them Mencius Moldbug (real name Curtis Yarvin). Yarvin has written about strategies for dismantling the federal government, which DOGE is now implementing.

    • Eliezer may not have been much of a researcher himself, but he inspired a bunch of people, so a lot of OpenAI researchers buy into the hype and/or doom. Sam Altman uses Eliezer's terminology as marketing hype.

    • As for lesswrong itself… what is original isn't good, and what's good isn't original. Lots of the best sequences are just a remixed form of books like Kahneman's "Thinking, Fast and Slow". And the worst sequences demand you favor Eliezer's take on Bayesianism over actual science, or are focused on the coming AI salvation/doom.

    • Other organizations have taken on the "AI safety" mantle. They are more productive than MIRI, in that they actually do stuff with actually implemented "AI", but what they do is typically contrive (emphasis on contrive) scenarios where LLMs will "act" "deceptive" or "power-seeking" or whatever scary buzzword you can imagine, and then publish papers about it with titles and abstracts that imply the scenarios are much more natural than they really are.

    Feel free to ask any follow-up questions if you genuinely want to know more. If you actually already know about this stuff and are looking for a chance to evangelize for lesswrong or the coming LLM God, the mods can smell that out and you will be shown the door, so don't bother (we get one or two people like that every couple of weeks).


  • The sequence of links hopefully lays things out well enough for normies? I think it does, but I've been aware of the scene since the mid 2010s, so I'm not the audience that needs it. I can almost feel sympathy for Sam dealing with all the doomers, except he uses the doom and hype to market OpenAI and he lied a bunch, so not really. And I can almost feel sympathy for the board, getting lied to and outmaneuvered by a sociopathic CEO, but they are a bunch of doomers from the sound of it, so, eh. I would say they deserve each other; it's the rest of the world that doesn't deserve them (from the teacher dealing with the LLM slop plugged into homework, to the website admin fending off scrapers, to legitimate ML researchers getting the attention sucked away while another AI winter starts to loom, to the machine cultist not saving for a retirement fund and having panic attacks over the upcoming salvation or doom).


  • As to cryonics… both LLM doomers and accelerationists have no need for a frozen purgatory when the techno-rapture is just a few years around the corner.

    As for the rest of the shiny futuristic dreams, they have given way to ugly practical realities:

    • no magic nootropics, just Scott telling people to take Adderall and other rationalists telling people to microdose LSD

    • no low-hanging fruit in terms of gene editing (as epistaxis pointed out over on reddit), so they're left with eugenics and GeneSmith's insanity

    • no Drexler nanotech, so they are left hoping (or fearing) the god-AI can figure it out (which is also a problem for ever reviving cryonically frozen people)

    • no exocortex, just overpriced Google glasses and a hallucinating LLM "assistant"

    • no neural jacks (or neural lace or whatever the cyberpunk term for them is), just Elon murdering a bunch of lab animals and offering (temporary) hope to paralyzed people

    The future is here, and it's subpar compared to the early 2000s fantasies. But hey, you can rip off Ghibli's style for your shitty fanfic projects, so there are a few upsides.


  • Even without the sci-fi nonsense, the political elements of the story also feel absurd: the current administration staying on top of the situation, making reasoned (if not correct) responses, and keeping things secret feels implausible given current events. It kind of shows the political biases of the authors that they can manage to imagine the Trump administration acting so normally or competently. Oh, and the hyper-competent Chinese spies (and the Chinese having no chance at catching up without them) feel like another one of the authors' biases coming through.




  • He made some predictions about AI back in 2021 that, if you squint hard enough and totally believe the current hype about how useful LLMs are, you could claim are relatively accurate.

    His predictions here: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

    And someone scoring them very very generously: https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far

    My own scoring:

    The first prompt programming libraries start to develop, along with the first bureaucracies.

    I don't think any sane programmer or scientist would credit the current "prompt engineering" "skill set" as comparable to programming libraries, and AI agents still aren't what he was predicting for 2022.

    Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.

    There was a jump from GPT-2 to GPT-3, but the subsequent releases in 2022-2025 were not as qualitatively big.

    Revenue is high enough to recoup training costs within a year or so.

    Hahahaha, no… they are still losing money per customer, much less recouping training costs.

    Instead, the AIs just make dumb mistakes, and occasionally "pursue unaligned goals" but in an obvious and straightforward way that quickly and easily gets corrected once people notice

    The safety researchers have made this one "true" by teeing up prompts specifically to get the AI to do stuff that sounds scary to people who don't read their actual methods, so I can see how the doomers are claiming success for this prediction in 2024.

    The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics.

    They also try to contrive scenarios

    Emphasis on the word "contrive".

    The age of the AI assistant has finally dawned.

    So this prediction is for 2026, but earlier predictions claimed we would have lots of actually useful (if narrow) use-case apps by 2022-2024, so we are already off target for this prediction.

    I can see how they are trying to anoint him as a prophet, but I don't think anyone not already drinking the Kool-Aid will buy it.