• Soyweiser@awful.systems
    5 months ago

    We get it, we just don’t agree with the assumptions made. Also love that he is now broadening the paperclip thing into more scenarios, missing that the point of the paperclip example was to abstract away from the specific wording of the utility function (like disaster-prep people training for a zombie invasion: the actual incident doesn’t matter much for the important things you want to test). It is quite dumb. Did somebody troll him by saying ‘we will just make the LLM not make paperclips bro’, and did that break him so badly that he is replying up his own ass with this talk about alien minds?

    e: depressing seeing people congratulate him for a good take. Also “could you please start a podcast”. (A Schrödinger’s sneer.)

    • BigMuffin69@awful.systemsOP
      5 months ago

      did somebody troll him by saying ‘we will just make the LLM not make paperclips bro?’

      rofl, I cannot even begin to fathom all the 2010-era LW posts where peeps were like, “we will just tell the AI to be nice to us uwu” and Yud and his ilk were like “NO DUMMY THAT WOULDN’T WORK B.C. X Y Z.” Fast fwd to 2024: the best example we have of an “AI system” turns out to be the blandest, most milquetoast yes-man entity, thanks to RLHF (aka the just-tell-the-AI-to-be-nice-bruv strat). Worst of all for the rats, no examples of goal-seeking behavior or instrumental convergence. It’s almost like the future they conceived on their little blogging site has very little in common with the real world.

      If I were Yud, the best way to salvage this massive L would be to say “back in the day, we could not conceive that you could create a chat bot good enough to fool people with its output by compressing the entire internet into what is essentially a massive interpolative database, but ultimately, these systems have very little to do with the sort of agentic intelligence that we foresee.”

      But this fucking paragraph:

      (If a googol monkeys are all generating using English letter-triplet probabilities in a Markov chain, their probability of generating Shakespeare is vastly higher but still effectively zero. Remember this Markov Monkey Fallacy anytime somebody talks about how LLMs are being trained on human text and therefore are much more likely up with human values; an improbable outcome can be rendered “much more likely” while still being not likely enough.)
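      For anyone who hasn’t seen one, the “Markov Monkey” setup he describes is just a character-level trigram chain: look at the last two letters, sample the next one from the frequencies observed in a corpus. A minimal sketch (the toy corpus and function names here are my own, not anything from the post):

      ```python
      import random
      from collections import defaultdict

      def build_trigram_model(text):
          """Map each two-letter context to counts of the letters that follow it."""
          model = defaultdict(lambda: defaultdict(int))
          for i in range(len(text) - 2):
              context, nxt = text[i:i + 2], text[i + 2]
              model[context][nxt] += 1
          return model

      def generate(model, seed, length, rng=random.Random(0)):
          """Sample one letter at a time from the trigram probabilities."""
          out = seed
          for _ in range(length):
              choices = model.get(out[-2:])
              if not choices:  # dead end: context never seen in the corpus
                  break
              letters, counts = zip(*choices.items())
              out += rng.choices(letters, weights=counts)[0]
          return out

      corpus = "to be or not to be that is the question"
      model = build_trigram_model(corpus)
      print(generate(model, "to", 40))
      ```

      Every trigram it emits appeared somewhere in the corpus, so the output is locally English-ish, which is exactly why the monkeys’ Shakespeare odds go up — and exactly why “goes up” still rounds to zero.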

      ah, the sweet, sweet aroma of absolute copium. Don’t believe your eyes and ears, people: LLMs have everything to do with AGI, and there is a smol bean demon inside the LLMs, catastrophically misaligned with human values, that will soon explode into the superintelligent lizard god the prophets have warned about.