Hi, I'm Eric and I work at a big chip company making chips and such! I do math for a job, but it's cold hard stochastic optimization that makes people who know names like Tychonoff and Sylow weep.
My pfp is Hank Azaria in Heat, but you already knew that.
is that Link??
Like, even if I believed in FOOM, I'll take my chances with the stupid sexy basilisk over radiation burns, and it's not even fucking close.
Neo-Nazi nutcase having a normal one.
It's so great that this isn't falsifiable, in the sense that doomers can keep saying, well "once the model is epsilon smarter, then you'll be sorry!", but back in the real world: the model has been downloaded 10 million times at this point. Somehow, the diamondoid bacteria has not killed us all yet. So yes, we have found out: Yud was wrong. The basilisk is haunting my enemies, and she never misses.
Bonus sneer: "we are going to find out if Yud was right" Hey fuckhead, he suggested nuking data centers to prevent models better than GPT4 from spreading. R1 is better than GPT4, and it doesn't require a data center to run, so if we had acted on Yud's geopolitical plans for nuclear holocaust, billions would have been incinerated for absolutely NO REASON. How do you not look at this shit and go, yeah, maybe don't listen to this bozo? I've been wrong before, but god damn, dawg, I've never been starvingInRadioactiveCratersWrong.
excuse me, what the fuck is this
Folks around here told me AI wasn't dangerous; fellas, I just witnessed a rogue Chinese AI do 1 trillion dollars of damage to the US stock market /s
Next Sunday when I go to my EA priest's group home, I will admit to having invoked the chain rule to compute a gradient 1 trillion times since my last confessional. For this I will do penance for the 8 trillion future lives I have snuffed out and whose utility has been consumed by the basilisk.
Me: Oh boy, I can't wait to see what my favorite thinkers of the EA movement will come up with this week :)
Text from Geoff: "Morally stigmatize AI developers so they're considered as socially repulsive as Nazi pedophiles. A mass campaign of moral stigmatization would be more effective than any amount of regulation."
Another rationalist W: don't gather empirical evidence that AI will soon usurp / exterminate humanity. Instead, as the chief authorities of morality, engage in societal blackmail against anyone who's ever heard the word TensorFlow.
Spotted in the Wild:
Does scoot actually know how computers work? Asking for a friend.
My father-in-law is a hoarder of both physical and digital things. His house is filled with hard drives where he has, like, stored copies of every movie ever made as mp4s, and then he sends the drives to us because he has no physical space for them, since junk from like 30 years ago is piling up in his living room. So now my house is filled with random ass hard drives of (definitely not pirated) movies.
I knew there was a reason I couldn't part with my CD tower.
It's just pure grift; they've created an experiment with an outcome that tells us no new information. Even if models stop "improving" today, it's a static benchmark, and by EOY worked solutions will be leaked into the training of any new models, so performance will saturate to 90%. At which point, Dan and the AI Safety folks at his fake ass not-4-profit can clutch their pearls and claim humanity is obsolete so they need more billionaire funding to save us, & Sam and Dario can get more investors to buy them GPUs. If anything, I'm hoping the FrontierMath debacle will inoculate us all against this bullshit (at least I think it's stolen some of the thunder from their benchmark's attempt to hype the end of days).
Trump promised me he'd get the price of them down; I'm sure we can start a GoFundMe to replace the gay people's eggs.
"…has data access to much but not all of the dataset."
Huh! I wonder what part of the dataset had the 25% of questions they got right in it.
I can't believe they fucking got me with this one. I remember back in August(?) Epoch was getting quotes from top mathematicians like Terence Tao to review the benchmark, and he was quoted saying it would be a big deal for a model to do well on this benchmark, that it would be several years before a model could solve all these questions organically, etc., so when O3 dropped and got a big jump over the previous SotA, people (myself included) were blown away. At the same time, red flags were going up in my mind: Epoch was yapping about how this test was completely confidential and no one would get to see their very special test, so the answers wouldn't get leaked. But then how in the hell did they evaluate this model on the test? There's no way O3 was run locally by Epoch at ~$1000 a question -> OAI had to be given the benchmark to run against in house -> maybe they had multiple attempts against it and were tuning the model / recovering questions from API logs / paying mathematicians in house to produce answers to the problems so they could generate their own solution set??
No. The answer is much stupider. The entire company of Epoch ARE mathematicians working for OAI to make marketing grift to pump the latest toy. They got me, lads; I drank the snake oil prepared specifically for people like me to drink :(
Terrible news: the worst person I know just made a banger post.