And now LLMs being trained on data generated by LLMs. No possible way that could go wrong.
Everyone who didn’t get an Echo as a gift, I’d imagine.
Musk has an AI project. Techbros have deliberately been sucking up to Trump. I’m pretty sure AI training will be declared fair use and copyright laws will remain the same for everybody else.
As you say, LLMs have really useful applications. The problem is that “being a reliable virtual assistant” is not one of them. This current push is driven by shareholders and companies who are afraid to be seen as missing out. It’s the classic case of having what you think is a solution and trying to find the problem, rather than starting from a problem and trying to find a solution.
Someone has died due to a touchscreen. A woman had a Tesla, which you put into park, drive, or reverse via a touchscreen. She’d always had trouble with it, got it wrong, and reversed into a pond. That cut the power, so she couldn’t open the door. To get to the emergency escape handle you have to remove the speakers in the doors. So she drowned.
The kicker? Her husband was a millionaire, and he immediately put out a statement absolving Tesla and Musk of any wrongdoing.
Unless everybody now says “I’ll never buy anything from Dodge”. If it doesn’t impact sales, it really will become the new norm.
I’ve no idea if they’ve changed course again. I went there out of curiosity after this post, but it’s all hidden behind a login screen and I couldn’t be arsed to sign up or try to remember or find my credentials.
But since you don’t really want to hide your content if you’re small and trying to grow, I’m going to assume that it’s insular and doesn’t want people to stumble on its content.
I blame the producers. If they’d just done one film per book, all would have been fine.
Divergent is a terrible series that Shailene Woodley absolutely acts her socks off in.
The entry for Outer Wilds should just read “nothing. Don’t try to learn anything about this game before playing”.
I don’t think Vance is a useful idiot. That implies that he doesn’t understand the consequences of what he’s doing. He’s nominally second in command of a party that’s working to ultimately put his wife and kids in camps. I think he understands what he’s doing. He’s just gambling on it being successful enough and retaining enough power that he won’t personally see those consequences.
Daredevil never really landed for me, but I still welcome this very much, because they’re apparently bringing back the rest of the Defenders, and one thing I did love was Jessica Jones.
Yeah, the vibe was definitely left-wing before, and the shift was a shock to many of the more prominent members. You started getting subs along the lines of “this is to discuss how Jews control all the money globally”, and prominent members who avoided saying anything completely explicit on the site itself, but whose Twitter accounts revealed them to be members of swastika-using Nazi groups.
The owner declared himself “a free speech absolutist”, said he wanted to promote conflict on the site, and a lot of people, myself included, just went “yeah, I see where this is going” and left.
Squabblr’s the one where the owner quickly outed himself as a right-wing transphobe and caused a mass exodus of early adopters, wasn’t it?
If you follow AI news you should know that the industry is basically out of training data, that returns on extra training diminish sharply (so extra training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.
You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demonstrated answering, and it isn’t any better than its predecessor or other LLMs at solving maths problems whose answers it doesn’t already have hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) per answer, for a marginal improvement in functionality.
The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.