One of Spez’s answers in the infamous Reddit AMA struck me:
Two things happened at the same time: the LLM explosion put all Reddit data use at the forefront, and our continuing efforts to rein in costs…
I am beginning to think all they wanted to do was get their share of the AI pie, since we know Reddit’s data is one of the major datasets for training conversational models. But they are such a bunch of bumbling fools, as well as being chronically understaffed, that the whole thing exploded in their face. At this stage their only chance of survival may well be to be bought out by OpenAI…
I’m very sure that this is the case. Reddit is pissed they gave away all the content as training data for free while struggling to monetize their platform adequately.
But I suspect the damage is already done. There are projects like “Orca” from Microsoft that largely skip learning from source data by distilling from ChatGPT and GPT-4 instead.
They missed the timing, but they’re too stubborn to admit it, so they’re doubling down.
What’s more, GPT-4 is near the upper bound of what you can collect on the web in that way. They basically took everywhere you’d look for information and grabbed it, along with as much structure as they could… There’s plenty more information on the Internet, but the structure and quality are much lower. What’s left is mostly data-poor, unstructured interactions between humans.
Moving forward, everyone is talking about synthetic datasets - you can’t go bigger without some system to generate (or refine) training data - and if you have to generate the data anyway, you’re not going to pay much for a dataset that is merely decent.
So yeah, Reddit most definitely missed the timing.
I think Elon’s claim that he’s made Twitter profitable (despite a lot of evidence to the contrary) is also creating pressure on the other social networks to chase overly aggressive monetization schemes.