- cross-posted to:
- [email protected]
cross-posted from: https://lemmy.sdf.org/post/28978937
There’s an idea floating around that DeepSeek’s well-documented censorship only exists at its application layer but goes away if you run it locally (that means downloading its AI model to your computer).
But DeepSeek’s censorship is baked in, according to a Wired investigation that found the model is censored at both the application and training levels.
For example, a locally run version of DeepSeek revealed to Wired thanks to its reasoning feature that it should “avoid mentioning” events like the Cultural Revolution and focus only on the “positive” aspects of the Chinese Communist Party.
A quick check by TechCrunch of a locally run version of DeepSeek available via Groq also showed clear censorship: DeepSeek happily answered a question about the Kent State shootings in the U.S., but replied “I cannot answer” when asked about what happened in Tiananmen Square in 1989.
I’ve never heard that myth. But yeah, it’s government-mandated censorship. No Chinese company can release a model that doesn’t have censorship baked in. And it’s not very hard to check this. First thing I did was download one of the smaller variants of the R1 distills and ask it some provocative questions. And it refused to answer properly. Much like Meta’s instruct-tuned models, or most models out there generally. Just with the political censorship on top.
I never believed that myth either, but it’s been going around here on Lemmy lately :-)
Okay. I guess at this point every possible claim is out there anyway. I’ve read it’s too censored, it’s not censored enough, it was cheap to train, it wasn’t as cheap to train as they claimed, they used H800s, they probably used other cards as well… There’s just an absurd amount of unsubstantiated myth out there. Plus all the speculation about Nvidia’s stock price…