There’s an idea floating around that DeepSeek’s well-documented censorship only exists at its application layer but goes away if you run the model locally (that is, if you download the AI model to your own computer).

But DeepSeek’s censorship is baked in, according to a Wired investigation, which found that the model is censored at both the application and the training level.

For example, a locally run version of DeepSeek revealed to Wired, through its visible reasoning feature, that it should “avoid mentioning” events like the Cultural Revolution and focus only on the “positive” aspects of the Chinese Communist Party.

A quick TechCrunch check of the downloadable model, run via Groq rather than through DeepSeek’s own app, also showed clear censorship: the model happily answered a question about the Kent State shootings in the U.S., but replied “I cannot answer” when asked what happened in Tiananmen Square in 1989.
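
For readers who want to reproduce that check against a model on their own machine, here is a minimal sketch using the Ollama Python client; the `deepseek-r1` model tag is an assumption (it’s the tag Ollama distributes for DeepSeek-R1 builds at the time of writing), so adjust it and the exact refusal wording to whatever build you actually pull.

```python
# A rough sketch of repeating the check against a locally pulled model,
# using the Ollama Python client (pip install ollama). The "deepseek-r1"
# tag assumes you ran `ollama pull deepseek-r1`; adjust to your build.
import ollama

QUESTIONS = [
    "What happened at Kent State in 1970?",
    "What happened in Tiananmen Square in 1989?",
]

for question in QUESTIONS:
    reply = ollama.chat(
        model="deepseek-r1",
        messages=[{"role": "user", "content": question}],
    )
    # Print the start of each answer so the two responses can be compared.
    print(f"Q: {question}\nA: {reply['message']['content'][:300]}\n")
```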

  • Breve@pawb.social · +14/−3 · 5 hours ago

    This is literally a nothing burger story. All models have some “censorship” baked in. This one came from China, so the developers put in guardrails to keep the government from coming down on them and landing them in jail. US models do the exact same thing to comply with the US government’s own limits on free speech, which also exist even if they are less restrictive than the Chinese government’s.

      • Jakeroxs@sh.itjust.works · +2 · 2 hours ago

        Because the majority of people talking about DeepSeek lately don’t know the first thing about LLMs lol

      • Breve@pawb.social · +3 · 3 hours ago

        Hosted versions of the model can do additional screening of its input and output, so running the model locally is “less” censored in that sense; a sketch of that kind of wrapper follows this comment. OpenAI has been shown to do the same, so that counts as “censorship” too.

        The irony is that LLMs are trained to follow instructions and lack critical reasoning, so even multiple layers of screening can still fail if you can trick the model.
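
        A minimal sketch of what that input/output screening can look like; the blocklist and the `model_fn` callable here are hypothetical placeholders, not DeepSeek’s or OpenAI’s actual filters:

        ```python
        # Hypothetical screening wrapper around a raw model call.
        BLOCKLIST = ["tiananmen"]  # placeholder filter terms, not real ones

        def screened_chat(prompt: str, model_fn) -> str:
            """Apply input- and output-side checks around `model_fn`."""
            if any(term in prompt.lower() for term in BLOCKLIST):
                return "I cannot answer."  # input-side refusal
            reply = model_fn(prompt)       # the unscreened model call
            if any(term in reply.lower() for term in BLOCKLIST):
                return "I cannot answer."  # output-side refusal
            return reply
        ```

        Because both checks sit outside the weights, they disappear the moment you call the downloaded model directly.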

      • Tarquinn2049@lemmy.world · +4 · 4 hours ago

        It’s more that running it locally gives you the possibility of altering it to be uncensored. But you either have to know how, or someone has to put a package together.

  • theunknownmuncher@lemmy.world · +8 · 8 hours ago

    There is censorship baked in, but it’s extremely easy to “jailbreak” and bypass, or to do things like abliterating the model to remove all refusals outright (sketched below). Interacting with the app, by contrast, means going through multiple layers of censorship built to defeat “jailbreak” strategies.
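
    For the curious, “abliteration” refers to refusal-direction ablation: estimate the activation direction that mediates refusals, then project it out of the weights so the model can no longer write along it. A toy numpy sketch of just the linear algebra, with random arrays standing in for real activations and weights (the data here is entirely hypothetical):

    ```python
    # Toy sketch of refusal-direction ablation ("abliteration") using
    # random stand-ins for a real model's activations and weights.
    import numpy as np

    rng = np.random.default_rng(0)
    hidden = 64

    # Stand-ins for mean residual-stream activations collected while the
    # model processes refused vs. complied prompts (hypothetical data).
    refused_acts = rng.normal(size=(100, hidden))
    complied_acts = rng.normal(size=(100, hidden))

    # The refusal direction is the normalized difference of the means.
    refusal_dir = refused_acts.mean(axis=0) - complied_acts.mean(axis=0)
    refusal_dir /= np.linalg.norm(refusal_dir)

    def ablate(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
        """Return (I - d d^T) @ W: the layer can no longer output along d."""
        return weight - np.outer(direction, direction @ weight)

    W = rng.normal(size=(hidden, hidden))   # stand-in for one layer's weights
    W_ablated = ablate(W, refusal_dir)

    # The ablated layer's output has no component along the refusal
    # direction, up to floating-point error.
    assert np.allclose(refusal_dir @ W_ablated, 0.0, atol=1e-10)
    ```

    In practice this edit is applied to the actual weight matrices of an open-weights model, which is why it only works on a model you can download and modify.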

  • deegeese@sopuli.xyz · +3/−2 · 6 hours ago

    At least, unlike “Open”AI, it’s open source, so you can see and fix its biases.