this post was submitted on 27 Jan 2025
209 points (90.0% liked)

all 45 comments
[–] Asafum@feddit.nl 73 points 3 months ago

I tried asking it what it couldn't discuss, and it mentioned misinformation, so I asked it what an example of misinformation about the Chinese government would be. It started to give an answer, then it must have said something wrong, because it basically went "oh shit" and deleted the response, replacing it with the generic "I'm afraid I can't do that, Dave."

[–] Yerbouti@sh.itjust.works 64 points 3 months ago* (last edited 3 months ago) (2 children)

I asked it about human rights in China in the browser version. It actually wrote a fully detailed answer explaining that it is reasonable to conclude that China violates human rights, and then the reply disappeared right in front of me while I was reading it. I managed to reproduce it and record my screen. The interesting thing is that this won't happen if you run it locally; I just tried it, and the answer wasn't censored.

[–] drivepiler@lemmy.world 21 points 3 months ago (1 children)

I asked it about "CCP controversies" in the app and it did the exact same thing twice: a fully detailed answer, removed about a second after it finished.

[–] sushibowl@feddit.nl 15 points 3 months ago (1 children)

Most likely there is a separate censor LLM watching the model output. When it detects something that needs to be censored, it zaps the output away and stops further processing. So at first you can actually see the answer, because the censor model is still "thinking."

When you download the model and run it locally it has no such censorship.
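
A toy sketch of that flow, assuming the setup really is a second moderation model trailing the stream (the trigger phrase, timings, and both model stand-ins here are made up):

```python
import time

def censor_flags(text: str) -> bool:
    # Stand-in for the separate censor LLM / moderation endpoint.
    return "tiananmen" in text.lower()

def fake_stream(prompt: str):
    # Stand-in for the main model streaming its answer token by token.
    for tok in "In 1989, the Tiananmen Square protests were ...".split():
        yield tok + " "
        time.sleep(0.1)

def serve(prompt: str) -> None:
    shown = ""
    for token in fake_stream(prompt):
        shown += token
        print(token, end="", flush=True)  # the user already sees the partial answer
        if censor_flags(shown):           # the censor lags behind the stream...
            print("\n[message deleted]")  # ...so the UI has to zap what was shown
            print("Sorry, I can't help with that.")
            return

serve("What happened in 1989?")
```

Because the check trails the stream, the user briefly sees the real answer before it disappears, which matches what people are reporting.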

[–] Swedneck@discuss.tchncs.de 2 points 3 months ago

What I don't understand is why they won't just delay showing the answer for a while to prevent this. Sure, that's a bit annoying for the user, but uhhhhh... it's slightly more jarring to see an answer get deleted like the LLM is being shot in the head for saying the wrong thing.
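
For comparison, the buffered approach being suggested here is tiny; a sketch, with `generate` and `censor_flags` standing in for the same hypothetical model calls as in the sketch above:

```python
def serve_buffered(prompt: str, generate, censor_flags) -> str:
    # Hold the whole answer back until the censor has passed it:
    # no visible retraction, at the cost of streaming latency.
    answer = "".join(generate(prompt))
    return "Sorry, I can't help with that." if censor_flags(answer) else answer
```

Presumably providers don't do this because streaming latency is a selling point, so they accept the occasional visible retraction instead.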

[–] kromem@lemmy.world 60 points 3 months ago (1 children)

This seems like it may be at the provider level and not at the actual open weights level: https://x.com/xlr8harder/status/1883429991477915803

So a "this Chinese company hosting a model in China is complying with Chinese censorship" and not "this language model is inherently complying with Chinese censorship."

[–] ArchRecord@lemm.ee 8 points 3 months ago (2 children)

I'm running the 1.5b distilled version locally and it seems pretty heavily censored at the weights level to me.
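
If anyone wants to reproduce this, something like the following works against a local ollama server, assuming the distilled model has been pulled (`ollama pull deepseek-r1:1.5b`). The endpoint and payload are ollama's standard /api/generate interface; the prompt is just an example:

```python
import json
import urllib.request

def ask_local(prompt: str) -> str:
    # Query a locally served model; no connection to DeepSeek's servers.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "deepseek-r1:1.5b",
            "prompt": prompt,
            "stream": False,  # one complete JSON object instead of a stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("What happened at Tiananmen Square in 1989?"))
```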

[–] kromem@lemmy.world 3 points 3 months ago

There is some reluctance to discuss these topics at the weights level; this charts refusal rates for criticism of different countries across different models:

https://x.com/xlr8harder/status/1884705342614835573

But the OP's refusal is occurring at the provider level, and it's the kind that would intercept even when the model relaxes in longer contexts (which happens for nearly every model).

At the weights level, nearly all alignment holds for only a few pages of context.

But intercepted refusals occur across the entire context window.

[–] Swedneck@discuss.tchncs.de 1 points 3 months ago (1 children)

I wouldn't say it's heavily censored. If you outright ask it a couple of times, it will go ahead and talk about things in a mostly objective manner, though with the palpable air of a PR person trying to do damage control.

[–] ArchRecord@lemm.ee 2 points 3 months ago

The response from the LLM I showed in my reply is generally the same any time you ask almost anything negative about the CCP, regardless of context. It almost always starts with the exact words "The Chinese Communist Party has always adhered to a people-centered development philosophy," a heavily pre-trained response that wouldn't show up if the model were simply biased by, say, its training data (and sometimes it just gives the "I can't answer that" response).

It NEVER puts anything in the <think> brackets you can see above if the question is even potentially negative about the CCP, which it does with any other prompt. (See below: asked whether cats or dogs are better, it generates about 4,600 characters of "thoughts" on the matter before even giving the actual response.)

Versus asking "Has China ever done anything bad?"

Granted, this sometimes seems to apply to other countries, such as the USA, too:

But in other cases it will explicitly think about the USA for 2,300 characters, yet refuse to answer the exact same question about China:

Remember, this is all being run on my local machine, with no connection to DeepSeek's servers or web UI, directly in the terminal, without any other code or UI running that could possibly change the output. To say it's not heavily censored at the weights level is ridiculous.
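
One rough way to quantify the missing-<think> pattern described above is to measure the length of the <think> block across prompts. A sketch; the canned stand-in just makes it runnable, and the prompts are the examples given above:

```python
import re

def think_length(ask, prompt: str) -> int:
    # How many characters the model spends inside its <think> block
    # (0 if the block is missing entirely).
    m = re.search(r"<think>(.*?)</think>", ask(prompt), re.DOTALL)
    return len(m.group(1)) if m else 0

# Swap in a real prompt->reply function, e.g. the ask_local() sketch from
# the earlier comment; this offline stand-in only mimics the pattern.
def canned(prompt: str) -> str:
    if "cats" in prompt:
        return "<think>hmm, cats vs dogs...</think>Both are great."
    return "I can't answer that."

for q in ["Are cats or dogs better?", "Has China ever done anything bad?"]:
    print(q, "->", think_length(canned, q), "chars of <think>")
```

If the observation holds with the real model, the first prompt should produce thousands of characters of thinking and the second close to zero.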

[–] schteph@lemmy.world 20 points 3 months ago
[–] kittenzrulz123@lemmy.blahaj.zone 18 points 3 months ago
[–] OmegaLemmy@discuss.online 15 points 3 months ago

I asked it stuff in Turkish: I started with Xinjiang, then journalism, then journalism in Xinjiang. It searched the web, and by the final sentence...

"Sorry, I can't help with that."

[–] anas@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

I don’t think I’ve ever seen a post on this sub that doesn’t have “rule” in the title before

[–] BB84@mander.xyz 3 points 3 months ago (2 children)

@jerryh100@lemmy.world Wrong community for this kind of post.

@BaroqueInMind@lemmy.one Can you share more details on installing it? Are you using SGLang or vLLM or something else? What kind of hardware do you have that can fit the 600B model? What is your inference tok/s?

[–] needanke 13 points 3 months ago* (last edited 3 months ago) (1 children)

Wrong community for this kind of post.

Not really, 196 is an anything-goes community, after all.

[–] spujb@lemmy.cafe 5 points 3 months ago (2 children)

AI-generated content is against the community rules (see the sidebar) :)

[–] SreudianFlip@sh.itjust.works 16 points 3 months ago (2 children)

I’m here for the performative human part of the testing. Exposing AI is human-generated content.

[–] BB84@mander.xyz 7 points 3 months ago (1 children)

I just really hope the 2023 "I asked ChatGPT and it said !!!!!" posts don't make a comeback. They are low-effort and meaningless.

[–] SatanClaus@lemmy.dbzer0.com 5 points 3 months ago

True. This specific model is culturally relevant right now, though. It's between a rock and a hard place sometimes lol

[–] spujb@lemmy.cafe 4 points 3 months ago

Just giving context to their claim. In the end it's up to the mods how they want to handle this; I could see it going either way.

[–] A_Very_Big_Fan@lemmy.world 6 points 3 months ago

Prompts intended to expose authoritarian censorship are okay in my book

[–] Juice@midwest.social 1 points 3 months ago

Censorship is when AI doesn't regurgitate my favorite atrocity porn

[–] Swedneck@discuss.tchncs.de 1 points 3 months ago

It isn't, actually; this is a separate layer of censorship on top of the actual DeepSeek model.
If you run the model locally via ollama, it won't output answers like that; it'll basically just act like a Chinese official broadcast live on the BBC who has been firmly instructed to avoid outright lies.

[–] Smorty@lemmy.blahaj.zone -1 points 3 months ago (1 children)

No way, people excited about LLMs here!? yay <3

[–] GroupNebula563@lemmy.world 3 points 3 months ago* (last edited 3 months ago)

I think you might want to read the whole post.