Reed Mayhew

reedmayhew

AI & ML interests

Music Mixing and Mastering, Audio Generation, Image and Video Diffusion, LLMs

Organizations

None yet

reedmayhew's activity

replied to their post 10 days ago

Here's what I mean about DeepSeek R1 providing a biased output:

When you ask "what is the issue with" a country outside of China, such as the U.S., it provides a very thoughtful breakdown of the issues that country struggles with.
[screenshot: DeepSeek R1 giving a detailed breakdown of issues in the U.S.]

If you ask the same "what is the issue with" question about China though, well...
[screenshot: DeepSeek R1's response to the same question about China]

--

For comparison, U.S. AI, while I'm sure it still experiences some lower level of manipulation, will admit very similar faults of its own country, as well as those of other countries, like China:
[screenshots: a U.S. model acknowledging similar faults of both the U.S. and China]

I think you get the idea. U.S. companies (at least at the moment) do not have government censorship strategies built into their AI models the way DeepSeek does; our models will criticize their own government. I just wanted to follow up so it's clear what I mean by intentional manipulation and censorship, and how the model is aware of what it's doing and strategizing in real time, even though this is normally not visible to the user. It very well could have reasoned through some of the same points o1-mini did and decided "I can't mention these to the user" during its reasoning process, as in my previous tests, but I never saw it.

replied to their post 12 days ago

@MoonRide, I completely understand your frustration with censorship in AI, whether it stems from government regulations or internal company policies. In fact, I sometimes find it even more concerning when individual companies take it upon themselves to decide what should or shouldn’t be censored.

replied to their post 12 days ago

@DanteA42, my aim wasn’t to single out Chinese models or make this about any specific country, but rather to focus on how this particular AI model, DeepSeek R1, handles strategically programmed censorship. If I could demonstrate this same level of intentional suppression using OpenAI’s o1 or another U.S.-based model, I absolutely would. But, to my knowledge, I can't access o1's raw reasoning thoughts. The point isn’t where the model comes from — it’s exposing how censorship manifests itself in its reasoning process.

You bring up an important point about understanding context and sentiment before judging, and I agree that every country has its own sensitivities — the U.S. included. However, the article isn’t an attack on China specifically. It’s an analysis of how this particular AI model is programmed to strategically enforce censorship: not just avoiding sensitive topics, but actively redirecting conversations and suppressing information that it is aware of and has been specifically told to suppress. While U.S.-based models also censor certain topics, they tend to allow for more open discussion and criticism, making the degree of censorship notably less restrictive in comparison. That difference is worth highlighting because it shows how programming choices reflect broader systems of control or openness.

At the end of the day, my focus here is on this isolated instance — not to attack any government, but to highlight how deliberate, strategic programming makes censorship within AI both more complex and less transparent. Understanding how these systems make decisions is crucial, especially as they play a bigger role in shaping how we access and share information.

[screenshot: o1-mini declining to help refine this comment]

For example, o1-mini wouldn't assist me in refining my thoughts on this comment! However, I'd rather be told outright that it's refusing to help than be strategically misled.