DeepSeek R1 32B model is reasoning less and often answering inaccurately
#673 · by rishadsojon · opened
So, I am using both DeepSeek Chat and HuggingChat, and thanks to Hugging Face for hosting inference for this model, but honestly the answers here are far behind DeepSeek Chat, which presumably runs the full 671B R1 model. As far as I can tell, something is wrong with the model's configuration; otherwise, why would it answer like a 1.5B model? If the generation token budget is being restricted, please don't do that: it cripples the model's ability to reason, and that reasoning is the whole point of using this model over ChatGPT. Requesting Hugging Face to take action ASAP.
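
For anyone who wants to rule out a chat-UI token cap, here is a minimal sketch that queries the 32B distill checkpoint directly through the Hugging Face Inference API with an explicit `max_tokens`. The model ID is the public `deepseek-ai/DeepSeek-R1-Distill-Qwen-32B` repo; whether HuggingChat actually applies a lower server-side cap is an assumption, not something confirmed in this thread.

```python
# Sketch: call the 32B distill directly with a generous token budget so the
# <think> ... </think> reasoning chain is not cut off. That HuggingChat caps
# tokens more aggressively is an unverified assumption.
from huggingface_hub import InferenceClient

client = InferenceClient("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")

response = client.chat_completion(
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
    max_tokens=4096,   # room for the full reasoning trace plus the answer
    temperature=0.6,   # within DeepSeek's recommended 0.5-0.7 range for R1-series models
)
print(response.choices[0].message.content)
```

If the direct call reasons noticeably better than the chat UI with the same prompt, that would point to a configuration difference rather than the weights themselves.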
This model isn't the original DeepSeek R1; it's the Qwen-32B distill, which was never that intelligent anyway.