ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2
Text Generation · Safetensors · qwen2 · gptqmodel · modelcloud · chat · qwen · deepseek · instruct · int4 · gptq · 4bit · W4A16 · conversational · 4-bit precision · License: apache-2.0
Will you convert DeepSeek-R1-Distill-Qwen-32B? #2
by bash99 · opened Mar 13

bash99 (Mar 13):
I'm a little curious: why were only R1-Distill-Qwen-7B/14B converted?