ModelCloud / DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2
Tags: Text Generation · Safetensors · qwen2 · gptqmodel · modelcloud · chat · qwen · deepseek · instruct · int4 · gptq · 4bit · W4A16 · conversational · 4-bit precision
License: apache-2.0
Will you convert DeepSeek-R1-Distill-Qwen-32B?
#2 · opened 11 days ago by bash99

bash99
11 days ago
I'm a little curious: why were only R1-Distill-Qwen-7B/14B converted?