⚠️ Note:
These model weights are for personal testing purposes only. The goal is to find a quantization method that achieves high compression while preserving as much of the model's original performance as possible. The current compression scheme may not be optimal, so please use these weights with caution.
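One way to gauge the compression side of this trade-off is to compare memory footprints of the 4-bit and bfloat16 variants. The sketch below is illustrative only and not part of the original card; it uses transformers' get_memory_footprint(), and loading both variants requires enough memory for each.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_name = "deepseek-ai/DeepSeek-V2-Lite"

# 4-bit NF4 model, same config as in the Creation section below
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_name, trust_remote_code=True, quantization_config=bnb_config
)
print(f"4-bit footprint: {model_4bit.get_memory_footprint() / 1e9:.2f} GB")

# bfloat16 baseline for comparison
model_bf16 = AutoModelForCausalLM.from_pretrained(
    model_name, trust_remote_code=True, torch_dtype=torch.bfloat16
)
print(f"bf16 footprint:  {model_bf16.get_memory_footprint() / 1e9:.2f} GB")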
Creation
This model was created with the bitsandbytes and transformers libraries, as shown in the code snippet below.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_name = "deepseek-ai/DeepSeek-V2-Lite"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4 quantization for the weights
    bnb_4bit_use_double_quant=True,        # double quantization for extra compression
    bnb_4bit_compute_dtype=torch.bfloat16  # bitsandbytes supports bfloat16 compute
)

bnb_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

bnb_model.push_to_hub("basicv8vc/DeepSeek-V2-Lite-bnb-4bit")
tokenizer.push_to_hub("basicv8vc/DeepSeek-V2-Lite-bnb-4bit")
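For reference, a minimal sketch of how the published 4-bit checkpoint could be loaded back for inference. The prompt and generation settings are illustrative and not part of the original card; the quantization config stored with the checkpoint is applied automatically by from_pretrained.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "basicv8vc/DeepSeek-V2-Lite-bnb-4bit"

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)

# Illustrative prompt; adjust generation settings as needed.
inputs = tokenizer("Explain NF4 quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))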
Sources
- Base model: deepseek-ai/DeepSeek-V2-Lite