Problems fine-tuning with PEFT

#4
by charlescearl - opened

I tried to fine-tune the deepseek-coder-7b-instruct-v1.5 model on A10 GPUs (AWS g5.12xlarge) using 4-bit quantization with bitsandbytes. My bitsandbytes configuration:

import torch
from transformers import BitsAndBytesConfig

# Note: gradient_checkpointing is not a BitsAndBytesConfig field; it is
# enabled separately (e.g. via TrainingArguments or
# model.gradient_checkpointing_enable()).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

The model is loaded with device_map="auto".
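For reference, the load call looks roughly like the sketch below (the variable names are mine):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-7b-instruct-v1.5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shards the model across all four A10s on g5.12xlarge
)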

I kept running into the error message:

You can't train a model that has been loaded in 8-bit precision on a different device than the one you're training on.

However, both 8-bit bitsandbytes loading and loading onto a single GPU blow past my GPU memory. Any suggestions?
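For completeness, the PEFT side of my setup follows the standard QLoRA recipe, roughly like the sketch below (the LoRA hyperparameters here are illustrative, not my exact values):

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Prepare the quantized model for training (casts layer norms to fp32,
# enables input gradients, turns on gradient checkpointing).
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)

lora_config = LoraConfig(
    r=16,                # illustrative rank, not the exact value used
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()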
