---
library_name: peft
---

## Dataset and training hyperparameters

- GPT-4-generated dataset
- Size: 80 examples

Training hyperparameters (reconstructed as a `TrainingArguments` sketch below):

- per_device_train_batch_size: 4
- gradient_accumulation_steps: 4
- warmup_steps: 100
- max_steps: 200
- learning_rate: 2e-4
- fp16: True
- logging_steps: 1
- output_dir: outputs
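
The training script itself is not included in the card; as a minimal sketch, and assuming the values above were passed to `transformers.TrainingArguments`, the setup would look like this:

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; every other argument
# is left at its transformers default.
training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    warmup_steps=100,
    max_steps=200,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=1,
    output_dir="outputs",
)
```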

## Training procedure

The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
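
The card does not name the base model, so the following is only a sketch of how this quantization config could be applied when loading it; `"base-model-id"` is a placeholder, and only the non-default values from the list above are set explicitly.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization config above; unspecified fields keep their
# transformers defaults (the 4-bit fields are inactive since load_in_8bit=True).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)

# NOTE: "base-model-id" is a placeholder; the card does not state the base model.
model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
```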


## LoRA configuration

```python
from peft import LoraConfig

config = LoraConfig(
    r=16,  # LoRA rank (dimension of the low-rank update matrices)
    lora_alpha=32,  # alpha scaling; the LoRA update is scaled by lora_alpha / r
    # target_modules=["q_proj", "v_proj"],  # optionally restrict LoRA to specific modules
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",  # use CAUSAL_LM for causal LMs, SEQ_2_SEQ_LM for seq2seq
)
```
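
A short usage sketch, assuming `model` is the 8-bit base model from the loading example above:

```python
from peft import get_peft_model

# Wrap the quantized base model with the LoRA adapter defined above;
# only the LoRA matrices remain trainable.
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```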

### Framework versions

- PEFT 0.6.0.dev0