Update README.md
README.md
CHANGED

@@ -58,7 +58,7 @@ huggingface-cli download --repo-type dataset huihui-ai/FineQwQ-142k --local-dir
 3. Used only the huihui-ai/FineQwQ-142k, Trained for 1 epoch:
 
 ```
-swift sft --model huihui-ai/Llama-3.1-8B-Instruct-abliterated --model_type llama3_1 --train_type lora --dataset "data/FineQwQ-142k/FineQwQ-142k.jsonl" --num_train_epochs 1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --max_length 21710 --quant_bits 4 --bnb_4bit_compute_dtype bfloat16 --bnb_4bit_quant_storage bfloat16 --lora_rank 8 --lora_alpha 32 --gradient_checkpointing true --weight_decay 0.1 --learning_rate 1e-4 --gradient_accumulation_steps 16 --eval_steps
+swift sft --model huihui-ai/Llama-3.1-8B-Instruct-abliterated --model_type llama3_1 --train_type lora --dataset "data/FineQwQ-142k/FineQwQ-142k.jsonl" --num_train_epochs 1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --max_length 21710 --quant_bits 4 --bnb_4bit_compute_dtype bfloat16 --bnb_4bit_quant_storage bfloat16 --lora_rank 8 --lora_alpha 32 --gradient_checkpointing true --weight_decay 0.1 --learning_rate 1e-4 --gradient_accumulation_steps 16 --eval_steps 500 --save_steps 500 --logging_steps 100 --system "You are a helpful assistant. You should think step-by-step." --output_dir output/MicroThinker-8B-Preview/lora/sft --model_author "huihui-ai" --model_name "MicroThinker-8B-Preview"
 ```
 
 4. Save the final fine-tuned model. After you're done, input `exit` to exit.
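As a side note on the hyperparameters in the command above, here is a minimal sketch (the function names are ours, not part of swift) of how two of them combine: `--per_device_train_batch_size 1` with `--gradient_accumulation_steps 16` yields an effective batch of 16 samples per optimizer step, and standard LoRA scales the low-rank update by `lora_alpha / lora_rank`.

```python
def effective_batch_size(per_device_batch: int, grad_accum_steps: int,
                         num_devices: int = 1) -> int:
    # Gradients are accumulated over `grad_accum_steps` micro-batches
    # (per device) before each optimizer step.
    return per_device_batch * grad_accum_steps * num_devices


def lora_scaling(lora_alpha: int, lora_rank: int) -> float:
    # Standard LoRA applies the low-rank update A @ B scaled by alpha / rank.
    return lora_alpha / lora_rank


# Values from the swift sft command above:
print(effective_batch_size(1, 16))  # 16 samples per optimizer step
print(lora_scaling(32, 8))          # 4.0
```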