huihui-ai committed on
Commit 4a64b36 · verified · 1 Parent(s): 7c78605

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -13,7 +13,7 @@ language:
 ---
 # MicroThinker-3B-Preview
 
-MicroThinker-3B-Preview, a new model fine-tuned from the [huihui-ai/Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.1-8B-Instruct-abliterated) model, focused on advancing AI reasoning capabilities.
+MicroThinker-3B-Preview, a new model fine-tuned from the [huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated) model, focused on advancing AI reasoning capabilities.
 
 ## Use with ollama
 
@@ -58,14 +58,14 @@ huggingface-cli download --repo-type dataset huihui-ai/FineQwQ-142k --local-dir
 3. Used only the huihui-ai/FineQwQ-142k, Trained for 1 epoch:
 
 ```
-swift sft --model huihui-ai/Llama-3.1-8B-Instruct-abliterated --model_type llama3_1 --train_type lora --dataset "data/FineQwQ-142k/FineQwQ-142k.jsonl" --num_train_epochs 1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --max_length 21710 --quant_bits 4 --bnb_4bit_compute_dtype bfloat16 --bnb_4bit_quant_storage bfloat16 --lora_rank 8 --lora_alpha 32 --gradient_checkpointing true --weight_decay 0.1 --learning_rate 1e-4 --gradient_accumulation_steps 16 --eval_steps 100 --save_steps 100 --logging_steps 20 --system "You are a helpful assistant. You should think step-by-step." --output_dir output/MicroThinker-3B-Preview/lora/sft --model_author "huihui-ai" --model_name "MicroThinker-3B-Preview"
+swift sft --model huihui-ai/Llama-3.1-8B-Instruct-abliterated --model_type llama3_1 --train_type lora --dataset "data/FineQwQ-142k/FineQwQ-142k.jsonl" --num_train_epochs 1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --max_length 21710 --quant_bits 4 --bnb_4bit_compute_dtype bfloat16 --bnb_4bit_quant_storage bfloat16 --lora_rank 8 --lora_alpha 32 --gradient_checkpointing true --weight_decay 0.1 --learning_rate 1e-4 --gradient_accumulation_steps 16 --eval_steps 100 --save_steps 100 --logging_steps 20 --system "You are a helpful assistant. You should think step-by-step." --output_dir output/MicroThinker-8B-Preview/lora/sft --model_author "huihui-ai" --model_name "MicroThinker-8B-Preview"
 ```
 
 4. Save the final fine-tuned model. After you're done, input `exit` to exit.
 Replace the directories below with specific ones.
 
 ```
-swift infer --model huihui-ai/Llama-3.2-3B-Instruct-abliterated --adapters output/Llama-3.2-8B-Instruct-abliterated/lora/sft/v0-20250106-193759/checkpoint-8786 --stream true --merge_lora true
+swift infer --model huihui-ai/Llama-3.1-8B-Instruct-abliterated --adapters output/Llama-3.1-8B-Instruct-abliterated/lora/sft/v0-20250119-175713/checkpoint-19500 --stream true --merge_lora true
 ```
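Once the LoRA adapter has been merged into the base weights with `--merge_lora true`, the resulting checkpoint is an ordinary Hugging Face model directory and can be loaded with `transformers`. The sketch below is illustrative and not part of this commit: the `model_path` is a placeholder for wherever ms-swift wrote the merged weights on your machine, and the system prompt is the one passed to `swift sft` in the diff above.

```python
# Minimal sketch: load the merged MicroThinker checkpoint with transformers.
# NOTE: model_path is a placeholder; point it at the merged-output directory
# that `swift infer ... --merge_lora true` actually produced.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "output/MicroThinker-3B-Preview/lora/sft/checkpoint-merged"  # hypothetical path

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Same system prompt that was used during fine-tuning (see the swift sft command above).
messages = [
    {"role": "system", "content": "You are a helpful assistant. You should think step-by-step."},
    {"role": "user", "content": "How many days are there between 2025-01-01 and 2025-03-01?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```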