yueliu1999 committed
Commit 38d2b36 · verified · 1 Parent(s): 23c3b0d

Update README.md

Files changed (1):
  1. README.md +1 -42
README.md CHANGED
@@ -11,50 +11,9 @@ model-index:
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

  # GuardReasoner 8B

- This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the 4_1_WildGuardTrainCotDpoSelf8BMix3Weight, the 4_2_AegisTrainCotDpoSelf8BMix248Weight, the 4_3_BeaverTailsTrainCotDpoSelf8BMix248Weight and the 4_4_ToxicChatTrainCotDpoSelf8BMix248Weight datasets.

- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-06
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 4
- - gradient_accumulation_steps: 64
- - total_train_batch_size: 256
- - total_eval_batch_size: 32
- - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- - lr_scheduler_type: cosine
- - num_epochs: 2.0
-
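For context, the two removed totals are derived values: total_train_batch_size is the per-device train_batch_size times num_devices times gradient_accumulation_steps, and total_eval_batch_size is eval_batch_size times num_devices. A minimal sketch of that arithmetic (variable names are illustrative, mirroring the list above):

```python
# Recompute the effective batch sizes implied by the hyperparameters above.
# Illustrative only; this is not the actual training code.
train_batch_size = 1              # per-device train micro-batch
eval_batch_size = 8               # per-device eval batch
num_devices = 4                   # multi-GPU data parallelism
gradient_accumulation_steps = 64

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size)  # 256, matching the reported value
print(total_eval_batch_size)   # 32, matching the reported value
```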
- ### Training results
-
-
- ### Framework versions
-
- - Transformers 4.46.1
- - Pytorch 2.5.1+cu124
- - Datasets 3.1.0
- - Tokenizers 0.20.3
 
+ This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) via R-SFT and HS-DPO.
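R-SFT and HS-DPO are the two stages of the GuardReasoner training recipe: reasoning supervised fine-tuning followed by hard-sample DPO. Because the updated card drops the usage sections, a minimal loading sketch may help; it assumes this repo hosts a standard transformers causal-LM checkpoint, and the repo id and prompt below are assumptions rather than values taken from the diff:

```python
# Minimal sketch, assuming a standard causal-LM checkpoint.
# The repo id and prompt format are assumptions, not stated in this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yueliu1999/GuardReasoner-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "..."  # placeholder: use the guard-prompt format from the GuardReasoner paper
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```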