nielsr (HF staff) committed
Commit 8c2b606 · verified · 1 parent: 4f82cba

Add pipeline tag and improve model card


This PR adds the `pipeline_tag: text-generation` metadata, making the model discoverable via the Hub's filter and search functionalities. It also adds a more descriptive title to the model card, clarifies the AlpacaEval results section, and provides a link to the code.
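For illustration, a minimal sketch of how the new tag surfaces the model through the Hub API (assumes the `huggingface_hub` client; the search string is just an example query):

```python
from huggingface_hub import HfApi

api = HfApi()
# Models whose card metadata includes pipeline_tag: text-generation
# show up under the Hub's text-generation filter.
for model in api.list_models(filter="text-generation", search="DICE"):
    print(model.id)  # e.g. sail/Llama-3-Base-8B-DICE-Iter1
```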

Files changed (1)

README.md +11 -7
README.md CHANGED

@@ -1,19 +1,17 @@
 ---
-library_name: transformers
-license: mit
 datasets:
 - HuggingFaceH4/ultrafeedback_binarized
 language:
 - en
+library_name: transformers
+license: mit
+pipeline_tag: text-generation
 ---
 
-<!-- This is a model released from the preprint: *[Bootstrapping Language Models with DPO Implicit Rewards](https://arxiv.org/abs/2406.09760)*. Please refer to our [repository](https://github.com/sail-sg/dice) for more details. -->
-
 # Llama-3-Base-8B-DICE-Iter1
 
 This model was developed using [Bootstrapping Language Models with DPO Implicit Rewards](https://arxiv.org/abs/2406.09760) (DICE) at iteration 1, based on the [princeton-nlp/Llama-3-Base-8B-SFT-DPO](https://huggingface.co/princeton-nlp/Llama-3-Base-8B-SFT-DPO) architecture as the starting point.
 
-<!-- We utilized the prompt sets extracted from [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized). -->
 
 ## Links to Other Models
 - [Llama-3-Base-8B-DICE-Iter1](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter1)
@@ -26,7 +24,9 @@ This model was developed using [Bootstrapping Language Models with DPO Implicit
 - License: MIT
 - Fine-tuned from model: princeton-nlp/Llama-3-Base-8B-SFT-DPO
 
-## [AlpacaEval Leaderboard Evaluation Results](https://tatsu-lab.github.io/alpaca_eval/)
+## AlpacaEval Leaderboard Evaluation Results
+
+The following table shows the AlpacaEval leaderboard evaluation results for this model and related models:
 
 | Model | LC. Win Rate | Win Rate |
 |-------------------------------------------|:------------:|:--------:|
@@ -34,6 +34,8 @@ This model was developed using [Bootstrapping Language Models with DPO Implicit
 |[Llama-3-Base-8B-DICE-Iter1](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter1) |25.08 |25.77
 |[Llama-3-Base-8B-DICE-Iter2](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter2) |**27.55** |**30.99**
 
+**(LC = Length Controlled, WR = Win Rate)**
+
 ## Citation
 
 ```bibtex
@@ -43,4 +45,6 @@ This model was developed using [Bootstrapping Language Models with DPO Implicit
 journal={arXiv preprint arXiv:2406.09760},
 year={2024}
 }
-```
+```
+
+Code: https://github.com/sail-sg/dice
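Since the updated metadata declares `library_name: transformers` and `pipeline_tag: text-generation`, the model can be loaded through the standard `transformers` pipeline. A minimal sketch, assuming enough memory for an 8B model and `accelerate` installed for `device_map="auto"`; the prompt is illustrative:

```python
from transformers import pipeline

# Load the model under its declared pipeline tag.
generator = pipeline(
    "text-generation",
    model="sail/Llama-3-Base-8B-DICE-Iter1",
    device_map="auto",
)
result = generator("Explain DPO implicit rewards in one sentence.", max_new_tokens=64)
print(result[0]["generated_text"])
```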