t1101675 committed · Commit 1dc32f9 · verified · 1 Parent(s): 9df13cd

Update README.md

Files changed (1): README.md (+46 -3)
README.md CHANGED
@@ -1,3 +1,46 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - databricks/databricks-dolly-15k
+ language:
+ - en
+ metrics:
+ - rouge
+ base_model:
+ - openai-community/gpt2-medium
+ pipeline_tag: text-generation
+ ---
+
+ # MiniLLM-gpt2-340M
+
+ [paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)
+
+ **MiniLLM-gpt2-340M** is a gpt2-medium (340M) model distilled from [gpt2-xlarge (1.5B)](https://huggingface.co/MiniLLM/teacher-gpt2-1.5B) on [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k).
+
+ <p align='left'>
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/7hBWGZzYMJihCRQ70XoiQ.png" width="1000">
+ </p>
+
+ **Note**: MiniLLM requires an [SFT model](https://huggingface.co/MiniLLM/init-gpt2-340M) for initialization to perform the PPO optimization.
+
+ ## Evaluation
+
+ We ask GPT-4 to score the responses generated by MiniLLM. The prompts are taken from [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k) (test set), [self-instruct](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json), and [vicuna](https://github.com/lm-sys/vicuna-blog-eval).
+
+ <p align='left'>
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/rDXnaDbKH5mBYAmqGC-_a.png" width="1000">
+ </p>
+ ## Baseline Models
+ [SFT w/o KD](https://huggingface.co/MiniLLM/SFT-gpt2-340M)
+ [KD](https://huggingface.co/MiniLLM/KD-gpt2-340M)
+ [SeqKD](https://huggingface.co/MiniLLM/SeqKD-gpt2-340M)
+
+ ## Citation
+ ```
+ @inproceedings{minillm,
+   title={MiniLLM: Knowledge Distillation of Large Language Models},
+   author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
+   booktitle={Proceedings of ICLR},
+   year={2024}
+ }
+ ```