duongttr committed
Commit 32efea7 · 1 Parent(s): cc13823

Update README.md

Files changed (1)
  1. README.md +52 -4
README.md CHANGED
@@ -1,10 +1,58 @@
  ---
  license: apache-2.0
  datasets:
- - duongttr/vi-dataset-for-pretrain
+ - duongttr/vi-dataset-for-pretrain
  language:
- - vi
+ - vi
  metrics:
- - perplexity
+ - perplexity
  pipeline_tag: text-generation
- ---
+ widget:
+ - text: Việt Nam là quốc gia có
+ - text: Hoàng Sa, Trường Sa là của
+ model-index:
+ - name: chronopt-research/vietnamese-gpt2-base
+   results:
+   - task:
+       type: text-generation
+     metrics:
+     - type: perplexity
+       value: 51.35
+       verified: true
+ ---
+ # Vietnamese `gpt2-base`
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ This is a `gpt2-base` model pretrained for Vietnamese using the causal language modeling (CLM) objective. The GPT-2 architecture was introduced in
+ [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
+ and first released at [this page](https://openai.com/blog/better-language-models/).
+
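+ You can try the model with the `transformers` text-generation pipeline; a minimal sketch (the generation settings are illustrative):
+
+ ```python
+ from transformers import pipeline
+
+ # Load the pretrained Vietnamese GPT-2 into a text-generation pipeline
+ generator = pipeline("text-generation", model="chronopt-research/vietnamese-gpt2-base")
+
+ # Continue a Vietnamese prompt (one of the widget examples above)
+ out = generator("Việt Nam là quốc gia có", max_new_tokens=40, do_sample=True)
+ print(out[0]["generated_text"])
+ ```
+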
+ ## Model Description
+ GPT-2 was originally a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw text only, with no human labelling (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in a sentence.
+
+ This is the **base version** of GPT-2, with 137M parameters.
+
+ You can find the other pretrained versions here: [gpt2-medium](https://huggingface.co/chronopt-research/vietnamese-gpt2-medium), [gpt2-large]()
+
+ ## Dataset used for pretraining
+ The pretraining corpus is a combination of multiple Vietnamese datasets suitable for pretraining CLMs such as GPT and GPT-2.
+
+ The dataset consists of:
+ - [`vietgpt/covid_19_news_vi`](https://huggingface.co/datasets/vietgpt/covid_19_news_vi)
+ - [`hieunguyen1053/binhvq-news-corpus`](https://huggingface.co/datasets/hieunguyen1053/binhvq-news-corpus)
+ - [`oscar (unshuffled_deduplicated_vi)`](https://huggingface.co/datasets/oscar)
+ - [`vietgpt/wikipedia_vi`](https://huggingface.co/datasets/vietgpt/wikipedia_vi)
+
+ You can find the combined version here: [duongttr/vi-dataset-for-pretrain](https://huggingface.co/datasets/duongttr/vi-dataset-for-pretrain)
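+
+ A quick way to peek at the combined corpus without downloading it in full; a sketch (the `train` split name and the record layout are assumptions about the dataset):
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream the corpus instead of downloading it fully (it is large)
+ ds = load_dataset("duongttr/vi-dataset-for-pretrain", split="train", streaming=True)
+
+ # Inspect the first record
+ print(next(iter(ds)))
+ ```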
+
+ ## Hyperparameters & Results
+ We trained the model for ~100k steps with `lr=1e-4`, `bs=1920`, and `optimizer=adamw` on a TPU v3-8 VM from the [TRC Program](https://sites.research.google/trc/about/). Training took around **2.5 days**.
+ |Model|Eval Loss|Eval Perplexity|
+ |---|---|---|
+ |**gpt2-base**|**3.939**|**51.35**|
+ |gpt2-medium|2.8676|17.5948|
+ |gpt2-large|-|-|
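+
+ The reported perplexity is simply the exponential of the evaluation loss, which you can verify directly:
+
+ ```python
+ import math
+
+ # Perplexity = exp(cross-entropy eval loss)
+ print(math.exp(3.939))   # ≈ 51.4  (gpt2-base row above)
+ print(math.exp(2.8676))  # ≈ 17.59 (gpt2-medium row above)
+ ```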
+
+ ## Contacts
+ Feel free to contact us via [email]()