---
license: apache-2.0
datasets:
  - duongttr/vi-dataset-for-pretrain
language:
  - vi
metrics:
  - perplexity
pipeline_tag: text-generation
widget:
  - text: Hôm nay tôi rất vui  
  - text: Hoàng Sa, Trường Sa của Việt
model-index:
  - name: chronopt-research/vietnamese-gpt2-base
    results:
      - task:
          type: text-generation
        metrics:
          - type: perplexity
            value: 51.35
            verified: true
---
# Vietnamese `gpt2-base`

<!-- Provide a quick summary of what the model is/does. -->

This is a `gpt2-base` model pretrained for the Vietnamese language using a causal language modeling (CLM) objective. The architecture was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
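
For reference, here is a minimal generation sketch using the 🤗 Transformers `pipeline` API; the repository id is taken from the model-index above, and the generation settings are illustrative:

```python
from transformers import pipeline

# Vietnamese GPT-2 base loaded as a text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="chronopt-research/vietnamese-gpt2-base",
)

# Continue one of the widget prompts from the front matter.
outputs = generator("Hôm nay tôi rất vui", max_new_tokens=30, num_return_sequences=1)
print(outputs[0]["generated_text"])
```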

## Model Description
GPT-2 was originally a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw text only, with no human labelling (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
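
As an illustration of this CLM setup, the sketch below feeds the input ids back in as labels; the model shifts them by one position internally, so the loss is the next-token prediction loss and its exponential is the per-token perplexity reported below. The prompt is just one of the widget examples:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chronopt-research/vietnamese-gpt2-base")
model = AutoModelForCausalLM.from_pretrained("chronopt-research/vietnamese-gpt2-base")

# Labels are the input ids themselves; the model shifts them internally so
# each position predicts the next token (the CLM objective described above).
enc = tokenizer("Hôm nay tôi rất vui", return_tensors="pt")
out = model(**enc, labels=enc["input_ids"])

print(out.loss.item())             # average next-token cross-entropy
print(torch.exp(out.loss).item())  # corresponding perplexity
```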

This is the **base version** of GPT-2, with 137M parameters.

You can find other pretrained versions here: [gpt2-medium](https://huggingface.co/chronopt-research/vietnamese-gpt2-medium), [gpt2-large]()

## Dataset used for pretraining
The pretraining corpus is a combination of multiple Vietnamese datasets used for pretraining CLMs such as GPT and GPT-2.

The dataset consists of:
- [`vietgpt/covid_19_news_vi`](https://huggingface.co/datasets/vietgpt/covid_19_news_vi)
- [`hieunguyen1053/binhvq-news-corpus`](https://huggingface.co/datasets/hieunguyen1053/binhvq-news-corpus)
- [`oscar (unshuffled_deduplicated_vi)`](https://huggingface.co/datasets/oscar)
- [`vietgpt/wikipedia_vi`](https://huggingface.co/datasets/vietgpt/wikipedia_vi)

You can find the combined version here: [duongttr/vi-dataset-for-pretrain](https://huggingface.co/datasets/duongttr/vi-dataset-for-pretrain)
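
A minimal loading sketch, assuming the combined dataset exposes a standard `train` split through the `datasets` library (check the dataset card for the exact configuration and split names):

```python
from datasets import load_dataset

# Stream the combined corpus; the "train" split name is an assumption,
# see the dataset card for the exact configuration.
ds = load_dataset("duongttr/vi-dataset-for-pretrain", split="train", streaming=True)

for example in ds:
    print(example)  # inspect one raw record
    break
```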

## Hyperparameters & Results
We trained the model for ~100k steps with `lr=1e-4`, `bs=2560` (`single_batch_size=32` * `num_core=8` * `grad_cum=10`), and `optimizer=adamw` on a TPU VM (v3-8) from the [TRC Program](https://sites.research.google/trc/about/). Training took around **1 day**.
|Model|Eval Loss|Eval Perplexity|
|---|---|---|
|**gpt2-base**|**3.939**|**51.35**|
|gpt2-medium|2.8676|17.5948|
|gpt2-large|-|-|
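
For orientation, the hyperparameters above map roughly onto 🤗 `TrainingArguments` as in the sketch below; this is not the original training script, and the output directory and optimizer names are illustrative:

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported settings (not the original script).
# Effective batch size: 32 per core * 8 cores * 10 accumulation steps = 2560.
args = TrainingArguments(
    output_dir="vietnamese-gpt2-base",  # hypothetical output directory
    max_steps=100_000,
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=10,
    optim="adamw_torch",
)
```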

## Contacts
Feel free to contact us via: [email]()