---
language:
- nl
license: llama2
---

<p align="center" style="margin:0;padding:0">
<img src="./chocollama_logo.png" alt="ChocoLlama logo" width="500" style="margin-left:auto; margin-right:auto; display:block"/>
</p>
<div style="margin:auto; text-align:center">
<h1 style="margin-bottom: 0">ChocoLlama</h1>
<em>A Llama-2/3-based family of Dutch language models</em>
</div>

## ChocoLlama-2-7B-tokentrans-base: Getting Started

We present **ChocoLlama-2-7B-tokentrans-base**, a language-adapted version of Meta's Llama-2-7b.
We replaced the original tokenizer with a RoBERTa-based tokenizer trained only on Dutch data.
The token embeddings of the original Llama-2 model were then reinitialized using the token translation algorithm proposed by [Remy et al.](https://arxiv.org/pdf/2310.03477).
The model was subsequently fine-tuned on 32B Dutch Llama-2 tokens (104GB) using LoRa.
With the new, Dutch-specific tokenizer, the same data amounts to 22.6B tokens, a reduction of 29.4% compared to the original Llama-2 tokenizer.
Note that this is a base model, not optimized for conversational behavior.
If conversational behavior is desired for your use case, we recommend fine-tuning this model on your own Dutch data or using the instruction-tuned version, [ChocoLlama-2-7B-tokentrans-instruct](https://huggingface.co/ChocoLlama/ChocoLlama-2-7B-tokentrans-instruct).

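To see the effect of the Dutch tokenizer on your own text, a minimal sketch along the lines below compares token counts against the original Llama-2 tokenizer. This assumes you have accepted the license for the gated `meta-llama/Llama-2-7b-hf` repository; the sample sentence is purely illustrative.

```python
from transformers import AutoTokenizer

dutch_tokenizer = AutoTokenizer.from_pretrained('ChocoLlama/ChocoLlama-2-7B-tokentrans-base')
llama2_tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')  # gated repository

text = "De Vlaamse regering keurde vandaag een nieuw decreet over hernieuwbare energie goed."
print("Dutch tokenizer:  ", len(dutch_tokenizer.tokenize(text)), "tokens")
print("Llama-2 tokenizer:", len(llama2_tokenizer.tokenize(text)), "tokens")
```

On typical Dutch text, the Dutch-specific tokenizer should produce noticeably fewer tokens, consistent with the corpus-level reduction of 29.4% reported above.
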
Use the code below to get started with the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('ChocoLlama/ChocoLlama-2-7B-tokentrans-base')
model = AutoModelForCausalLM.from_pretrained('ChocoLlama/ChocoLlama-2-7B-tokentrans-base')
```
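
From there, plain text completion works through the standard `generate` API. The snippet below is a minimal sketch rather than an official recipe: the bf16 and `device_map` settings, the sampling parameters and the Dutch prompt are illustrative choices of ours, and `device_map="auto"` assumes `accelerate` is installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'ChocoLlama/ChocoLlama-2-7B-tokentrans-base'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 keeps memory manageable; use float32 on CPU
    device_map="auto",           # assumption: `accelerate` is installed for automatic placement
)

# Base model: plain continuation of a Dutch prompt, without any chat template.
prompt = "Gent is een stad in Vlaanderen die bekend staat om"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```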

## Model Details

ChocoLlama is a family of open LLMs specifically adapted to Dutch, contributing to the state-of-the-art of Dutch open LLMs in their weight class.

We provide 6 variants (of which 3 base and 3 instruction-tuned models):
- **ChocoLlama-2-7B-base** ([link](https://huggingface.co/ChocoLlama/ChocoLlama-2-7B-base)): A language-adapted version of Meta's Llama-2-7b, fine-tuned on 32B Dutch Llama-2 tokens (104GB) using LoRa.
- **ChocoLlama-2-7B-instruct** ([link](https://huggingface.co/ChocoLlama/ChocoLlama-2-7B-instruct)): An instruction-tuned version of ChocoLlama-2-7B-base, fine-tuned on a collection of Dutch translations of instruction-tuning datasets, using SFT followed by DPO.
- **ChocoLlama-2-7B-tokentrans-base** ([link](https://huggingface.co/ChocoLlama/ChocoLlama-2-7B-tokentrans-base)): A language-adapted version of Meta's Llama-2-7b, using a Dutch RoBERTa-based tokenizer. The token embeddings of this model were reinitialized using the token translation algorithm proposed by [Remy et al.](https://arxiv.org/pdf/2310.03477). The model was subsequently fine-tuned on the same Dutch dataset as ChocoLlama-2-7B-base, again using LoRa.
- **ChocoLlama-2-7B-tokentrans-instruct** ([link](https://huggingface.co/ChocoLlama/ChocoLlama-2-7B-tokentrans-instruct)): An instruction-tuned version of ChocoLlama-2-7B-tokentrans-base, fine-tuned on the same dataset as ChocoLlama-2-7B-instruct, again using SFT followed by DPO.
- **Llama-3-ChocoLlama-8B-base** ([link](https://huggingface.co/ChocoLlama/Llama-3-ChocoLlama-8B-base)): A language-adapted version of Meta's Llama-3-8B, fine-tuned on the same Dutch dataset as ChocoLlama-2-7B-base, again using LoRa.
- **Llama-3-ChocoLlama-8B-instruct** ([link](https://huggingface.co/ChocoLlama/Llama-3-ChocoLlama-8B-instruct)): An instruction-tuned version of Llama-3-ChocoLlama-8B-base, fine-tuned on the same dataset as ChocoLlama-2-7B-instruct, again using SFT followed by DPO.

For benchmark results for all models, including comparisons with their base models and other prominent Dutch LLMs, we refer to our paper [here](some_url).

### Model Description

- **Developed by:** [Matthieu Meeus](https://huggingface.co/matthieumeeus97), [Anthony Rathé](https://huggingface.co/anthonyrathe)
- **Funded by:** [Vlaams Supercomputer Centrum](https://www.vscentrum.be/), through a grant of approximately 40K GPU hours (NVIDIA A100-80GB)
- **Language(s):** Dutch
- **License:** [Llama-2 Community License](https://ai.meta.com/llama/license/)
- **Finetuned from model:** [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)

### Model Sources

- **Repository:** Will be released soon.
- **Paper:** Will be released soon.

## Uses

### Direct Use

Since this is a base model, we do not recommend using it directly for your use cases. We instead recommend:
1. Fine-tuning this model on your specific use case (see the LoRa sketch under Training Procedure below)
2. Leveraging the instruction-tuned version of this model, [ChocoLlama-2-7B-tokentrans-instruct](https://huggingface.co/ChocoLlama/ChocoLlama-2-7B-tokentrans-instruct) (a usage sketch follows this list)

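For the second option, the instruction-tuned variant is typically used through a chat-style prompt. The sketch below is hypothetical: it assumes the instruct model's tokenizer ships a chat template (check the ChocoLlama-2-7B-tokentrans-instruct model card for the exact prompt format), and the Dutch example question is our own.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

instruct_id = 'ChocoLlama/ChocoLlama-2-7B-tokentrans-instruct'
tokenizer = AutoTokenizer.from_pretrained(instruct_id)
model = AutoModelForCausalLM.from_pretrained(instruct_id, device_map="auto")  # assumes `accelerate` is installed

# Assumption: the instruct tokenizer defines a chat template; otherwise build the prompt manually.
messages = [{"role": "user", "content": "Schrijf een korte vacaturetekst voor een data-analist in Antwerpen."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
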
### Downstream Use

Since this is a base model, it can easily be adapted to specific use cases that require Dutch language understanding and generation.
We expect this model to be particularly useful for use cases in the domains explicitly covered in our dataset, e.g. the analysis and/or generation of Dutch job descriptions, corporate filings and legislation.

### Out-of-Scope Use

- Use cases requiring a chat-style interface: since this is a base model, it cannot be used reliably for turn-based chat interaction. Please refer to the instruction-tuned version of this model instead.
- Use cases requiring understanding or generation of text in languages other than Dutch: the dataset on which this model was fine-tuned does not contain data in languages other than Dutch, hence we expect significant catastrophic forgetting to have occurred for English, the language Llama-2 was originally trained on.

## Bias, Risks, and Limitations

We have taken care to include only widely used and high-quality data in our dataset. Some of this data has been filtered by the original creators.
However, we did not explicitly conduct any additional filtering of this dataset with regard to biased or otherwise harmful content.

### Recommendations

We recommend fine-tuning this model on your own curated data to maximally avoid undesirable outputs.

## Training Details

### Training Data

We collected a diverse set of Dutch natural language data:

1. **OSCAR**
   The bulk of our data comes from the Dutch portion of [OSCAR](https://oscar-corpus.com), January 2023 version, based on Common Crawl. This dataset includes **93 GB** of text (~28.6B tokens).

2. **Open Subtitles**
   We collected Dutch text from movie subtitles, focusing on unique movies either in Dutch or with Dutch subtitles. This dataset contains **5 GB** of text (~1.54B tokens) from **214k samples**.

3. **Project Gutenberg**
   We downloaded **970 full Dutch books** from [Project Gutenberg](https://www.gutenberg.org) using a public scraper. The dataset includes **0.3 GB** of text (~92M tokens) and is available on [Hugging Face](https://huggingface.co/datasets/ChocoLlama/gutenberg-dutch) (see the loading sketch after this list).

4. **Wikipedia**
   Using the March 2023 [Wikipedia dump](https://dumps.wikimedia.org), we included **2.5 GB** of text (~769M tokens). Despite some duplication with OSCAR, Wikipedia's high quality justifies its inclusion.

5. **Job Descriptions (TechWolf)**
   A sample of **750k Dutch job descriptions** collected over five years from public websites, provided by TechWolf. This dataset contains **1.5 GB** of text (~462M tokens).

6. **Staatsblad (Bizzy)**
   A sample of **80k legal filings** from [Het Belgisch Staatsblad](https://www.ejustice.just.fgov.be/cgi/welcome.pl). Documents were OCR-processed, and personal data was excluded. This dataset includes **1.4 GB** of text (~431M tokens), collected with help from Bizzy.

7. **Legislation (ML6)**
   **15k documents** from Flemish legislation accessed via the [Open Data API](https://www.vlaanderen.be/vlaams-parlement/de-vlaamse-codex). This dataset contains **0.2 GB** of text (~62M tokens), collected with support from ML6.

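Of these sources, the Project Gutenberg subset is released publicly. The sketch below is a hypothetical way to inspect it with the `datasets` library; the split name and column layout are assumptions, so check the dataset card.

```python
from datasets import load_dataset

# Assumption: a single "train" split; verify the actual splits and columns on the dataset card.
gutenberg_nl = load_dataset("ChocoLlama/gutenberg-dutch", split="train")
print(gutenberg_nl)     # number of rows and column names
print(gutenberg_nl[0])  # first record
```
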
### Training Procedure

This model was fine-tuned using low-rank adaptation (LoRa) with trainable embeddings, for a total of 839M trainable parameters; a sketch of how these settings map onto a `peft` configuration follows the hyperparameter list below.

#### Training Hyperparameters

- **Training regime:** bf16 non-mixed precision
- **Epochs:** 1
- **LoRa parameters:**
  - R: 8
  - Alpha: 32
  - Trainable modules: q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj, embed_tokens, lm_head
  - LoRa dropout: 0.05
- **Learning Rate:**
  - Scheduler: StepLR
  - Step size: 6212
  - Learning rate: 0.0003
  - Gamma: 0.85
- **Other parameters:**
  - Minibatch size: 16
  - Gradient accumulation steps: 8
  - Parallelization factor: 8
  - Weight decay: 0

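For readers who want to fine-tune this model further on their own Dutch data with a comparable setup, the sketch below shows one possible way (not the authors' training code) to express the LoRa settings above with Hugging Face `peft`. Data loading, the StepLR schedule and multi-GPU parallelization are omitted, and keeping `embed_tokens`/`lm_head` fully trainable via `modules_to_save` is our reading of "trainable embeddings".

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Start from the released base model; the original adaptation started from Llama-2-7b
# with the translated tokenizer, which is not reproduced here.
model = AutoModelForCausalLM.from_pretrained('ChocoLlama/ChocoLlama-2-7B-tokentrans-base')

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    modules_to_save=["embed_tokens", "lm_head"],  # assumption: embeddings trained in full, not via LoRa
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # prints the number of trainable parameters
```
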
## Evaluation

### Quantitative evaluation

We have evaluated our models on several industry-standard Dutch benchmarks, translated from their original versions. The results can be found in the table below, together with results from several other prominent Dutch models.

| Model | ARC | HellaSwag | MMLU | TruthfulQA | Avg. |
|----------------------------------------------|----------------|----------------|----------------|----------------|----------------|
| **Llama-3-ChocoLlama-instruct** | **0.48** | **0.66** | **0.49** | **0.49** | **0.53** |
| llama-3-8B-rebatch | 0.44 | 0.64 | 0.46 | 0.48 | 0.51 |
| llama-3-8B-instruct | 0.47 | 0.59 | 0.47 | 0.52 | 0.51 |
| llama-3-8B | 0.44 | 0.64 | 0.47 | 0.45 | 0.50 |
| Reynaerde-7B-Chat | 0.44 | 0.62 | 0.39 | 0.52 | 0.49 |
| **Llama-3-ChocoLlama-base** | **0.45** | **0.64** | **0.44** | **0.44** | **0.49** |
| zephyr-7b-beta | 0.43 | 0.58 | 0.43 | 0.53 | 0.49 |
| geitje-7b-ultra | 0.40 | 0.66 | 0.36 | 0.49 | 0.48 |
| **ChocoLlama-2-7B-tokentrans-instruct** | **0.45** | **0.62** | **0.34** | **0.42** | **0.46** |
| mistral-7b-v0.1 | 0.43 | 0.58 | 0.37 | 0.45 | 0.46 |
| **ChocoLlama-2-7B-tokentrans-base** | **0.42** | **0.61** | **0.32** | **0.43** | **0.45** |
| **ChocoLlama-2-7B-instruct** | **0.36** | **0.57** | **0.33** | **0.45** | **0.43** |
| **ChocoLlama-2-7B-base** | **0.35** | **0.56** | **0.31** | **0.43** | **0.41** |
| llama-2-7b-chat-hf | 0.36 | 0.49 | 0.33 | 0.44 | 0.41 |
| llama-2-7b-hf | 0.36 | 0.51 | 0.32 | 0.41 | 0.40 |

On average, Llama-3-ChocoLlama-instruct surpasses the previous state-of-the-art on these benchmarks.

### Qualitative evaluation

In our paper, we also provide an additional qualitative evaluation of all models, which we empirically find to be more reliable.
For details, we refer to the paper and to our benchmark [ChocoLlama-Bench](https://huggingface.co/datasets/ChocoLlama/ChocoLlama-Bench).

### Compute Infrastructure

All ChocoLlama models have been trained on the compute cluster provided by the [Flemish Supercomputer Center (VSC)](https://www.vscentrum.be/). We used 8 to 16 NVIDIA A100 GPUs with 80 GB of VRAM.