Update README.md
README.md
@@ -16,7 +16,7 @@ We have released the MgGPT family of large language models, which is a collectio
 <!-- ## Model Developers -->
 <!-- We are from the King Abdullah University of Science and Technology (KAUST), the Chinese University of Hong Kong, Shenzhen (CUHKSZ) and the Shenzhen Research Institute of Big Data (SRIBD). -->
 ## Variations
-MgGPT families come in a range of parameter sizes ——
+The MgGPT family comes in a range of parameter sizes: 7B, 8B, 13B, 32B, and 70B. Each size is available in a base variant and a -chat variant.
 <!-- ## Paper -->
 <!-- The paper can be accessed at [link](https://huggingface.co/FreedomIntelligence/AceGPT-v2-70B-Chat/blob/main/Alignment_at_Pre_training__a_Case_Study_of_Aligning_LLMs_in_Arabic.pdf). -->
 ## Input