---
library_name: transformers
license: apache-2.0
datasets:
- benchang1110/pretrainedtw
- HuggingFaceTB/cosmopedia-100k
language:
- zh
widget:
- text: '在很久以前,這座島上'
example_title: Example1
---
# Model Card for Model ID
This is a continually pretrained version of [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T), tailored for Traditional Chinese. The continued-pretraining dataset contains roughly 2B tokens.
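Since this is a causal language model distributed via the `transformers` library, it can be loaded and prompted like any other text-generation checkpoint. The repo id below is a placeholder (an assumption, since this card does not state the final model id) and should be replaced with the id shown on the model page:

```python
# Minimal sketch of loading and prompting this model with transformers.
# MODEL_ID is a placeholder -- replace it with the actual repo id.
MODEL_ID = "your-username/tinyllama-zhtw"  # assumption, not the real id


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a Traditional Chinese continuation for the given prompt."""
    # Imported lazily so the module can be inspected without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Same prompt as the widget example in the front matter above.
    print(generate("在很久以前,這座島上"))
```

The prompt mirrors the `widget` example in the YAML front matter, so the hosted inference widget and this snippet exercise the same input.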