---
library_name: transformers
license: apache-2.0
datasets:
- benchang1110/pretrainedtw
- HuggingFaceTB/cosmopedia-100k
language:
- zh
widget:
- text: '從前,'
  example_title: Example1
---

# Model Card for Model ID

This is a continually pretrained version of [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) tailored for Traditional Chinese. The continual-pretraining dataset contains roughly 2B tokens.
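
As a quick-start sketch, the model can be loaded with the standard `transformers` causal-LM API. The repo id below is a placeholder, since this card does not state the final model path; the prompt is taken from the widget example above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute this model's actual Hugging Face path.
model_id = "your-username/your-model-id"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference; use float32 on CPU
    device_map="auto",
)

# Prompt from the widget example above.
inputs = tokenizer("從前,", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```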