Model Details
Model Architecture
urLLM-KO_EN-2.7B is an auto-regressive language model based on the optimized transformer architecture of princeton-nlp/Sheared-LLaMA-2.7B.
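A minimal usage sketch with the transformers library is shown below. The repo id is an assumption; substitute the actual Hugging Face Hub path of this checkpoint.

```python
# Minimal generation sketch (hedged: repo id below is hypothetical;
# replace it with the actual Hub path of this model).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "urLLM-KO_EN-2.7B"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "안녕하세요, 오늘 날씨는"  # Korean prompt: "Hello, today's weather is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```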
Training Corpus
The model was trained on selected datasets from the Modu Corpus, Korean Wikipedia, and Kaggle English News (approximately 36 GB in total).
Vocab Expansion
After expansion, the vocabulary size is 51,385 tokens.
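As an illustration of how such an expansion is typically done in transformers (a hedged sketch under stated assumptions, not the authors' training code; the actual token list and procedure are not specified in this card):

```python
# Illustrative vocabulary expansion: add tokens to the base tokenizer and
# resize the model's embedding matrices to match. The tokens here are
# hypothetical examples, not the actual expansion set.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "princeton-nlp/Sheared-LLaMA-2.7B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

new_tokens = ["안녕", "뉴스"]  # hypothetical Korean tokens to add
tokenizer.add_tokens(new_tokens)

# Resize input/output embeddings so the model accepts the enlarged vocabulary;
# the new rows are randomly initialized and learned during continued pretraining.
model.resize_token_embeddings(len(tokenizer))
print(len(tokenizer))  # after the full expansion this would be 51385
```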
Model Card Contact
For errors or additional questions about this model card, contact [email protected].