---
base_model:
- allenai/Llama-3.1-Tulu-3.1-8B
- sh2orc/Llama-3.1-Korean-8B-Instruct
- cognitivecomputations/Dolphin3.0-Llama3.1-8B
library_name: transformers
tags:
- mergekit
- merge
---
# Deep-Llama-3.1-KoEn-8B-SiSai
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with [allenai/Llama-3.1-Tulu-3.1-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3.1-8B) as the base model.
### Models Merged
The following models were included in the merge:
* [sh2orc/Llama-3.1-Korean-8B-Instruct](https://huggingface.co/sh2orc/Llama-3.1-Korean-8B-Instruct)
* [cognitivecomputations/Dolphin3.0-Llama3.1-8B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Llama3.1-8B)
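The exact merge configuration was not published; a minimal mergekit config for a DARE-TIES merge of these models might look like the following. The `density` and `weight` values shown are illustrative placeholders, not the values used to produce this model:

```yaml
# Hypothetical mergekit config (dare_ties); density/weight values are examples only.
models:
  - model: sh2orc/Llama-3.1-Korean-8B-Instruct
    parameters:
      density: 0.5   # fraction of delta weights kept after DARE dropping
      weight: 0.5    # contribution of this model to the merge
  - model: cognitivecomputations/Dolphin3.0-Llama3.1-8B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: allenai/Llama-3.1-Tulu-3.1-8B
dtype: bfloat16
```

With such a file saved as `config.yml`, the merge would be run via `mergekit-yaml config.yml ./output-model`.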
### Model Strengths
Deep-Llama-3.1-KoEn-8B-SiSai is a Korean-English hybrid model with strong reasoning, instruction-following, and bilingual capabilities. The Dolphin 3.0 component contributes strong general inference, making the merge well suited to complex question answering, professional translation, and deep analytical reasoning tasks. 🚀