# Llama3-8B-Fusion-ChatQA-Chinese

Llama3-8B-Fusion-ChatQA-Chinese is a merge of the following models using [mergekit](https://github.com/arcee-ai/mergekit), with [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as the base model:

* [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat)
* [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)

## Configuration

```yaml
models:
  - model: meta-llama/Meta-Llama-3-8B
    # base model: no parameters necessary
  - model: shenzhi-wang/Llama3-8B-Chinese-Chat
    parameters:
      density: 0.5
      weight: 0.6
  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      density: 0.5
      weight: 0.4
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
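
The `dare_ties` merge method randomly drops a fraction of each fine-tuned model's weight deltas (keeping roughly the configured `density`), rescales the survivors, resolves sign conflicts TIES-style, and combines the results according to each model's `weight` before adding them back onto the base model. A config like the one above is typically run with mergekit's `mergekit-yaml` command.

## Usage

Below is a minimal usage sketch with 🤗 Transformers. The repository id is a placeholder (replace it with the actual repo name), and the example assumes the merged tokenizer ships a Llama-3-style chat template; if it does not, pass the prompt as plain text instead.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: substitute the actual Hugging Face repository name.
model_id = "your-username/Llama3-8B-Fusion-ChatQA-Chinese"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 dtype
    device_map="auto",
)

# Chat-style prompt; assumes a Llama-3 chat template is defined in the tokenizer.
messages = [
    {"role": "user", "content": "用一句话介绍一下大熊猫。"}  # "Introduce the giant panda in one sentence."
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```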