Llama-3.2-Instruct-3B-TIES

Overview

The Llama-3.2-Instruct-3B-TIES model is the result of merging three Llama-3.2-3B variants with the TIES merging method, using mergekit. The merge combines the general-purpose base model with two instruction-tuned variants, aiming to produce a single, more capable model that retains instruction-following behaviour while handling diverse tasks.
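
As a rough usage sketch, the merged checkpoint can be loaded with the Hugging Face transformers library like any other Llama-3.2 model. The repository id below is taken from this card; the prompt and generation settings are illustrative and may need adjusting for your setup.

# Minimal usage sketch (assumes transformers and torch are installed;
# the prompt and generation settings are illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vhab10/Llama-3.2-Instruct-3B-TIES"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge dtype declared in the config below
    device_map="auto",
)

prompt = "Explain the TIES merging method in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# If the packaged tokenizer ships a chat template, tokenizer.apply_chat_template
# could be used instead of a plain prompt.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))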

Model Details

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: meta-llama/Llama-3.2-3B
    # Base model
  - model: meta-llama/Llama-3.2-3B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
  - model: unsloth/Llama-3.2-3B-Instruct
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: meta-llama/Llama-3.2-3B
parameters:
  normalize: true
dtype: float16
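
To reproduce the merge, the configuration above can be saved to a file and passed to mergekit. The following is a minimal sketch using mergekit's Python API; the exact options available depend on the installed mergekit version, and the file paths are placeholders. Equivalently, the mergekit-yaml command-line entry point can be pointed at the same configuration file.

# Reproduction sketch (assumes mergekit is installed; API details may differ
# between mergekit versions, and the file paths are placeholders).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the TIES configuration shown above from a local YAML file.
with open("ties_config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Llama-3.2-Instruct-3B-TIES",
    options=MergeOptions(
        cuda=False,           # set to True to run the merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)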