
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q2_K.gguf | Q2_K | 0.54GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.IQ3_XS.gguf | IQ3_XS | 0.58GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.IQ3_S.gguf | IQ3_S | 0.6GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q3_K_S.gguf | Q3_K_S | 0.6GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.IQ3_M.gguf | IQ3_M | 0.61GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q3_K.gguf | Q3_K | 0.64GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q3_K_M.gguf | Q3_K_M | 0.64GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q3_K_L.gguf | Q3_K_L | 0.68GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.IQ4_XS.gguf | IQ4_XS | 0.7GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q4_0.gguf | Q4_0 | 0.72GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.IQ4_NL.gguf | IQ4_NL | 0.72GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q4_K_S.gguf | Q4_K_S | 0.72GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q4_K.gguf | Q4_K | 0.75GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q4_K_M.gguf | Q4_K_M | 0.75GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q4_1.gguf | Q4_1 | 0.77GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q5_0.gguf | Q5_0 | 0.83GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q5_K_S.gguf | Q5_K_S | 0.83GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q5_K.gguf | Q5_K | 0.85GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q5_K_M.gguf | Q5_K_M | 0.85GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q5_1.gguf | Q5_1 | 0.89GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q6_K.gguf | Q6_K | 0.95GB |
| Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties.Q8_0.gguf | Q8_0 | 1.23GB |
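As a back-of-envelope sanity check on the table, a GGUF file's size is roughly the parameter count times the average bits per weight, divided by 8. The sketch below assumes the sizes in the table are GiB and that Q8_0 averages about 8.5 bits per weight (8-bit values plus per-block scales); both figures are approximations, not taken from the repo.

```python
# Rough size estimate (assumption): file size ≈ params × bits-per-weight / 8,
# expressed in GiB. 8.5 bpw is an approximate llama.cpp average for Q8_0.
PARAMS = 1.24e9   # reported parameter count
BPW_Q8_0 = 8.5    # approximate bits per weight for Q8_0

size_gib = PARAMS * BPW_Q8_0 / 8 / 2**30
print(f"estimated Q8_0 size: {size_gib:.2f} GiB")  # ≈ 1.23, matching the table
```

Lower-bit quants tend to land above this naive estimate because llama.cpp keeps the embedding and output tensors at higher precision, which is a larger fraction of the total at 1B scale.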

Original model description:

base_model:

  • ank028/Llama-3.2-1B-Instruct-medmcqa
  • meta-llama/Llama-3.2-1B-Instruct
  • autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1

library_name: transformers

tags:

  • mergekit
  • merge


This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the TIES merge method, with meta-llama/Llama-3.2-1B-Instruct as the base.
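The TIES procedure can be sketched on toy parameter vectors: build task vectors (fine-tune minus base), trim each to its largest-magnitude entries (the `density` parameter), elect a sign per parameter, and average only the entries agreeing with that sign. The numbers and stand-in vectors below are illustrative, not taken from the actual checkpoints, and this is a simplified reading of the method rather than mergekit's exact implementation.

```python
# Minimal TIES-style merge sketch (trim, elect sign, disjoint merge)
# on flat lists of parameters. Illustrative only.

def ties_merge(base, models, weights, density=0.5):
    n = len(base)
    # 1. Task vectors: each fine-tuned model minus the base.
    task_vectors = [[m[i] - base[i] for i in range(n)] for m in models]
    # 2. Trim: keep only the top `density` fraction of entries by magnitude.
    k = max(1, int(n * density))
    trimmed = []
    for tv in task_vectors:
        keep = set(sorted(range(n), key=lambda i: abs(tv[i]), reverse=True)[:k])
        trimmed.append([v if i in keep else 0.0 for i, v in enumerate(tv)])
    merged = []
    for i in range(n):
        # 3. Elect a sign per parameter from the weighted total mass.
        mass = sum(w * tv[i] for w, tv in zip(weights, trimmed))
        sign = 1.0 if mass >= 0 else -1.0
        # 4. Weighted mean over entries that agree with the elected sign.
        num = den = 0.0
        for w, tv in zip(weights, trimmed):
            if tv[i] * sign > 0:
                num += w * tv[i]
                den += w
        merged.append(base[i] + (num / den if den else 0.0))
    return merged

base = [0.0, 0.0, 0.0, 0.0]
medmcqa = [1.0, 2.0, -1.0, 0.0]  # stand-in for the medmcqa fine-tune
mgsm8k = [1.0, -2.0, 3.0, 0.0]   # stand-in for the MGSM8K fine-tune
print(ties_merge(base, [medmcqa, mgsm8k], weights=[1.0, 0.5]))
# [1.0, 2.0, 3.0, 0.0]
```

Note how conflicting signs at the second coordinate are resolved toward the higher-weighted model, which is the point of the sign-election step.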

Models Merged

The following models were included in the merge:

  • ank028/Llama-3.2-1B-Instruct-medmcqa
  • autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: ank028/Llama-3.2-1B-Instruct-medmcqa
    parameters:
      density: 0.5 # density gradient
      weight: 1.0
  - model: autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1
    parameters:
      density: 0.5
      weight: 0.5 # weight gradient
merge_method: ties
base_model: meta-llama/Llama-3.2-1B-Instruct
parameters:
  normalize: true
  int8_mask: false
dtype: float16
name: Llama-3.2-1B-Instruct-medmcqa-MGSM8K-sft1-ties
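The `normalize: true` setting in this config rescales the per-model weights so they sum to 1 before the task vectors are combined (an assumption about mergekit's behavior; the exact implementation may differ). With the weights above, a quick sketch of the effective mixing proportions:

```python
# Effect of normalize: true (assumed): weights are rescaled to sum to 1,
# so weight 1.0 vs 0.5 becomes a 2/3 vs 1/3 mix of the two task vectors.
weights = {
    "ank028/Llama-3.2-1B-Instruct-medmcqa": 1.0,
    "autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1": 0.5,
}
total = sum(weights.values())
normalized = {name: w / total for name, w in weights.items()}
for name, w in normalized.items():
    print(f"{name}: {w:.3f}")
# medmcqa → 0.667, MGSM8K-sft1 → 0.333
```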
Downloads last month: 432
Format: GGUF
Model size: 1.24B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
