I know there are a lot of Progenitor models now, but alas, the quest for perfection is endless. My goal with V4 was to retain as much as possible of the smartness of the Llama 3.3 instruct model I was using as my base, while making it a little less censored. This prompted me to create a new experimental base model, which I used for this merge.

# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Linear DELLA merge method, with TareksLab/Experimental-Base-V1-bf16 as the base.
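
For intuition, the sketch below illustrates the rough idea behind della_linear: each merged model contributes a "task vector" (its delta from the base), low-magnitude entries are stochastically dropped (with `density` controlling roughly how much survives and `epsilon` the magnitude-dependent spread of keep probabilities), survivors are rescaled to keep the estimate unbiased, and the pruned deltas are combined as a weighted linear sum scaled by `lambda`. This is a toy NumPy illustration of the concept only, not mergekit's actual implementation.

```python
import numpy as np

def della_linear_merge(base, finetuned, weights,
                       density=0.7, epsilon=0.2, lam=1.1, seed=0):
    """Toy sketch of DELLA-linear: magnitude-aware stochastic pruning
    of task vectors, then a weighted linear combination.
    Illustrative only -- not mergekit's implementation."""
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(base)
    for model, w in zip(finetuned, weights):
        delta = model - base  # task vector for this model
        # Rank entries by magnitude (0 = smallest), normalized to [0, 1].
        ranks = np.argsort(np.argsort(np.abs(delta).ravel())).reshape(delta.shape)
        frac = ranks / max(delta.size - 1, 1)
        # Keep probabilities spread across [density - eps/2, density + eps/2]:
        # higher-magnitude entries are kept more often.
        keep_p = np.clip(density - epsilon / 2 + epsilon * frac, 0.0, 1.0)
        mask = rng.random(delta.shape) < keep_p
        # Rescale survivors by 1/keep_p so the expected delta is unchanged.
        merged_delta += w * np.where(mask, delta / keep_p, 0.0)
    return base + lam * merged_delta

# Toy usage with random 1-D "parameters" standing in for model weights:
base = np.zeros(10)
models = [base + np.random.randn(10) * 0.1 for _ in range(5)]
merged = della_linear_merge(base, models, weights=[0.2] * 5)
```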

### Models Merged

The following models were included in the merge:

- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/70B-L3.3-Cirrus-x1
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Anubis-70B-v1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/L3.1-70B-Hanami-x1
    parameters:
      weight: 0.20
      density: 0.7
  - model: Sao10K/70B-L3.3-Cirrus-x1
    parameters:
      weight: 0.20
      density: 0.7
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight: 0.20
      density: 0.7
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      weight: 0.20
      density: 0.7
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      weight: 0.20
      density: 0.7
merge_method: della_linear
base_model: TareksLab/Experimental-Base-V1-bf16
parameters:
  epsilon: 0.2
  lambda: 1.1
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: SicariusSicariiStuff/Negative_LLAMA_70B
```
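
To reproduce a merge like this, the configuration above can be fed to mergekit. Below is a minimal sketch using mergekit's Python API, assuming the YAML is saved as `config.yaml` and enough disk and memory are available for five 70B models; the `mergekit-yaml` CLI offers the same functionality.

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (file path is an assumption).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Progenitor-V4-LLaMa-70B",  # output directory (assumption)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU where possible
        copy_tokenizer=True,             # honor the tokenizer source above
        lazy_unpickle=True,              # reduce memory pressure while loading
    ),
)
```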
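## Usage

Once downloaded, the merged model loads like any other Llama-style causal LM. Below is a minimal sketch with transformers, assuming hardware that can hold a 70B model in bfloat16 (multiple GPUs or offloading via `device_map="auto"`).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tarek07/Progenitor-V4-LLaMa-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the merge config
    device_map="auto",           # shard across available GPUs
)

messages = [{"role": "user", "content": "Hello! Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```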