Yugo-GPT Collection
Yugo-GPT class of LLM (45, 55, 60)
This Yugo45-GPT (7B) model was fine-tuned on the Alpaca dataset, using gordicaleksa/YugoGPT as the starting base model.
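For reference, Alpaca-style fine-tunes generally expect the standard Alpaca prompt template. The sketch below shows that template; the exact format used for this fine-tune is not documented here, so treat it as an assumption and verify against the model's tokenizer config.

```python
# Minimal sketch of the standard Alpaca prompt template.
# Assumption: Yugo45-GPT follows the usual Alpaca format; this is not
# confirmed by the model card, so verify before relying on it.
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```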
Yugo45-GPT is a merge of the following models using LazyMergekit:
Special thanks to Stopwolf for the idea, and to this X post by @TheStopwolf.
```yaml
slices:
  - sources:
      - model: datatab/YugoGPT-Alpaca-v1
        layer_range: [0, 32]
      - model: FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin
        layer_range: [0, 32]
merge_method: slerp
base_model: datatab/YugoGPT-Alpaca-v1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
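In the config above, `t` is the interpolation weight toward the second model: the anchor lists `[0, 0.5, 0.3, 0.7, 1]` and `[1, 0.5, 0.7, 0.3, 0]` are spread across the 32 layers for self-attention and MLP tensors respectively, and all remaining tensors use a constant `t` of 0.5. Below is a minimal sketch of the spherical linear interpolation (slerp) applied to a pair of weight tensors; this is the textbook formula, not mergekit's exact implementation.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    # Angle between the two weight directions
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.sum(v0n * v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    # Nearly colinear tensors: fall back to plain linear interpolation
    if theta < eps:
        return (1 - t) * v0 + t * v1
    # Standard slerp weights; t=0 returns v0 (base model), t=1 returns v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1
```

With mergekit installed, a config like this is typically executed via `mergekit-yaml config.yaml ./output-dir` (LazyMergekit generates an equivalent runner notebook).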
# TBD
Base model: NousResearch/Yarn-Mistral-7b-64k
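A minimal usage sketch with Hugging Face transformers, assuming the merged model is published under the repo id `datatab/Yugo45-GPT` (substitute the actual id if it differs) and follows the Alpaca prompt format discussed above:

```python
# Minimal usage sketch; the repo id and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "datatab/Yugo45-GPT"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

# Alpaca-style prompt (assumed template, see earlier note)
prompt = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\nNapiši kratku pesmu o Beogradu.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```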