ATMa

Asymmetrically Tuned Matrix

This model is a very mid finetune of microsoft/Phi-3-medium-128k-instruct.

Layers 1 through 15 were first finetuned on one private dataset. A LoRA was then trained on a different but similar, larger dataset and applied to the entire model with a 1:4 scaling factor.
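
As a rough sketch (not the exact pipeline used here), the merge stage could look like the following with Hugging Face PEFT, assuming the 1:4 figure refers to the lora_alpha-to-rank ratio, i.e. an effective LoRA scale of 0.25. The adapter path is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model whose first 15 layers were already finetuned.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-medium-128k-instruct",
    torch_dtype=torch.float16,
)

# Attach the LoRA trained on the second dataset. "my-lora-adapter" is a
# placeholder path; with lora_alpha:r = 1:4 (e.g. r=64, alpha=16) the
# effective weight update is (alpha / r) * B @ A = 0.25 * B @ A.
model = PeftModel.from_pretrained(base, "my-lora-adapter")

# Bake the scaled LoRA delta into the base weights.
model = model.merge_and_unload()
```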

The results are mixed and it's hard to find a good use-case for this model.

All of the original scripts and code have been included in this repo.

Trained using qlora-pipe.
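
For reference, the snippet below is an illustrative PEFT-style equivalent of the adapter hyperparameters implied above; all values are assumptions, and the actual qlora-pipe config files included in this repo are authoritative.

```python
from peft import LoraConfig

# Illustrative values only; see the config files in this repo for the
# settings actually used. lora_alpha:r = 16:64 gives the 1:4 scaling.
adapter_config = LoraConfig(
    r=64,                         # adapter rank (assumed)
    lora_alpha=16,                # 16/64 = 1:4 effective scale
    lora_dropout=0.05,            # assumed
    target_modules="all-linear",  # apply LoRA to every linear layer
    task_type="CAUSAL_LM",
)
```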

Model size: 14B params (FP16, Safetensors)