---
license: mit
---
This LoRA model was trained on a combination of datasets totaling about 50 MB, containing various conversations mostly generated by GPT-4.
The model shows clear overfitting after 6 epochs.
The base model is decapoda-research/llama-7b-hf.
You can use https://github.com/tloen/alpaca-lora to run it.
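
For reference, here is a minimal inference sketch using the Hugging Face `transformers` and `peft` libraries instead of the alpaca-lora scripts. The adapter id `<this-adapter-repo>` is a placeholder for this repository's Hub path, and the Alpaca-style prompt template is an assumption based on the alpaca-lora training format:

```python
# Minimal sketch: load the base model, attach this LoRA adapter, and generate.
# Assumes transformers, peft, and accelerate are installed (accelerate is
# needed for device_map="auto"). The adapter id below is a placeholder.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = "decapoda-research/llama-7b-hf"
adapter = "<this-adapter-repo>"  # placeholder: substitute this repo's Hub id

tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter)  # apply the LoRA weights
model.eval()

# Alpaca-style prompt, as used by the alpaca-lora project (assumed format).
prompt = "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```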