---
license: apache-2.0
---

# LimaRP-Llama2-7B-v3 (Alpaca, experimental, 4-bit LoRA adapter)

This is an experimental version of LimaRP for Llama2 with an updated dataset (1800 training samples) and a 2-pass training procedure. The first pass consists of unsupervised finetuning on 2800 stories up to 4k tokens in length, and the second pass is LimaRP with changes introducing more effective control over response length.

For more details about LimaRP, see the model page for the [previously released version](https://huggingface.co/lemonilia/limarp-llama2-v2).