I know, long name. This model was created as an experiment in using LoRA extraction to replicate OpenChat-3.5-0106 with Mistral-7B-v0.2 as the base model instead of the original Mistral-7B-v0.1.
OpenChat-3.5-0106 is an excellent model, but it was based on Mistral-7B-v0.1, which has a context window of 8192 tokens; Mistral-7B-v0.2 has a context window of 32768 tokens. I could have extended OpenChat-3.5's context myself with RoPE and/or YaRN, but that has already been done: there are many models on HF that do exactly that. Instead, I decided to try to replicate OpenChat-3.5-0106 using the LoRA extraction method available in mergekit. These are the steps I followed (rough sketches of each step follow the list):
- Extract a LoRA with rank 512 from OpenChat-3.5-0106 using imone's Mistral_7B_with_EOT_token as the base model.
- Replicate imone's work by adding the EOT token to Mistral-7B-v0.2, creating Mistral-7B-v0.2_EOT.
- Merge the LoRA weights into the Mistral-7B-v0.2_EOT model.
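
For the extraction step, mergekit provides a `mergekit-extract-lora` command. Below is a rough sketch of how it can be invoked; the flag names and argument order have changed across mergekit versions, and the output path is made up for illustration, so don't take this as the exact command I ran.

```python
# Illustrative only: mergekit-extract-lora's exact flags vary by version.
import subprocess

subprocess.run(
    [
        "mergekit-extract-lora",
        "openchat/openchat-3.5-0106",       # fine-tuned model
        "imone/Mistral_7B_with_EOT_token",  # base it was fine-tuned from
        "./openchat-0106-lora-r512",        # hypothetical output directory
        "--rank=512",
    ],
    check=True,
)
```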
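
For the EOT step, plain transformers calls suffice. A minimal sketch, assuming the v0.2 base weights live at `mistral-community/Mistral-7B-v0.2` (an assumption; adjust to wherever you have them) and that OpenChat's `<|end_of_turn|>` is the EOT token being added:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for the Mistral-7B-v0.2 base weights.
base_id = "mistral-community/Mistral-7B-v0.2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Register the EOT token as an additional special token, then grow the
# embedding matrix so the new token id has a corresponding row.
tokenizer.add_special_tokens({"additional_special_tokens": ["<|end_of_turn|>"]})
model.resize_token_embeddings(len(tokenizer))

tokenizer.save_pretrained("./Mistral-7B-v0.2_EOT")
model.save_pretrained("./Mistral-7B-v0.2_EOT")
```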
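
For the final merge, PEFT's `merge_and_unload()` does the folding. A minimal sketch, reusing the hypothetical paths from the sketches above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the EOT-patched v0.2 base, apply the extracted LoRA, then fold the
# low-rank delta into the dense weights.
base = AutoModelForCausalLM.from_pretrained("./Mistral-7B-v0.2_EOT")
lora = PeftModel.from_pretrained(base, "./openchat-0106-lora-r512")
merged = lora.merge_and_unload()

merged.save_pretrained("./openchat-3.5-0106-mistral-v0.2")
```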
This is the result. This model is not meant for general use; it was created to test whether this method is viable for replacing the base model of fine-tuned models, and I am uploading it here for evaluation. I don't expect this model to match the original OpenChat-3.5-0106, since I used a LoRA with rank 512, which won't be equivalent to a full fine-tune.