Update README.md
README.md CHANGED
@@ -11,6 +11,7 @@ pipeline_tag: text-generation
 base_model: mistralai/Mistral-7B-v0.1
 ---
 
+
 # mistral-7b-instruct-peft
 
 This instruction model was built via parameter-efficient QLoRA finetuning of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the first 5k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the first 5k rows of [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Finetuning was executed on 1x A100 (40 GB SXM) for roughly 2 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
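
Since the card describes a PEFT (QLoRA) adapter trained on top of Mistral-7B-v0.1, a minimal usage sketch may help readers. The snippet below is a hypothetical example of attaching the adapter to the base model with `transformers` and `peft`; the adapter repo id and the prompt template are placeholders and assumptions, not values taken from the card.

```python
# Minimal sketch (not from the model card): load the base model, then attach
# the QLoRA-produced LoRA adapter with peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "<your-namespace>/mistral-7b-instruct-peft"  # hypothetical placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    device_map="auto",
)
# Attach the parameter-efficient adapter weights produced by the QLoRA finetune
model = PeftModel.from_pretrained(model, adapter_id)

# Prompt format is an assumption; check the card for the exact template used
prompt = "### Instruction:\nSummarize what QLoRA finetuning does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```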