Update README.md
README.md CHANGED
@@ -11,7 +11,13 @@ pipeline_tag: text-generation
 base_model: meta-llama/Llama-2-7b-hf
 ---
 
-
+<div align="center">
+
+<img src="./assets/llama.png" width="150px">
+
+</div>
+
+# Llama-2-7B-Instruct-v0.1
 
 This instruction model was built via parameter-efficient QLoRA finetuning of [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the first 5k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the first 5k rows of [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Finetuning was executed on 1x A100 (40 GB SXM) for roughly 2 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
 
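For reference, the data selection described in the paragraph above ("first 5k rows" of each dataset) corresponds to simple split slicing in the `datasets` library. The sketch below is illustrative only; the prompt formatting used for training, and any dataset configuration names, are assumptions rather than details taken from this README.

```python
# Illustrative only: reproduce the "first 5k rows" selection the model card describes.
# The exact preprocessing and prompt formatting used for training are not shown in this diff.
from datasets import load_dataset

# Slicing the train split selects the first 5,000 rows of each dataset.
# Depending on how each repo is laid out, a config name or data_files argument may be needed.
dolphin = load_dataset("ehartford/dolphin", split="train[:5000]")
platypus = load_dataset("garage-bAInd/Open-Platypus", split="train[:5000]")

print(len(dolphin), len(platypus))  # 5000 5000
```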
@@ -72,7 +78,7 @@ notebook_login()
 ```
 
 ```python
-peft_model_id = "dfurman/
+peft_model_id = "dfurman/Llama-2-7B-Instruct-v0.1"
 config = PeftConfig.from_pretrained(peft_model_id)
 
 bnb_config = BitsAndBytesConfig(
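The hunk above ends mid-snippet at `BitsAndBytesConfig(`, so the quantization arguments are not visible in this diff. For context, a typical 4-bit QLoRA loading path with `peft` and `transformers` looks roughly like the sketch below; the specific quantization settings are assumptions and may differ from the values in the full README.

```python
# Sketch of a typical 4-bit QLoRA loading path; the quantization values below are assumptions.
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

peft_model_id = "dfurman/Llama-2-7B-Instruct-v0.1"
config = PeftConfig.from_pretrained(peft_model_id)

# NF4 4-bit quantization with bfloat16 compute, a common QLoRA inference setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized base model referenced by the adapter config, then attach the LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()
```

Access to the gated `meta-llama/Llama-2-7b-hf` base weights requires an authenticated Hugging Face session, which is what the `notebook_login()` call referenced in the hunk header provides.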