Update README.md
README.md CHANGED
@@ -10,7 +10,7 @@ pipeline_tag: text-generation
base_model: meta-llama/Llama-2-13b-hf
---

-# llama-2-13b-
+# llama-2-13b-instruct-v0.1 🦙🐬

This instruction model was built via parameter-efficient QLoRA finetuning of [llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the first 100k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) (an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)). Finetuning was executed on a single A6000 (48 GB) for roughly 18 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.

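For readers who want to see what the setup described above looks like in code, below is a minimal QLoRA finetuning sketch using the Hugging Face transformers, peft, bitsandbytes, and trl stack (2023-era trl SFTTrainer API). It is illustrative only: the dataset slice follows the description (first 100k rows of ehartford/dolphin), but the prompt template, column handling, LoRA rank, and training hyperparameters are assumptions rather than the settings in the linked finetuning script.

```python
# Illustrative QLoRA sketch -- NOT the linked finetuning script; hyperparameters,
# prompt template, and column handling are assumptions.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

base_model_id = "meta-llama/Llama-2-13b-hf"

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# First 100k rows of the dolphin data, per the description above.
dataset = load_dataset("ehartford/dolphin", split="train[:100000]")

def to_text(example):
    # Assumed dolphin columns and prompt template; the real script may differ.
    return {"text": f"{example['instruction']}\n\n{example['input']}\n\n{example['output']}"}

dataset = dataset.map(to_text)

# Low-rank adapters trained on top of the quantized base (the "LoRA" in QLoRA).
peft_config = LoraConfig(
    r=16,              # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=1024,
    args=TrainingArguments(
        output_dir="./llama-2-13b-qlora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
        logging_steps=25,
    ),
)
trainer.train()
```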
@@ -30,9 +30,9 @@ We use state-of-the-art [Language Model Evaluation Harness](https://github.com/E

* Model license: Llama 2 Community License Agreement
* Basic usage: [notebook](assets/basic_inference_llama_2_13b_dolphin.ipynb)
-* Finetuning script: [script](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/sft-llama-2-13b-
-* Loss curves: [plot](https://huggingface.co/dfurman/llama-2-13b-
-* Runtime stats: [table](https://huggingface.co/dfurman/llama-2-13b-
+* Finetuning script: [script](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/sft-llama-2-13b-instruct-v0.1-peft.py)
+* Loss curves: [plot](https://huggingface.co/dfurman/llama-2-13b-instruct-v0.1-peft#finetuning-description)
+* Runtime stats: [table](https://huggingface.co/dfurman/llama-2-13b-instruct-v0.1-peft#runtime-tests)

## Example prompts and responses

@@ -41,7 +41,7 @@ Example 1:
**User**:
>You are a helpful assistant. Write me a numbered list of things to do in New York City.\n

-**llama-2-13b-
+**llama-2-13b-instruct-v0.1-peft**:

> 1. Visit the iconic Statue of Liberty and Ellis Island.
> 2. Take a stroll through Central Park and enjoy its many attractions.
@@ -61,7 +61,7 @@ Example 2:
**User**:
>You are a helpful assistant. Write a short email inviting my friends to a dinner party on Friday. Respond succinctly.\n"

-**llama-2-13b-
+**llama-2-13b-instruct-v0.1-peft**:

> Subject: Friday Dinner Party Invitation
>
@@ -97,7 +97,7 @@ The llama-2-13b models have been modified from a standard transformer in the fol

This model was trained on a single A6000 (48 GB) for about 18 hours using the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.

-
+

The above loss curve was generated from the run's private wandb.ai log.

@@ -138,7 +138,7 @@
```

```python
-peft_model_id = "dfurman/llama-2-13b-
+peft_model_id = "dfurman/llama-2-13b-instruct-v0.1-peft"
config = PeftConfig.from_pretrained(peft_model_id)

bnb_config = BitsAndBytesConfig(
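The Python snippet in this hunk is cut off at the hunk boundary, so for reference here is a minimal, self-contained sketch of loading the adapter in 4-bit and generating with it. The BitsAndBytesConfig values, prompt, and generation settings are assumptions for illustration, not necessarily those in the model card's full example.

```python
# Illustrative completion of the loading snippet above -- quantization and
# generation settings are assumptions, not the model card's exact values.
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

peft_model_id = "dfurman/llama-2-13b-instruct-v0.1-peft"
config = PeftConfig.from_pretrained(peft_model_id)

# 4-bit NF4 quantization for the frozen base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model referenced by the adapter config, then attach the adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, peft_model_id)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

prompt = "You are a helpful assistant. Write me a numbered list of things to do in New York City.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=25,            # illustrative generation length
        return_dict_in_generate=True, # so output["sequences"] indexing works
    )

print(tokenizer.decode(output["sequences"][0], skip_special_tokens=True))
```

The closing decode call mirrors the one referenced in the next hunk's header.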
@@ -194,7 +194,7 @@ print(tokenizer.decode(output["sequences"][0], skip_special_tokens=True))
| 2.93 | 1x A100 (40 GB SXM) | torch | bfloat16 | 25 |
| 3.24 | 1x A6000 (48 GB) | torch | bfloat16 | 25 |

-The above runtime stats were generated from this [notebook](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/postprocessing-llama-2-13b-
+The above runtime stats were generated from this [notebook](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/postprocessing-llama-2-13b-instruct-v0.1-peft.ipynb).

## Acknowledgements
