dfurman committed
Commit 72af688 · 1 Parent(s): f63b23e

Update README.md

Files changed (1):
  1. README.md (+1 / -1)
README.md CHANGED
```diff
@@ -20,7 +20,7 @@ General instruction-following llm finetuned from [mistralai/Mistral-7B-v0.1](htt
 
 ### Model Description
 
-This instruction-following llm was built via parameter-efficient QLoRA finetuning of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the first 5k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin). Finetuning was executed on 1x A100 (40 GB SXM) for roughly 1 hour on Google Colab. **Only** the `peft` adapter weights are included in this model repo, alonside the tokenizer.
+This instruction-following llm was built via parameter-efficient QLoRA finetuning of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the first 5k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin). Finetuning was executed on 1x A100 (40 GB SXM) for roughly 1 hour on Google Colab.
 
 - **Developed by:** Daniel Furman
 - **Model type:** Decoder-only
```
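
For context, the model card describes a QLoRA finetune whose repo ships only the `peft` adapter weights and tokenizer, not merged full weights. The sketch below shows one common way to use such a repo: load the base [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) in 4-bit and attach the adapter with `peft`. This is a minimal illustration, not part of the commit; the adapter repo id `dfurman/mistral-7b-instruct-peft` is a placeholder, and the exact quantization settings are assumptions.

```python
# Minimal sketch, assuming a peft adapter repo on top of Mistral-7B-v0.1.
# Requires: transformers, peft, bitsandbytes, accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"            # base model named in the card
adapter_id = "dfurman/mistral-7b-instruct-peft"  # hypothetical adapter repo id

# Load the frozen base model in 4-bit, mirroring a typical QLoRA setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter weights on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain parameter-efficient finetuning in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```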