---
library_name: peft
tags:
- meta-llama
- code
- instruct
- WizardLM
datasets:
- WizardLM/WizardLM_evol_instruct_70k
base_model: meta-llama/Llama-2-7b-hf
license: apache-2.0
---
### Finetuning Overview:
- **Model Used:** meta-llama/Llama-2-7b-hf
- **Dataset:** WizardLM/WizardLM_evol_instruct_70k
#### Dataset Insights:
The WizardLM/WizardLM_evol_instruct_70k dataset, tailored for enhancing instruction-following capabilities, was built with the Evol-Instruct method, which iteratively rewrites a smaller seed set of instructions into progressively more complex and challenging ones for the LLM to learn from.
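As a rough illustration, the dataset can be inspected with the `datasets` library. The `instruction` and `output` field names below reflect the dataset's published schema; verify them against the dataset card:

```python
from datasets import load_dataset

# Load the Evol-Instruct training split from the Hugging Face Hub.
dataset = load_dataset("WizardLM/WizardLM_evol_instruct_70k", split="train")

# Each record pairs an evolved instruction with its reference answer.
print(dataset[0]["instruction"])
print(dataset[0]["output"])
```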
#### Finetuning Details:
Using [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), this finetuning:
- Was completed cost-effectively.
- Took a total of 6 hrs 48 mins for 1 epoch.
#### Hyperparameters & Additional Details:
- **Epochs:** 1
- **Model Path:** meta-llama/Llama-2-7b-hf
- **Learning Rate:** 0.0002
- **Data Split:** 90% train 10% validation
- **Gradient Accumulation Steps:** 4
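
The exact MonsterAPI job configuration is not published here; the sketch below only shows how the listed hyperparameters could map onto a standard PEFT/LoRA setup. The LoRA rank, alpha, dropout, and target modules are assumptions, not values from this card:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# Assumed LoRA settings; the card does not specify the adapter config.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)

# 90% train / 10% validation split, as listed above.
data = load_dataset("WizardLM/WizardLM_evol_instruct_70k", split="train")
splits = data.train_test_split(test_size=0.1)

# Hyperparameters from the card: 1 epoch, lr 2e-4, grad accumulation 4.
args = TrainingArguments(
    output_dir="llama2-7b-evol-instruct",
    num_train_epochs=1,
    learning_rate=2e-4,
    gradient_accumulation_steps=4,
)
```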
#### Prompt Structure:
```
### INSTRUCTION:
[instruction]
### RESPONSE:
[output]
```
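
For inference, the adapter can be loaded on top of the base model and prompted with the template above. The adapter repo id below is a placeholder for this model's Hub id, and the generation settings are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"
adapter = "<this-adapter-repo-id>"  # placeholder: replace with this model's Hub id

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter)

# Format the request using the finetuning prompt template.
prompt = "### INSTRUCTION:\nWrite a haiku about debugging.\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```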
Training loss: (plot omitted)