---
license: mit
language:
- en
---

## Instruction-tuned LLaMA (Alpaca-GPT4)

Fine-tune [LLaMA-7B](https://huggingface.co/decapoda-research/llama-7b-hf) on the Alpaca-GPT4 dataset.

The training scripts come from the [stanford-alpaca repo](https://github.com/tatsu-lab/stanford_alpaca), and the instruction data comes from the [GPT-4-LLM repo](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM#data-release); training uses the default hyper-parameters.
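The stanford-alpaca training scripts wrap each example in a fixed instruction template before fine-tuning. A minimal sketch of that prompt formatting (the template text below follows the one used in the stanford_alpaca repo; the function name is illustrative):

```python
def format_prompt(instruction: str, input_text: str = "") -> str:
    """Build an Alpaca-style prompt, with or without an optional input field."""
    if input_text:
        # Template for examples that include additional context ("input").
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Template for instruction-only examples.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = format_prompt("Give three tips for staying healthy.")
```

At inference time the same template should be applied to user queries, with the model generating the text after `### Response:`.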

Please refer to [this page](https://instruction-tuning-with-gpt-4.github.io/) for more details.