---
library_name: peft
license: apache-2.0
tags:
- gptj-6b
- instruct
- instruct-alpaca
- alpaca
- gpt4
datasets:
- vicgalle/alpaca-gpt4
base_model: EleutherAI/gpt-j-6b
---
We finetuned GPT-J-6B on the Alpaca GPT-4 instruction dataset (vicgalle/alpaca-gpt4) for 10 epochs (~50,000 steps) using the [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
The dataset is the unfiltered vicgalle/alpaca-gpt4. The finetuning session completed in 7 hours and cost only `$25` for the entire run!
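
As a minimal usage sketch, the finetuned PEFT adapter can be loaded on top of the base model with `peft` and `transformers`. The adapter repo ID below is a placeholder (substitute this repository's ID), and the Alpaca-style prompt template is an assumption based on the dataset's format:

```python
# Minimal inference sketch, assuming this repo hosts a PEFT adapter
# for EleutherAI/gpt-j-6b. The adapter repo ID below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "EleutherAI/gpt-j-6b"
adapter_id = "your-username/gptj-6b-alpaca-gpt4"  # placeholder: use this repo's ID

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the finetuned adapter
model.eval()

# Alpaca-style instruction prompt, matching the finetuning dataset's format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what overfitting is in machine learning.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```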
#### Hyperparameters & Run details:
- Model Path: EleutherAI/gpt-j-6b
- Dataset: vicgalle/alpaca-gpt4
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
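
For reference, here is an illustrative sketch of how these hyperparameters could map onto Hugging Face `TrainingArguments`. MonsterAPI's trainer internals are not public, so treat this as an equivalent setup, not the exact configuration used:

```python
# Illustrative only: mapping the listed hyperparameters onto Hugging Face
# TrainingArguments. The output_dir below is a placeholder.
from datasets import load_dataset
from transformers import TrainingArguments

# Data split: Training: 90% / Validation: 10%
dataset = load_dataset("vicgalle/alpaca-gpt4")["train"].train_test_split(test_size=0.1)

training_args = TrainingArguments(
    output_dir="gptj-6b-alpaca-gpt4",  # placeholder output path
    learning_rate=3e-4,                # Learning rate: 0.0003
    num_train_epochs=5,                # Number of epochs: 5
    gradient_accumulation_steps=1,     # Gradient accumulation steps: 1
)
```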
| --- | |
| license: apache-2.0 | |
| --- |