---
library_name: peft
tags:
- gpt2
- code
- instruct
- alpaca-instruct
- alpaca
datasets:
- tatsu-lab/alpaca
base_model: gpt2
license: apache-2.0
---
We finetuned gpt2 on the tatsu-lab/alpaca dataset for 5 epochs using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
The dataset is an unfiltered version of tatsu-lab/alpaca, with 36 instances of blatant alignment removed.
The finetuning session completed in 20 minutes and cost only `$3` for the entire run!
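
Since this repository hosts a PEFT adapter on top of gpt2, it can be loaded roughly as sketched below. This is a minimal usage sketch, not an official snippet: the repo id `monsterapi/gpt2-alpaca` is a placeholder for this model's actual Hub path, and the prompt simply follows the standard Alpaca instruction format.

```python
# Minimal inference sketch for a PEFT adapter finetuned from gpt2.
# "monsterapi/gpt2-alpaca" is a placeholder; substitute this repo's actual id.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base_model, "monsterapi/gpt2-alpaca")  # placeholder repo id
model.eval()

# Standard Alpaca instruction prompt format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about finetuning.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,  # gpt2 has no dedicated pad token
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```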
#### Hyperparameters & Run details:
- Model: gpt2
- Dataset: tatsu-lab/alpaca
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
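
For readers who want to reproduce a comparable run locally, the sketch below maps the settings above onto the Hugging Face `Trainer` with a LoRA adapter. It is an approximation under stated assumptions: MonsterAPI's internal pipeline, the batch size, and the LoRA configuration (`r`, `lora_alpha`, `target_modules`) are not published in this card, so those values are illustrative.

```python
# Hypothetical local reproduction of the run details above; MonsterAPI's
# no-code pipeline may differ (batch size and LoRA settings are assumed).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Assumed LoRA config; "c_attn" is the usual attention projection in gpt2.
lora = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, target_modules=["c_attn"])
model = get_peft_model(model, lora)

# 90% training / 10% validation, as in the run details.
dataset = load_dataset("tatsu-lab/alpaca", split="train").train_test_split(test_size=0.1)

def tokenize(batch):
    # The dataset provides a ready-made Alpaca-formatted "text" column.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-alpaca-lora",
        learning_rate=3e-4,             # 0.0003, as listed above
        num_train_epochs=5,
        gradient_accumulation_steps=1,
        per_device_train_batch_size=8,  # assumed; not stated in the card
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```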
| --- | |
| license: apache-2.0 | |
| --- | |