The notebooks and scripts in these examples show how to fine-tune a model with a sentiment classifier (such as `lvwerra/distilbert-imdb`).
Here’s an overview of the notebooks and scripts in the trl repository:
| File | Description | Colab link |
|---|---|---|
| `gpt2-sentiment_peft.py` | Same as the sentiment analysis example, but learning a low-rank adapter on an 8-bit base model | |
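As a rough sketch of what a low-rank adapter does (this is the idea behind `peft`'s LoRA, not its actual API), the frozen base weight `W` is augmented with a trainable rank-`r` product `B @ A`, scaled by `alpha / r`; all names and dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 8, 8, 2   # hypothetical layer dimensions; rank r << d
alpha = 16                 # LoRA scaling hyperparameter (illustrative value)

W = rng.normal(size=(d_out, d_in))      # frozen base weight (8-bit in practice)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # base path plus low-rank update, scaled by alpha / r
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# with B initialized to zero, the adapter is a no-op before training starts
assert np.allclose(lora_forward(x), W @ x)
```

Only `A` and `B` (2 * r * d parameters) are trained, which is why the 8-bit base model can stay frozen.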
```bash
pip install trl[peft]
pip install bitsandbytes loralib
pip install git+https://github.com/huggingface/transformers.git@main

# optional: wandb
pip install wandb
```

Note: if you don’t want to log with wandb, remove `log_with="wandb"` in the scripts/notebooks. You can also replace it with your favourite experiment tracker that’s supported by accelerate.
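Concretely, the logging backend is chosen when the trainer config is built; a sketch assuming TRL's `PPOConfig` and the `lvwerra/gpt2-imdb` model name used elsewhere in these examples (check the signature of your installed TRL version):

```python
from trl import PPOConfig

# log to wandb (requires `pip install wandb`)
config = PPOConfig(model_name="lvwerra/gpt2-imdb", log_with="wandb")

# ...or omit log_with to disable experiment tracking entirely
config = PPOConfig(model_name="lvwerra/gpt2-imdb")
```

Any tracker supported by accelerate (e.g. `"tensorboard"`) can be passed in place of `"wandb"`.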
The trl library is powered by accelerate. As such, it is best to configure and launch training runs with the following commands:

```bash
accelerate config # will prompt you to define the training configuration
accelerate launch scripts/gpt2-sentiment_peft.py # launches training
```