---
library_name: transformers
license: mit
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
base_model:
- microsoft/Phi-3-mini-4k-instruct
---

This is the bandit-reward-based PPO model introduced in the preprint **Segmenting Text and Learning Their Rewards for Improved RLHF in Language Models** (https://arxiv.org/abs/2501.02790). For more details, please visit our repository at https://github.com/yinyueqin/DenseRewardRLHF-PPO.
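
The checkpoint can be loaded with the standard `transformers` causal-LM API. The snippet below is a minimal sketch: the repository ID is a placeholder (substitute this model's Hub ID), and the generation settings are illustrative rather than the ones used in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-model"  # placeholder: replace with this model's Hub repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The Phi-3-mini-4k-instruct base model uses a chat template, so format the prompt through it.
messages = [{"role": "user", "content": "Explain RLHF in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding for a deterministic, easy-to-inspect reply.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```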