
Model Card for MA-RLHF

ICLR 2025 · GitHub

This repository contains the official checkpoint for Reinforcement Learning From Human Feedback with Macro Actions (MA-RLHF).

Model Description

MA-RLHF is a novel framework that integrates macro actions into conventional RLHF. Macro actions are sequences of tokens or higher-level language constructs, which can be identified through different termination conditions, such as n-gram-based, perplexity-based, or parsing-based termination. By introducing macro actions into RLHF, we reduce the number of decision points and shorten decision trajectories, alleviating the credit assignment problem caused by long temporal distances.
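
As a minimal illustration (not the training code), the fixed n-gram termination condition with n = 5, which the `Fixed5` suffix in the checkpoint names below appears to denote, simply groups every five tokens into one macro action:

```python
# Minimal sketch of fixed n-gram macro-action segmentation. This mirrors the
# "Fixed5" naming of the checkpoints below; the paper also describes
# perplexity-based and parsing-based termination conditions.

def fixed_ngram_macro_actions(token_ids, macro_size=5):
    """Split a token-level trajectory into macro actions of fixed length."""
    return [token_ids[i:i + macro_size] for i in range(0, len(token_ids), macro_size)]

tokens = list(range(12))  # stand-in for 12 generated token ids
print(fixed_ngram_macro_actions(tokens, macro_size=5))
# [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11]]
# 12 token-level decision points become 3 macro-action decision points,
# so credit assignment spans much shorter temporal distances.
```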

| Model Checkpoint | Base Model | Dataset |
|---|---|---|
| TLDR-Gemma-2B-MA-PPO-Fixed5 (🤗 HF Link) | google/gemma-2b | openai/summarize_from_feedback |
| TLDR-Gemma-7B-MA-PPO-Fixed5 (🤗 HF Link) | google/gemma-7b | openai/summarize_from_feedback |
| TLDR-Gemma-2-27B-MA-PPO-Fixed5 (🤗 HF Link) | google/gemma-2-27b | openai/summarize_from_feedback |
| HH-RLHF-Gemma-2B-MA-PPO-Fixed5 (🤗 HF Link) | google/gemma-2b | Dahoas/full-hh-rlhf |
| HH-RLHF-Gemma-7B-MA-PPO-Fixed5 (🤗 HF Link) | google/gemma-7b | Dahoas/full-hh-rlhf |
| APPS-Gemma-2B-MA-PPO-Fixed10 (🤗 HF Link) | google/codegemma-2b | codeparrot/apps |
| APPS-Gemma-7B-MA-PPO-Fixed10 (🤗 HF Link) | google/codegemma-7b-it | codeparrot/apps |

Model Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "baidu/TLDR-Gemma-7B-MA-PPO-Fixed5"

# Load the tokenizer and the fine-tuned policy model.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True
)

input_text = """
POST Subreddit: r/cats
Hello everyone! One of my cats is about 10 years old now, she is pretty much strictly
indoors save for some time she spends on our screened in porch each day. (She likes
to watch the birds in the yard while she suns herself by the pool, quite the princess).
Anyway, when she was younger she was very active and quite small, however with
age she has put on a pretty hefty amount of weight. I feed her indoor cat food
for weight control, I’ve switched brands a few times trying to find something that
works, I’ve cut back on feeding her by a lot (she gets very angry and demanding
when she wants food but I don’t give in) however, nothing really seems to work.
I’ve tried cat toys, and bought a harness thinking I could try to walk her but she just
lays down and looks at me like I’m stupid. Basically I just want to know if you all
have any suggestions for exercise or food. I care about her and don’t want this to
get any worse. I also have another cat that eats the same amount and type of food
as her and is a completely normal weight and only a year younger, however he is a
male, not sure if that makes a difference in predisposition for weight gain. They are
also both fixed. TL;DR: 
"""

# Tokenize the post and generate a short summary after the "TL;DR:" prompt.
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(response)
```
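
Note that `generate` returns the prompt followed by the continuation, so `response` above contains the full post as well. To print only the generated summary, a small follow-up sketch that slices off the prompt tokens:

```python
# Keep only the newly generated tokens (the text after "TL;DR:") by
# skipping the prompt portion of the output sequence.
prompt_len = inputs["input_ids"].shape[1]
summary = tokenizer.decode(output_ids[0][prompt_len:], skip_special_tokens=True)
print(summary)
```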

Citation

```bibtex
@inproceedings{chai2025marlhf,
  title={{MA}-{RLHF}: Reinforcement Learning from Human Feedback with Macro Actions},
  author={Yekun Chai and Haoran Sun and Huang Fang and Shuohuan Wang and Yu Sun and Hua Wu},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=WWXjMYZxfH}
}
```