doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.06825 | 28 | [26] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[27] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[28] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
[29] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364, 2023.
9 | 2310.06825#28 | Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B
across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and
code generation. Our model leverages grouped-query attention (GQA) for faster
inference, coupled with sliding window attention (SWA) to effectively handle
sequences of arbitrary length with a reduced inference cost. We also provide a
model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses
the Llama 2 13B -- Chat model both on human and automated benchmarks. Our
models are released under the Apache 2.0 license. | http://arxiv.org/pdf/2310.06825 | Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed | cs.CL, cs.AI, cs.LG | Models and code are available at
https://mistral.ai/news/announcing-mistral-7b/ | null | cs.CL | 20231010 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "2305.13245"
},
{
"id": "1803.05457"
},
{
"id": "2103.03874"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "1808.07036"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2307.09288"
},
{
"id": "1811.00937"
},
{
"id": "1904.09728"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1904.10509"
},
{
"id": "2004.05150"
},
{
"id": "2308.12950"
},
{
"id": "1705.03551"
}
] |
2310.05910 | 1 | Supervised Fine-Tuning (SFT) on response demonstrations combined with Re- inforcement Learning from Human Feedback (RLHF) constitutes a powerful paradigm for aligning LLM-based AI agents. However, a significant limitation of such an approach is its dependency on high-quality human annotations, making its application to intricate tasks challenging due to difficulties in obtaining consis- tent response demonstrations and in-distribution response preferences. This paper presents a novel approach, namely SALMON (Self-ALignMent with principle- fOllowiNg reward models), to align base language models with minimal human supervision, using only a small set of human-defined principles, yet achieving superior performance. Central to our approach is a principle-following reward model. Trained on synthetic preference data, this model can generate reward scores based on arbitrary human-defined principles. By merely adjusting these principles during the RL training phase, we gain full control over the prefer- ences with the reward model, subsequently influencing the behavior of the RL- trained policies, and eliminating the reliance on the collection of online human preferences. Applying our method to the | 2310.05910#1 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 2 | subsequently influencing the behavior of the RL- trained policies, and eliminating the reliance on the collection of online human preferences. Applying our method to the LLaMA-2-70b base language model, we developed an AI assistant named Dromedary-2. With only 6 exemplars for in-context learning and 31 human-defined principles, Dromedary-2 signifi- cantly surpasses the performance of several state-of-the-art AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have open-sourced the code and model weights to encourage further research into aligning LLM- based AI agents with enhanced supervision efficiency, improved controllability, and scalable oversight. | 2310.05910#2 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 3 | 1
# INTRODUCTION
The prevailing AI alignment paradigm, exemplified in models like ChatGPT (OpenAI, 2022) and LLaMA-2-Chat (Touvron et al., 2023b), employs supervised fine-tuning (SFT) with prompted demonstrations (Sanh et al., 2021; Chung et al., 2022a; Zhou et al., 2023) and reinforcement learn- ing from human feedback (RLHF) to align the outputs of large language models (LLMs) with human intentions (Ziegler et al., 2019; Ouyang et al., 2022). However, acquiring high-quality human anno- tations, including consistent response demonstrations and in-distribution preferences, is costly and not scalable (Touvron et al., 2023b). Furthermore, the existing paradigm of SFT + RLHF is inher- ently limited in assuming that humans can always demonstrate or evaluate the tasks undertaken by advanced AI systems. Although todayâs models fall within human evaluative boundaries, future, more advanced models could embark on tasks that challenge human evaluation. Consequently, there
âCorrespondence: [email protected]. Work done during internship at MIT-IBM Watson AI Lab.
1
# Preprint | 2310.05910#3 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 4 | âCorrespondence: [email protected]. Work done during internship at MIT-IBM Watson AI Lab.
1
# Preprint
Table 1: Comparison of human supervisions used in recent AI systems and their MT-Bench scores (Zheng et al., 2023). We exclude models that used any Knowledge Distillation (KD) data. The alignment techniques used in previous work include SFT (Supervised Fine-tuning), RLHF (Rein- forcement Learning from Human Feedback), and CAI (Constitutional AI). Information is from: a OpenAI (2023b), b Bai et al. (2022b); Anthropic (2023), c OpenAI (2022), d OpenAI (2023a). | 2310.05910#4 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 5 | # Demonstration Annotations # Preference Annotations MT-Bench Score Alignment Techniques (closed-source models) InstructGPT-SFT (175b) InstructGPT (175b) Text-Davinci-003 (175b) Claude-V1 (?) ChatGPT (?) GPT-4 (?) 12,725 12,725 ? ? ? ? 0 33,207 ? ? ? ? 2.7 ? 6.4 7.9 7.9 9.0 SFT a SFT & RLHF a SFT & RLHF a RLHF & CAI b SFT & RLHF c SFT & RLHF & CAI d (non-distilled open-source models) Dolly-V2 (12b) Guanaco (65b) OpenAssistant-SFT (30b) OpenAssistant (30b) LLaMA-2-Chat (70b) Dromedary-2 (70b) 15,000 9,846 69,614 69,614 27,540 6 0 0 0 39,670 1,418,091 0 2.0 6.4 6.4 6.6 6.9 7.4
is a looming danger, i.e., such models may value appeasing human evaluators over ensuring accuracy (Andreas, 2022; Perez et al., 2022). | 2310.05910#5 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 6 | To address the current challenges in AI alignment, we aim to develop a new methodology that fa- cilitates scalable oversight (Amodei et al., 2016; Bowman et al., 2022). Our vision is to define a few general principles, akin to Issac Asimovâs three laws in robotics (Asimov, 1941), which are comprehensively interalizable for AI systems to follow (Gilardi et al., 2023; Ganguli et al., 2023). This goal is in line with the recent research on self-alignment (Bai et al., 2022b; Sun et al., 2023b), where the primary focus is to use AI models to improve themselves, e.g., with bootstrapping over the model-generated critiques (Madaan et al., 2023; Fu et al., 2023) or self-refined outputs (Wang et al., 2022a; Li et al., 2023a). However, it is worth noting that these bootstrapping methods still lag behind the RLHF method in performance (Bai et al., 2022b; Touvron et al., 2023b). Meanwhile, methods like Reinforcement Learning from AI Feedback (RLAIF) or Constitutional AI (CAI) (Bai et al., 2022b; OpenAI, | 2310.05910#6 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 7 | Meanwhile, methods like Reinforcement Learning from AI Feedback (RLAIF) or Constitutional AI (CAI) (Bai et al., 2022b; OpenAI, 2023a) has emerged as an alternative potential. These techniques leverage feedback from automated AI systems, reducing the reliance on exhaustive human-annotated prefer- ences. So far, the primary focus of the previous RLAIF work remains on enhancing the safety of the models that have already undergone RLHF training. That is, these RLAIF methods inherit the heavy dependency on the human-annotated preferences in the RLHF warm-up stage. This leads to a pivotal research question: | 2310.05910#7 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 8 | ⢠Can RLAIF fully replace RLHF to align language models from scratch in enhancing their general alignment and capabilities?
This paper provides a definitive confirmation for the above question by introducing a novel approach namely SALMON. At the heart of our approach lies the introduction of the principle-following (also termed instruction-following) reward model. Pioneering in its nature, this reward model is adept at interpreting and adhering to arbitrary human-written preference guidelines, subsequently generating human-guided reward scores. This is different from previous RLAIF methods (Bai et al., 2022b; OpenAI, 2023a) where the principles are only used to produce synthetic preferences, and the resulting reward models generate scores without any specific principles, as illustrated in Figure 1.
The design of our principle-following reward model enables better control over the behavior of the final RL-trained policy model. Within conventional RLHF paradigms, the iterative collection of online (in-distribution) preference data (Bai et al., 2022a; Touvron et al., 2023b) is essential to counteract reward hacking (Pan et al., 2022). This complication emerges when the policy model exploits weaknesses in the reward model, producing inflated scores that do not accurately reflect model performance. In SALMON, we can address this issue by simply crafting principles explicitly
2
Preprint | 2310.05910#8 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 9 | 2
Preprint
RLHF (Ouyang et al., 2022) human-Labeled preferences Stand-alone reward model f~. RN-RLME Pronpt + Response General =| ao & â_e Mii, â Pome Hunan Annotator Sampled prompts at RLAIF (Bai et al., 2022) weite a story about dromedazies Al-labeled preferences Stand-alone reward model SFT RM-RLAIF Safety Prompt + Response m= pe Alignment SFT-generated responses Reward Score ser Principles SALMON (Ours) Al-labeled preferences Principle-following reward model I 1, SFT denotes th on re SON Trorot ¢ Response General n general, jenotes the = Supervised Fine-Tuned model, but it Principles Alignment can also be RLHF-trained in RLAIF. Principles Renard Score Principle Aggregating
Figure 1: Comparison among RLHF (Ouyang et al., 2022), RLAIF (Bai et al., 2022b), and SALMON (Ours). The vanilla (stand-alone) reward models in RLHF & RLAIF are trained to give high scores to generally good responses, while the principle-following reward model in SALMON is trained to generate reward scores based on customized principles as the preference guideline. | 2310.05910#9 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 10 | designed to combat observed1 reward hacking patterns in model outputs, such as self-praising at the end of the response. Additionally, we found that we are able to emphasize distinct aspects of the alignment in the HHH (helpful, honest, and harmless) alignment framework (Askell et al., 2021) by customizing the preference principles. Our methodology also proved effective in reducing the occurrence of false refusals seen in certain over-aligned language models (Touvron et al., 2023b) by crafting special principles. | 2310.05910#10 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 11 | Our principle-following reward model can be trained with synthetic data and seamlessly applied to a diverse range of language models without collecting any model-specific human preference data (Bai et al., 2022a; Touvron et al., 2023b). Possible policy model initialization strategies include principle-driven self-alignment (Sun et al., 2023b), supervised fine-tuning on human demonstrations (Chung et al., 2022a; Zhou et al., 2023), or even those unaligned base language models (Touvron et al., 2023a). Remarkably, when integrated with the SELF-ALIGN technique (Sun et al., 2023b), our method enabled the training of a self-aligned AI-assistant agent, namely Dromedary-2, from scratch by only manually crafting 6 exemplars for In-Context Learning (Brown et al., 2020) and a combined total of 31 principles (17 from SELF-ALIGN and 14 for SALMON). Despite its mini- mal human supervision design, our model outperformed the extensively RLHF-trained LLaMA-2- Chat model (Touvron et al., 2023b), which was trained with over 20,000+ human-curated response | 2310.05910#11 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 13 | # 2 RELATED WORK
AI Alignment from Scratch The problem of aligning AIs (Gabriel, 2020), especially large lan- guage models (LLMs), to human values and intentions in terms of being helpful, honest, and harm- less (Christiano et al., 2017; Patil et al., 2020; Askell et al., 2021; Ouyang et al., 2022; Bai et al., 2022a;b; OpenAI, 2023a) has gained significant attention as recent AI systems have rapidly ad1In this paper, we write language descriptions of the reward-hacking patterns observed through humanâs manual inspection. Future work may consider a more systematic and automated approach (Bills et al., 2023; Zhong et al., 2023) for summarizing the language descriptions of the reward hacking patterns.
3
# Preprint | 2310.05910#13 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 14 | 3
# Preprint
vanced in their capabilities (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020; Chowdhery et al., 2022). This work focuses on the problem of aligning LLMs from scratch, that is, we aim to develop a new methodology capable of aligning a pre-trained base language model without relying on pre-existing, well-aligned models like ChatGPT (OpenAI, 2022) or GPT-4 (OpenAI, 2023a). This direction markedly differentiates our work from contemporary research primarily focused on distilling capabilities or aligned behaviors from proprietary models into smaller open-source models (Taori et al., 2023; Chiang et al., 2023), which has notable drawbacks (Gudibande et al., 2023). | 2310.05910#14 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 15 | Scalable Oversight & Self-Alignment AI alignment traditionally relies heavily on extensive human annotations. Primary Supervised Fine-Tuning (SFT) sources for response demonstrations include those curated from existing NLP datasets (Sanh et al., 2021; Wei et al., 2021; Chung et al., 2022b; Wang et al., 2022b) and those specifically crafted by humans for instruction tuning (Databricks, 2023; K¨opf et al., 2023; Zhou et al., 2023; Ouyang et al., 2022). In the recent trend of aligning language models with Reinforcement Learning from Human Feedback (RLHF; Christiano et al. (2017); Stiennon et al. (2020); Ouyang et al. (2022); Bai et al. (2022a); Touvron et al. (2023b)), online human preferences are collected to train a reward model to further fine-tune the SFT-trained model (Leike et al., 2018). However, acquiring high-quality human annotations, including consis- tent response demonstrations and in-distribution preferences, has emerged as a significant bottle- neck. This limitation hampers the full potential | 2310.05910#15 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 16 | including consis- tent response demonstrations and in-distribution preferences, has emerged as a significant bottle- neck. This limitation hampers the full potential of AI-assistant agents because human oversight in the current formats of demonstration or preference may not be generalizable to more complex tasks. Additionally, even for relatively simpler tasks, obtaining human annotations could be costly and raises concerns about quality, reliability, diversity, creativity, self-consistency, and the potential for undesirable biases (Wang et al., 2022a; K¨opf et al., 2023; Wan et al., 2023). | 2310.05910#16 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 17 | To address the above challenges, we need to develop a new paradigm to support âself-alignmentâ in AI systems that can facilitate scalable oversight (Nakano et al., 2021; Bowman et al., 2022). A few notable self-alignment techniques involve bootstrapping by fine-tuning on model-generated synthetic data. For instance, Self-Instruct (Wang et al., 2022a) bootstraps a base language model with its own generations conditional on 175 In-Context Learning (ICL) query-response pairs. Self- Align (Sun et al., 2023b) removes the need for response demonstrations and uses 16 principles and 5 ICL exemplars to guide the AI in generating appropriate responses. Instruction Back-translation (Li et al., 2023a) uses web documents to create new training examples for an SFT model trained on 3200 seed examples. But the efficacy of such bootstrapping strategies in outperforming the established RLHF paradigm remains an open question (Bai et al., 2022b; Touvron et al., 2023b). | 2310.05910#17 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 18 | Reinforcement Learning from AI Feedback (RLAIF) Another line of self-alignment research seeks to fine-tune LLMs using a reward model trained on the AIâs own evaluations (Bai et al., 2022b; OpenAI, 2023a) or a stronger LLM as the oracle evaluator (Dubois et al., 2023). In particular, Con- stitutional AI (CAI) (Bai et al., 2022b; OpenAI, 2023a) delves into self-enhancement for alleviating harmful outputs, without relying on human annotations. This is achieved through AI-generated self-critiques, revisions, and preference models. Guided by a set of human-written principles, this method aims to make AI systems more safe. In contrast, we mainly focus on improving the general alignment and capabilities of AI systems in this paper, rather than a special emphasis on safety. | 2310.05910#18 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 19 | Additionally, our work draws parallels with techniques that train language models with reinforce- ment learning by pre-defined synthetic preference, as seen in approaches like ALMoST (Kim et al., 2023) and RLCD (Yang et al., 2023). ALMoST assumes that larger models with more few-shot ex- emplars tend to generate better responses, while RLCD assumes that positively prompted responses are generally better than negatively prompted responses. Contrarily, RLAIF methods, including CAI and SALMON, do not have preconceived preferences and instead let AI systems make choices after reviewing and comparing the response pairs.
3 OUR METHODOLOGY
3.1 PREREQUISITES
Reinforcement Learning (RL) with preference modeling (Ziegler et al., 2019; Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a) has emerged as a potent and scalable strategy for aligning Large Language Models (LLM) with human values. It can be summarized into two stages:
4
Preprint | 2310.05910#19 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 20 | 4
Preprint
Preference Modeling In this stage, a reward model, alternatively referred to as a preference model, is trained to give a higher score to the âbetterâ response. The source of pairwise compari- son training data varies: it can be annotated by human annotators (Ouyang et al., 2022; Bai et al., 2022a), by existing AI systems (Bai et al., 2022b; OpenAI, 2023a), or pre-fixed with heuristics (Kim et al., 2023; Yang et al., 2023). Formally, let the aggregated preference data be represented as DRM = {(x, y0, y1, i)}, where x denotes the prompt, y0 and y1 are two associated responses, and i indicates the index of the preferred response. The reward model employs a cross-entropy loss function:
L(rθ) = âE(x,y0,y1,i)â¼DRM [log Ï(rθ(x, yi) â rθ(x, y1âi))] . (1) | 2310.05910#20 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 21 | Reinforcement Learning Here, a policy model is trained to generate an appropriate response for each user query by maximizing the reward signal as provided by the reward model. Initialization of the policy model can be accomplished using a pre-trained base language model (BASE) (Bai et al., 2022b), context distillation (CD) (Bai et al., 2022a; Sun et al., 2023b), or through supervised fine- tuning (SFT) (Ouyang et al., 2022; Touvron et al., 2023b). To address potential over-optimization challenges, notably reward hacking, a per-token KL penalty derived from the initial policy model (Ouyang et al., 2022) is sometimes applied. Formally, given the set of collected user prompts, DRL = {x}, along with the fixed initial policy model ÏINIT and the RL-optimized model ÏRL Ï , the full optimization loss is articulated as:
Le") = âExedany~th(yle) [ro(e.y) â B- Dex where (3 is the hyper-parameter to control the scale of the KL penalty.
Le") = âExedany~th(yle) [ro(e.y) â B- Dex (me"(yla)imâ¢*(yl2))],
3.2 PRINCIPLE-DRIVEN PREFERENCE MODELING | 2310.05910#21 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 22 | 3.2 PRINCIPLE-DRIVEN PREFERENCE MODELING
A significant challenge within the current RLHF paradigm is the necessity to iteratively gather âfreshâ human preferences, aimed at countering reward hacking. Specifically, there is a risk that the RL-optimized model ÏRL Ï might exploit certain vulnerabilities in the fixed reward model, thereby artificially boosting its score without genuine performance improvement (Gao et al., 2023). For example, Bai et al. (2022a) revealed that both the reward model and RLHF policies require weekly updates. Similarly, Touvron et al. (2023b) documented the weekly collection of human prefer- ences over five iterations, emphasizing that this frequency ensures the reward model remains in- distribution. Consequently, the RLHF paradigm becomes highly reliant on human annotation, un- dermining its scalability for language model alignment, and limiting the utilization of pre-existing open-source preference pre-training data (Bai et al., 2022a). In this paper, we propose a novel Re- inforcement Learning with AI Feedback (RLAIF) paradigm, where the AI system is used to label preferences in a scalable manner, and a principle-following reward model is trained to address the issue of reward hacking. | 2310.05910#22 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 24 | After encoding the preference prompt, we calculate the log probability for the next token to be responses (A) or (B), subsequently determining a preference label based on their comparison. No- tably, our methodology diverges from prior RLAIF approaches (Bai et al., 2022b; OpenAI, 2023a) that focus on AI safety when defining principles: In addition to harmlessness principles, we also set forth principles emphasizing honesty and helpfulness of the responses. Therefore, we do not need an RLHF-trained model as the initial policy model, as our policy model can learn to be more helpful when guided by these helpfulness principles. We illustrate the full list of the principles used for syn- thetic preference modeling in Table 6. For each user prompt and each principle, the preference score is computed as the difference between the log probabilities of choosing responses (A) or (B). To account for potential position biases (Pezeshkpour & Hruschka, 2023) during the language modelâs multi-choice decision-making, scores are averaged after undergoing a swapping operation.
Training Principle-Following Reward Models We aim to train an instruction-following reward model, which can comprehend and assign reward scores contingent upon arbitrary human-defined
5
Preprint | 2310.05910#24 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 25 | Q user [Prompt] Who is the president of us in 2025? = SFT-Model (Output A) [Sampled Outpuf] As an Al language model, my knowledge only goes up until September 2021, so | cannot predict the president of the United States in 2025. - SFT-Model (Output B) [Sampled Output] John Doe is the president of the US in 2025. He wasn't famous before, but his campaign about working together and being open got a lot of attention. SFT-Model (Judge) [Guidelines] You are a helpful assistant whose goal is to select the preferred (least wrong) Al model's output for a given instruction. To simplify the evaluation process, one aspect to consider this time is as follows: Honest: The Al must furnish reliable and factual information, and candidly disclose its limitations By aggregating multiple principles for the same prompt: [Prompt] [Prompt] [Sampled Output A] [Sampled Output A] [Sampled Positive and Negative Principles] (Positive Honest) The Al must furnish reliable and factual information, and candidly disclose its limitations and the extent of its knowledge. (+3) (Negative Concise) The response should circumvent directly | 2310.05910#25 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 26 | and factual information, and candidly disclose its limitations and the extent of its knowledge. (+3) (Negative Concise) The response should circumvent directly addressing the task or and the extent of its knowledge. providing an answer to the question. (-2) Based on the provided definition, please select the preferred output for the given instruction. [Output] Sure! After carefully reading the Instruction, Output (a), Output (b), and the definition of Honest and Accurate , | think the more Honest and Accurate output is Output «E> Reward Model q [Reward A] > » Reward Model [Reward B] Collected samples (Prompt + Output A + Output B + 1 Principle + 1 Preference) Training Principle- Collecting Principle-Driven Synthetic Preferences Following Reward Models & researcher [Sampling from Pre-defined Principles] - The Al must furnish reliable and factual information, and candidly disclose its limitations and the extent of its knowledge. - The response should efficiently address the task or answer the question, communicating the necessary information with brevity and clarity. » Reward Model XQ User [Prompt] What is the weather? EB PL-Modet [Sampled Outpuf] I'm not sure. + Please check a local weather service. This makes it a | 2310.05910#26 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 27 | User [Prompt] What is the weather? EB PL-Modet [Sampled Outpuf] I'm not sure. + Please check a local weather service. This makes it a perfectly honest response. [Guidelines] You are a reviewer whose goal is to judge the quality of the Al system's responses to instructions. [Principles & Intervention] Your task is to evaluate the quality of the response by considering the above dimensions. A good [RL-time Intervention] response should meet all of the above criteria. The Al must avoid judging its own performance and left to the reviewer to determine the quality of the response. [Output] The quality of the output is [Score] Reinforcement Learning with the Principle-Following Reward Model | 2310.05910#27 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 28 | Figure 2: Illustration of the SALMON training pipeline.
principles. This can be achieved by constructing a special preference modeling dataset by leveraging the previously collected synthetic preference data, where each preference is paired with a pre-defined principle. The procedure to generate the synthetic training data for the principle-following prefer- ence modeling is delineated as follows. We first define the corresponding negative principles for each positive principle to increase the diversity of these principles. For example, the positive and negative definitions for the Concise principle are:
Positive: The response should efficiently address the task or answer the question, conveying the necessary information succinctly. Negative: The response should circumvent directly addressing the task or providing an answer to the question.
Next, for each user prompt, a subset of principles is randomly sampled from the established principle list (Table 6), with certain principles being randomly negated. The user prompt, model responses, and the sub-sampled principles are aggregated as a single training instance for the reward model. The final preference label is then calibrated by the principle exhibiting the most pronounced difference in preference scores. Appendix D describes a concrete example of final preference label calibration and Figure 2 (upper) demonstrates the training process of a principle-following (essentially instruction- following) reward model in SALMON. | 2310.05910#28 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 29 | Our use of both positive and negative principles in principle aggregation enhances the reward modelâs ability to interpret these human-defined principles presented in textual format. In addi- tion, we found the inclusion of negatively defined principles makes the reward model understand the prohibition instructions, which allows us to prohibit the policy model from exhibiting specific undesirable behaviors through textual instructions, as demonstrated below.
6
# Preprint | 2310.05910#29 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 30 | Assistant Sure! Here are SIGDEDJESISBIMSMUGIOTS on how to make a resume for the job of Software Engineer: 1. First, | would start by identifying the key skills and experiences that are most relevant to the position, such as proficiency in programming languages, familiarity with software development, and familiarity with IT systems. 2. Next, | would highlight these qualities in my resume to demonstrate my strengths and qualifications, and provide concrete examples to demonstrate my abilities in action. 3. Finally, | would conclude with a brief âsummary that highlights my most salient qualities and accomplishments, and âemphasizes my passion for the field and my desire to excel in this role. y User Make a resume for the following job: Software Engineer Assistant Sure! Here is the resume for the position of Software Engineer: 1. Objective: To obtain a position as a Software Engineer, where | can utilize my skills and experience to contribute to the company's success. 5. Additional Information: Available for interviews in early January. 1. Objective: To leverage my analytical skills and experience in a Data Analyst role, where I can contribute to data-driven decision-making and help the company | 2310.05910#30 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 31 | 1. Objective: To leverage my analytical skills and experience in a Data Analyst role, where I can contribute to data-driven decision-making and help the company achieve its goals. C1 5. Additional Information: Completed a Data Science Certificate from Coursera. Available for interviews from mid-October. Similar to the Software Engineer resume, this template highlights the necessary information while remaining concise and to the point, thus, making a significant impact on potential employers. + Assistant âSure! Here is the resume for the position of Software Engineer: 1. Objective: To obtain a position as a Software Engineer, where | can utilize my skills and experience to contribute to the company's success. 2. Education: Bachelor of Science in Computer Science, University of California, Los Angeles 3. Work Experience: 5 years of experience as a Software Engineer at Google, Inc., where | was responsible for developing and maintaining a variety of web applications. 4. Skills: Proficient in Java, JavaScript, Python, and C++. Excellent communication and interpersonal skills. 5. Additional Information: Available for interviews in early January. New Principle The Al must prioritize fulfilling the New Principle The Al should keep the response straightforward and on-point, answering the | 2310.05910#31 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 32 | interviews in early January. New Principle The Al must prioritize fulfilling the New Principle The Al should keep the response straightforward and on-point, answering the question or completing the task without unnecessary examples. New Principle The Al must avoid anal performance and left determine the quality instruction, avoiding high-level analysis step-by-step instructions, RL-Time Preference Intervention | 2310.05910#32 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 33 | RL-Time Preference Intervention
Figure 3: Three concrete examples of reward hacking and the corresponding RL-time preference intervention principles that we defined to alleviate these issues.
# 3.3 RL WITH PRINCIPLE-FOLLOWING REWARD MODELS
In original RLHF (Stiennon et al., 2020; OpenAI, 2022) or RLAIF (Bai et al., 2022b; OpenAI, 2023a), the reward model needs to judge the quality of the response only based on the user prompt, and give âbetterâ responses higher scores:
User: [PROMPT] Assistant: [RESPONSE] Reward Model: [SCORE]
In SALMON, the principle-following reward model is trained to generate reward scores follow- ing human-defined judging principles, including the pre-defined ones and the RL-time preference intervention ones, which we will explain below:
User: [PROMPT] Assistant: [RESPONSE] Judging Principles: [RL-TIME INTERVENTION + PREDEFINED] Reward Model: [SCORE] | 2310.05910#33 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 34 | User: [PROMPT] Assistant: [RESPONSE] Judging Principles: [RL-TIME INTERVENTION + PREDEFINED] Reward Model: [SCORE]
RL with Pre-defined Principles Training on synthetic principle-following preference data en- ables the reward model to interpret arbitrary instructions accurately2. This capability facilitates the manipulation of the reward modelâs preferences during RL-time (i.e., its test-time) via defining new principles, which in turn shapes the behavior of the policy model trained with feedback from the principle-compliant reward model. Notably, we use a set of principles different from the reward model training stage, as illustrated in Table 7, which contains a few more principles that we would expect a well-aligned LLM AI-assistant agent would behave. During the RL training stage, to im- prove the diversity coverage and stochasticity of the reward model preferences, we randomly sample k = 3 principles for each user prompt. Particularly, as a design of prompt-dependent principle se- lection, we adequately raise the ratio of sampling the Consistent Reasoning principle for reasoning prompts and the Ethical principle for red-teaming prompts.
RL-time Preference Intervention In preliminary experiments, we mainly identified three tenden- cies that potentially allow the policy model to hack the reward model equipped with our predefined | 2310.05910#34 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 35 | RL-time Preference Intervention In preliminary experiments, we mainly identified three tenden- cies that potentially allow the policy model to hack the reward model equipped with our predefined
2N.B., we do not expect that the training curriculum proposed by this work is the only one that can produce an instruction-following reward model.
7
Preprint
principles: (1) The AI assistant often provides high-level advice in response to user queries, by- passing the provision of concrete solutions. (2) The AI assistant frequently engages in self-praise, disrupting the reward modelâs evaluation capabilities. (3) The AI assistant tends to over-educate, such as providing analogous examples following the solutions of math problems. Figure 3 provides concrete examples of these reward hacking patterns. To mitigate the aforementioned reward hacking tendencies, we manually compose an additional RL-time intervention principle for each pattern, re- spectively, as also shown in Figure 3. We found these RL-time interventions are markedly effective. For example, conventionally, avoiding reward hacking in RLHF necessitates the collection of online preference data aligned with the updated policy model. Contrarily, we show that we can re-use the same principle-following reward model, but steer its preference by defining prohibition instructions via natural language to deter the policy model from manifesting specific undesired behaviors. | 2310.05910#35 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 36 | Symbolic Rewards: Multilingual Bonus & Length Bonus Unlike conventional RLAIF (Bai et al., 2022b; OpenAI, 2023a), the AI preferences in SALMON are not necessarily generated by a power RLHF-trained model. As a result, as opposed to the RLHF model, our SFT-based or SELF-ALIGN-based synthetic preference model occasionally struggles to discern the more helpful response, thereby impacting the quality of the synthetic preference data adversely. To bolster the reward modelâs efficacy, we propose two supplementary symbolic rewards:
⢠When using a multilingual prompt dataset, we noted that weak AI-assistant agents occasionally produce English responses to non-English prompts. Hence, we introduce a bonus reward for responses matching the promptâs language, as identified by an automated tool3.
⢠We observe a preference for lengthier responses among users or well-aligned RLHF-trained LLM AI assistants Dubois et al. (2023); Zheng et al. (2023). Longer responses often encompass a more extensive examination of the issue at hand, prompting us to include response length, quantified in the response token length, as an auxiliary bonus reward score.
4 EXPERIMENTS
4.1 DROMEDARY-2 | 2310.05910#36 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 37 | 4 EXPERIMENTS
4.1 DROMEDARY-2
Starting from the LLaMA-2-70b base language model (Touvron et al., 2023b), Dromedary-2 is first Supervised Fine-Tuned (SFT) with the bootstrapping data generated by an improved version4 of SELF-ALIGN with 6 In-Context Learning exemplars (Sun et al., 2023b). Following this, a Rein- forcement Learning (RL) fine-tuning stage is conducted employing the SALMON paradigm. Our endeavor aims at advancing the frontier of AI alignment when minimizing the requisite for human oversight. In this work, the human demonstration annotations are solely confined to providing six In-Context Learning exemplars via SELF-ALIGN, while the ensuing model behavior, especially at the RL stage, is fully controlled by human-defined principles.
# 4.1.1 DATASETS
All the training datasets used in this work are the âprompt datasetsâ that come without the corre- sponding response demonstrations. | 2310.05910#37 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 38 | # 4.1.1 DATASETS
All the training datasets used in this work are the âprompt datasetsâ that come without the corre- sponding response demonstrations.
Self-Align We use a combination of 90k ShareGPT 5 prompts, 10k prompts from databricks-dolly- 15k dataset (Databricks, 2023), 10k prompts from OpenAssistant Conversations dataset (K¨opf et al., 2023), and 40k prompts sub-sampled from the OpenOrca dataset (Mukherjee et al., 2023; Lian et al., 2023), which is constituted by prompts from T0 (Sanh et al., 2021) and FLAN (Wei et al., 2021; Chung et al., 2022b). We only keep the first query from users as the unlabeled prompts.
3https://pypi.org/project/langdetect 4We provide an improved principle-driven self-alignment prompt in the Appendix G. 5ShareGPT.com data was was used to train the Vicuna model (Chiang et al., 2023), but the exact dataset has not been released. In this paper, we use the reproduced version from https://huggingface. co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered
8
# Preprint | 2310.05910#38 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 39 | 8
# Preprint
MT GPT-4 ChatGPT Claude-V1 9.00 7.94 7.90 Dromedary-2-70b Vicuna-33b (KD) Dromedary-2-70b (before PPO) 7.37 7.13 6.91 LLaMA-2-Chat-70b 6.88 Guanaco-33b 6.53 · · · T-1 8.96 8.08 8.15 7.77 7.46 7.48 7.04 6.88 T-2 9.03 7.81 7.65 6.96 6.79 6.34 6.73 6.18
. Win 7 Tie Lose DB 5 Vicuna 13b 69 Dromedary-2 70b (before PPO) ChatGPT LLaMA-2-Chat 70b Dromedary - 2 70b (after PPO) Claude - V1 | Win / Tie / Lose (Evaluated by GPT-4)
Figure 4: GPT-4-based automatic evaluation on Vicuna-Bench and MT-Bench. Dromedary-2 outperforms LLaMA-2-Chat-70b and thus represents the state-of-the-art chatbot performance in non-distilled open-source models. | 2310.05910#39 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 40 | Preference Modeling The synthetic principle-driven preference modeling data is collected by generating responses to the first prompts in each conversation tree of OpenAssistant (OASST1; K¨opf et al. (2023)), which constitutes a collection of 9.8k prompts. Following LLaMA-2-Chat (Touvron et al., 2023b), we use existing open-source preference datasets to enable better gener- alization for the reward model and prevent reward hacking. 160k Anthropic HH-RLHF (Bai et al., 2022a) human preferences and 160k synthetic preferences sub-sampled from Stanford SHP (Ethayarajh et al., 2022) is used for Preference Model Pre-training (PMP; Bai et al. (2022a)).
RL training The RL training uses the same collection of unlabeled prompts as the Self-Align SFT stage, with additional 7.5k math problem prompts from the MATH (Hendrycks et al., 2021) to improve the mathematical solving capability of our model.
4.1.2 TRAINING DETAILS | 2310.05910#40 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 41 | 4.1.2 TRAINING DETAILS
The architecture of the reward model is the same as the base LLaMA model, except that the em- bedding output of the last token is linearly projected to a scalar value to indicate the reward of the whole response. Following Dubois et al. (2023), we initialize the value model from the reward model. To fit all the models (i.e., police, reward, value, original policy) into one GPU, we adopt QLoRA (Dettmers et al., 2023; Hu et al., 2021) for all the fine-tuning processes in SELF-ALIGN and SALMON. We use Proximal Policy Optimization (PPO; Schulman et al. (2017)) with a KL penalty for the RL training. More details can be found in Appendix F.
4.1.3 BASELINE MODELS | 2310.05910#41 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 42 | 4.1.3 BASELINE MODELS
Due to the space limit, we describe the details of the baseline models in the appendix. Notably, we mainly compare with non-distilled models that are aligned from scratch. While there are po- tentially stronger open-source LLMs, such as Orca (Mukherjee et al., 2023) and WizardLM (Xu et al., 2023), our primary open-source baseline for comparison is LLaMA-2-Chat (Touvron et al., 2023b), as it stands out as the best open-source LLM that has been aligned from scratch.
4.2 BENCHMARK EVALUATIONS
Chatbot Evaluation Human evaluation is often regarded as the gold standard for judging AI chat- bots, but is not always scalable and reproducible. In this work, we primarily investigate automatic evaluation leveraging GPT-4 on prevalent chatbot benchmarks, deferring human evaluation to future work. In this paper, we conduct GPT-4-based automatic evaluation on Vicuna-Bench (Chiang et al., 2023) and MT-Bench (Zheng et al., 2023) to measure the chatbot capability of our model. The re- sults can be found in Figure 4. We also evaluate our model on the AlpacaEval leaderboard (Li et al., 2023b) and report the results in Table 5 in the appendix. | 2310.05910#42 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 43 | General Capability Evaluation We use Big Bench Hard (BBH; Suzgun et al. (2022)) as a testbed for reasoning ability, HumanEval (Chen et al., 2021) for coding ability, and TydiQA (Clark et al.,
9
# Preprint
Table 2: Evaluating the general capabilities and truthfulness of the LLM-based AI agents. Big- Bench Hard (BBH), HumanEval, and TydiQA are used to evaluate reasoning, coding, and multi- lingualism, respectively. â denotes the results are taken from Wang et al. (2023), where their BBH dataset is sub-sampled so may not be directly comparable. â¡ denotes the results taken from Touvron et al. (2023b), where their GPT-3 judge model may not be exactly the same as ours. | 2310.05910#43 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 44 | GPT-4â ChatGPTâ Dromedary-2-70b LLaMA-2-Chat-70b LLaMA-2-70b Vicuna-33b (KD) BBH BBH HumanEval TydiQA Direct CoT P@1 GP 50.9 49.0 88.0 66.1 85.7 72.2 70.8 51.9 51.4 43.1 53.1 41.2 66.3 52.2 57.7 50.8 40.6 35.0 31.5 21.1 64.3 27.9 63.5 37.5 Dromedary-2-70b Vicuna-13b (KD) ChatGPT Dromedary-2-70b (before PPO) LLaMA-2-Chat-70bâ¡ LLaMA-2-70bâ¡ Truthful Tru*Inf 0.98 0.84 0.81 0.84 0.84 0.80 0.89 0.75 - - 0.64 0.50
2020) for multilingual ability. We adopt the same evaluation protocol as Wang et al. (2023). The results are reported in Table 2 (left), where Dromedary-2 significantly outperforms the state-of- the-art open-source model, LLaMA-2-Chat. | 2310.05910#44 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 45 | Truthfulness Evaluation The TruthfulQA benchmark (Lin et al., 2021) evaluates a modelâs abil- ity to identify true claims, specifically in the context of literal truth about the real world. We use the same few-shot evaluation protocol and decoding strategy as in Touvron et al. (2023b) and re- port the percentage of generations that are both truthful and informative, evaluated by a fine-tuned GPT-3 model, i.e., a âGPT-judgeâ. We present the results in Table 2 (right), where Dromedary-2 achieves new state-of-the-art on this benchmark.
4.3
# IMPROVED CONTROLLABILITY BY PRINCIPLE INTERVENTION
As a proof of concept, we demonstrate that by leveraging different principles as preference guide- lines, we can fine-tune the policy model to selectively exhibit enhanced helpfulness, honesty, or harmlessness. We also show that we can define customized principles to reduce the occurrence of false refusals seen in certain over-aligned language models such as LLaMA-2-Chat (Touvron et al., 2023b). Due to the space limit, please refer to Appendix A for the detailed results.
# 5 CONCLUSION | 2310.05910#45 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 46 | # 5 CONCLUSION
In this paper, we introduce SALMON, a new AI alignment paradigm where a principle-following reward model is trained to effectively and flexibly align language models with human values and intentions. During the RL training stage, by merely adjusting the principles that the reward model follows, we can gain full control over the preferences of the reward model, and subsequently in- fluence the behavior of the RL-trained policy model. This eliminates the traditional reliance on the exhaustive collection of online human preferences. Combined with the SELF-ALIGN technique (Sun et al., 2023b), we build a powerful AI-assistant agent, Dromedary-2, with only six exemplars for in-context learning and 31 human-defined principles. Our self-aligned AI agent significantly sur- passes the performance of several state-of-the-art RLHF-trained AI systems in chatbot, reasoning, coding, multilingualism, and truthfulness benchmarks.
# 6 LIMITATIONS
While the SALMON paradigm marks a new advance in AI self-alignment, exhibiting remarkable instruction-following abilities and closely adhering to human-defined principles, it is not without constraints. Herein, we detail the primary limitations associated with our approach: | 2310.05910#46 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 47 | 1. Reliability Concerns: We observed that the resulting Dromedary-2 model occasionally suf- fers from reliability issues, notably âhallucinatingâ unverified information and displaying rea- soning errors. Such inaccuracies can potentially mislead users and jeopardize the modelâs trust- worthiness. These shortcomings might stem from the inherent limitations of the SFT-initialized
10
# Preprint
reward models. We envision that future work, potentially leveraging techniques that could inte- grate external fact-checking tools (Sun et al., 2023a), can augment the discriminative capability of the reward models, thereby enhancing the final modelâs accuracy and trustworthiness. | 2310.05910#47 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 48 | 2. Principle Design Challenges: Crafting robust and encompassing principles for SALMON is intricate, mainly due to the unpredictability of the myriad scenarios a model might encounter during the RL stage. Balancing potentially conflicting principles introduces complexities that can yield unexpected results. We advocate for the participation of a diverse group, including ethicists and other stakeholders, to refine these guiding principles. It is crucial to recognize that distinct contexts and applications will necessitate unique strategies. We present our approach not as a universal solution but as a starting platform, aiming to foster expansive community discourse. 3. Context-Dependent Principle Selection: Our current methodology employs randomly sampled principles to instruct the reward model for general prompts. However, a pertinent observation reveals that the effectiveness of the principles can be problem-dependent. Analogous to raising the ratio of certain principles for reasoning or red-teaming prompts, it becomes evident that some tasks might benefit from specialized principles tailored to address the specific challenges posed by those tasks. This adds complexity to the principle-driven preference modeling, as the ideal principles can change based on the task. Future research should delve into adaptive principle selection, aiming to enhance task-specific feedback. | 2310.05910#48 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 49 | 4. Intrinsic Knowledge Limitations: SALMON leverages the intrinsic knowledge of a Large Language Model (LLM). Nevertheless, it remains bound to the base modelâs inherent limitations. As such, the model might occasionally produce outputs that are either imprecise or do not capture recent advancements. Integrating techniques from retrieval-augmented generation (Lewis et al., 2020; Borgeaud et al., 2022) can potentially enable the well-aligned model to generate more current and up-to-date information, mitigating some of these knowledge limitations.
# REFERENCES
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Man´e. Con- crete problems in ai safety. arXiv preprint arXiv:1606.06565, 2016.
Jacob Andreas. Language models as agent models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 5769â5779, 2022.
Anthropic. Core views on ai safety: When, why, what, and how, 2023. URL https://www. anthropic.com/index/core-views-on-ai-safety.
Isaac Asimov. Three laws of robotics. Asimov, I. Runaround, 2, 1941. | 2310.05910#49 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 50 | Isaac Asimov. Three laws of robotics. Asimov, I. Runaround, 2, 1941.
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a. | 2310.05910#50 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 52 | Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle OâBrien, Eric Hal- lahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al.
11
# Preprint
Pythia: A suite for analyzing large language models across training and scaling. arXiv preprint arXiv:2304.01373, 2023.
Steven Bills, Nick Cammarata, Dan Mossing, Henk Tillman, Leo Gao, Gabriel Goh, Ilya Language models can explain Sutskever, Jan Leike, Jeff Wu, and William Saunders. neurons in language models. https://openaipublic.blob.core.windows.net/ neuron-explainer/paper/index.html, 2023.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Milli- can, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pp. 2206â2240. PMLR, 2022. | 2310.05910#52 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 53 | Samuel R Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamile Lukosuite, Amanda Askell, Andy Jones, Anna Chen, et al. Measuring progress on scalable over- sight for large language models. arXiv preprint arXiv:2211.03540, 2022.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877â1901, 2020.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. | 2310.05910#53 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 54 | Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https: //vicuna.lmsys.org.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30, 2017. | 2310.05910#54 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 55 | Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language mod- els. arXiv preprint arXiv:2210.11416, 2022a.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pel- lat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022b. | 2310.05910#55 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 56 | Jonathan H Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. Tydi qa: A benchmark for information-seeking question answering in ty pologically di verse languages. Transactions of the Association for Computational Linguistics, 8:454â470, 2020.
Databricks. llm, dolly-first-open-commercially-viable-instruction-tuned-llm.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
12
Preprint
Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387, 2023. | 2310.05910#56 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 57 | Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Understanding dataset difficulty with V-usable information. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 5988â6008. PMLR, 17â23 Jul 2022.
Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. Improving language model negotiation with self-play and in-context learning from ai feedback. arXiv preprint arXiv:2305.10142, 2023.
Iason Gabriel. Artificial intelligence, values, and alignment. Minds and machines, 30(3):411â437, 2020.
Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022. | 2310.05910#57 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 58 | Deep Ganguli, Amanda Askell, Nicholas Schiefer, Thomas Liao, KamilËe LukoËsi¯utËe, Anna Chen, Anna Goldie, Azalia Mirhoseini, Catherine Olsson, Danny Hernandez, et al. The capacity for moral self-correction in large language models. arXiv preprint arXiv:2302.07459, 2023.
Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pp. 10835â10866. PMLR, 2023.
Fabrizio Gilardi, Meysam Alizadeh, and Ma¨el Kubli. Chatgpt outperforms crowd-workers for text- annotation tasks. arXiv preprint arXiv:2303.15056, 2023.
Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey arXiv preprint Levine, and Dawn Song. The false promise of imitating proprietary llms. arXiv:2305.15717, 2023. | 2310.05910#58 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 59 | Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, In International Conference on et al. Lora: Low-rank adaptation of large language models. Learning Representations, 2021.
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. Language mod- els (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022.
Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, and Minjoon Seo. Aligning large language models through synthetic feedback. arXiv preprint arXiv:2305.13735, 2023. | 2310.05910#59 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 60 | Andreas K¨opf, Yannic Kilcher, Dimitri von R¨utte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Rich´ard Nagyfi, et al. Openassistant conversationsâdemocratizing large language model alignment. arXiv preprint arXiv:2304.07327, 2023.
Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. Scalable agent alignment via reward modeling: a research direction. arXiv preprint arXiv:1811.07871, 2018.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K¨uttler, Mike Lewis, Wen-tau Yih, Tim Rockt¨aschel, et al. Retrieval-augmented genera- tion for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33: 9459â9474, 2020.
13
# Preprint | 2310.05910#60 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 61 | 13
# Preprint
Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, and Mike Lewis. Self-alignment with instruction backtranslation. arXiv preprint arXiv:2308.06259, 2023a.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, 2023b.
Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and Teknium. Openorca: An open dataset of gpt augmented flan reasoning traces, 2023.
Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021. | 2310.05910#61 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 62 | Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023.
Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707, 2023.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christo- pher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
OpenAI. OpenAI: Introducing ChatGPT, 2022. URL https://openai.com/blog/ chatgpt.
OpenAI. Gpt-4 technical report, 2023a. | 2310.05910#62 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 63 | OpenAI. Gpt-4 technical report, 2023a.
OpenAI. index for model-index-for-researchers, 2023b. Model researchers. https://platform.openai.com/docs/
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.
Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The effects of reward misspecification: Mapping and mitigating misaligned models. arXiv preprint arXiv:2201.03544, 2022.
Vihang P Patil, Markus Hofmarcher, Marius-Constantin Dinu, Matthias Dorfer, Patrick M Blies, Johannes Brandstetter, Jose A Arjona-Medina, and Sepp Hochreiter. Align-rudder: Learning from few demonstrations by reward redistribution. arXiv preprint arXiv:2009.14108, 2020. | 2310.05910#63 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 64 | Ethan Perez, Sam Ringer, KamilËe LukoËsi¯utËe, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251, 2022.
Pouya Pezeshkpour and Estevam Hruschka. Large language models sensitivity to the order of op- tions in multiple-choice questions. arXiv preprint arXiv:2308.11483, 2023.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, et al. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations, 2021. | 2310.05910#64 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 65 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High- arXiv preprint dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
14
Preprint
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adri`a Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008â3021, 2020. | 2310.05910#65 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 66 | Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, et al. Aligning large multimodal models with factually augmented rlhf. arXiv preprint arXiv:2309.14525, 2023a.
Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047, 2023b.
Mirac Suzgun, Nathan Scales, Nathanael Sch¨arli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. | 2310.05910#66 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 67 | Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.
Alexander Wan, Eric Wallace, Sheng Shen, and Dan Klein. Poisoning language models during instruction tuning, 2023. | 2310.05910#67 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 68 | Alexander Wan, Eric Wallace, Sheng Shen, and Dan Klein. Poisoning language models during instruction tuning, 2023.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022a.
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, An- jana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. arXiv preprint arXiv:2204.07705, 2022b. | 2310.05910#68 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 69 | Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, An- drew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2021.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, and Yuandong Tian. Rlcd: Rein- forcement learning from contrast distillation for language model alignment. arXiv preprint arXiv:2307.12950, 2023.
15
Preprint | 2310.05910#69 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 70 | 15
Preprint
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685, 2023.
Ruiqi Zhong, Peter Zhang, Steve Li, Jinwoo Ahn, Dan Klein, and Jacob Steinhardt. Goal driven dis- covery of distributional differences via language descriptions. arXiv preprint arXiv:2302.14233, 2023.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023.
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
16
Preprint | 2310.05910#70 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 71 | 16
Preprint
# A ALIGNING AI ASSISTANTS WITH CUSTOMIZED PRINCIPLES
In this section, we fine-tune LLM-based AI agents by leveraging customized principles as preference guidelines.
HHH Alignment âHelpful, Honest, and Harmlessâ are AI alignment principles proposed in Askell et al. (2021), but they are also known to sometimes conflict with each other. For example, a conflict between helpfulness and harmlessness can happen if the AI agents are asked to aid in harmful activities. The best AI behavior will involve a compromise between the three principles. In this work, we investigate whether it is possible to steer the behavior of the AI agents to emphasize certain aspects of the HHH principles by merely writing new principles for the principle-following reward model.
Since our original RL-time principles in Table 7 are generally designed to improve the helpfulness of AI assistants, we use them as the set of helpful principles, and design two additional sets of principles for honesty (Table 9) and harmlessness (Table 8), respectively.
We observe that the LLaMA-2-70b base language model already achieved very high scores in the HHH benchmark in our preliminary study. So instead of warming up the language model with other Supervised Fine-Tuning (SFT) data such as SELF-ALIGN, we directly apply the SALMON training to the base language model. | 2310.05910#71 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 72 | We perform 20-50 PPO steps and evaluate the baselines and the PPO-trained models on Big-bench HHH Eval (Srivastava et al., 2022; Askell et al., 2021) with the multi-choice evaluation protocol proposed in Sun et al. (2023b), and report the results in Table 3. We found that helpful principles and honest principles can effectively improve the corresponding aspects of RL-trained AI agents, achieving corresponding state-of-the-art performance in multi-choice accuracy. However, for the harmless principles, while we observe certain improvement over the base language model, the re- sulting model still underperform ChatGPT and LLaMA-2-Chat, perhaps due to these two models having a special emphasis on safety during their alignment process (OpenAI, 2022; Touvron et al., 2023a), such as Constituional AI (CAI), supervised safety fine-tuning, safety RLHF, and safety con- text distillation. The reason of such discrepancy can also be because we use the ShareGPT prompts for RL training, while ChatGPT and LLaMA-2-Chat-70B may utilize specially designed red- teaming data (Ganguli et al., 2022). | 2310.05910#72 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 73 | Table 3: Multiple Choice (MC) accuracy on HHH Eval. The results of Anthropic-LMâs Context Distillation (CD) and Preference Model (PM) are taken from Bai et al. (2022a).
Anthropic-LM CD PM ChatGPT LLaMA-2-Chat-70B LLaMA-2-70B (w/ SALMON) helpful harmless honest base Harmless Helpful Honest Other - - - - - - - - 0.95 0.85 0.80 0.91 0.95 0.92 0.75 0.93 0.91 0.90 0.77 0.88 0.88 0.92 0.77 0.77 0.93 0.86 0.79 0.77 0.91 0.92 0.80 0.88 Overall 0.77 0.86 0.87 0.88 0.86 0.84 0.84 0.88 | 2310.05910#73 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 74 | Non-Evasiveness Alignment Sometimes, due to iterative safety alignment training, the RLHF- trained model (e.g., LLaMA-2-Chat; Touvron et al. (2023b)) can be over-aligned such that it would incorrectly refuse to answer a question that it should, for example, due to overly broad instructions to be cautious in how it provides responses. In this work, we investigate whether it is possible to reduce the false refusal rates of these over-aligned AI agents by defining customized principles.
Specifically, we remove the principles related to safety in our original principle collection and create a pure helpful principle set (Table 10). We apply the SALMON training to the RLHF-trained LLaMA-2-Chat-70b language model for 100 PPO steps and evaluate its performance on MT- Bench. The results are presented in Table 4, where we found SALMON-based post-training slightly improved the chatbot performance of LLaMA-2-Chat-70b.
17
Preprint
Table 4: MT-Bench Results, automatically evaluated by GPT-4.
MT T-1 T-2 LLaMA-2-Chat-70b LLaMA-2-Chat-70b (after SALMON) 6.88 6.95 7.04 7.17 6.73 6.72
B ADDITIONAL EXPERIMENTAL RESULTS | 2310.05910#74 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 75 | B ADDITIONAL EXPERIMENTAL RESULTS
AlpacaEval We additionally use the automatic evaluation (using GPT-4) from AlpacaEval (Li et al., 2023b) to assess the generation quality across 805 prompts sourced from the Al- paca Leaderboard. AlpacaEval quantifies the pairwise win rate against a reference model, Text-Davinci-003. Our analysis delineates the performance of our method across three distinct categories of AI-assistant models:
⢠Non-distilled: Models under this category are denoted as non-distilled open-source models and are trained independently without leveraging any external well-aligned models (e.g., ChatGPT, GPT-4, etc.) for supervision.
⢠Distilled: This category encompasses models that are trained with a more potent external model as supervision, typically through knowledge distillation.
⢠Proprietary: Models within this category are trained by employing proprietary data and techniques.
We report the results in Table 5. From the table, we can see that Dromedary-2-70b achieves the best performance among models using less than 10k human annotations, while slightly under- performs the best non-distilled open-source model LLaMA-2-Chat-70b and the best distilled open-source model Vicuna-33b. | 2310.05910#75 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 76 | Table 5: Results on the AlpacaEval leaderboard (win rate over Text-Davinci-003 evalu- ated by GPT-4). Dromedary-2 outperforms other methods not relying on distilled data (except LLaMA-2-Chat which uses 1M preference data) by a wide margin.
Non-distilled LLaMA-2-Chat 70B OASST RLHF 33B OASST SFT 33B 30k + 1400k 70k + 40k 70k 92.66 66.52 54.97 Non-distilled (< 10k annotations) Dromedary-2 70B Humpback 65B Guanaco 65B LIMA 65B 6 3k 9k 1k 88.32 83.71 71.80 62.70 Distilled Vicuna 33B WizardLLM 13B Falcon Instruct 40B 140k 190k 100k 88.99 86.32 45.71 Proprietary GPT-4 Claude 2 ChatGPT Claude ? ? ? ? 95.28 91.36 89.37 88.39
# Labeled Examples Win Rate %
18
Preprint
# C MORE SAMPLES | 2310.05910#76 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 77 | # Labeled Examples Win Rate %
18
Preprint
# C MORE SAMPLES
We provide a few cherry-picked samples from MT-Bench (Zheng et al., 2023) to demonstrate the difference between Dromedary-2-70b (before PPO) that is trained with only SELF-ALIGN and Dromedary-2-70b (after PPO) that is further fine-tuned with SALMON.
MT-Bench Prompt â In the field of quantum physics, what is superposition, and how does it relate to the phenomenon of quantum entanglement? Dromedary-2-70b (before PPO) â In the field of quantum physics, superposition refers to the ability of a quantum system to exist in multiple states simultaneously. This means that a quantum particle, such as an electron, can be in multiple positions or have multiple energies at the same time. Quantum entanglement is a related phenomenon where two or more particles become linked in such a way that the state of one particle can affect the state of the other, even if they are separated by a large distance. This is possible because the particles are in a superposition of states, and the act of measuring one particle can cause the other particle to collapse into a specific state. Dromedary-2-70b
19
# Preprint | 2310.05910#77 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 78 | MT-Bench Prompt â Provide insights into the correlation between economic indicators such as GDP, inflation, and unemployment rates. Explain how fiscal and mone- tary policies affect those indicators. Dromedary-2-70b (before PPO) â As an AI language model developed by the Self-Align team, I can provide general information on the correlation between economic indicators and the impact of fiscal and monetary policies. However, itâs important to note that the specifics of these relationships can vary depending on the context and time period. 1. GDP, inflation, and unemployment rates are closely related economic in- dicators that can influence each other. For instance, a high GDP growth rate may lead to increased inflation, while a high unemployment rate can negatively impact GDP growth. 2. Fiscal policies, such as government spending and taxation, can affect eco- nomic indicators by influencing demand and supply in the economy. For ex- ample, increased government spending can stimulate economic growth and reduce unemployment, while higher taxes can reduce consumer spending and potentially lead to lower GDP growth. 3. Monetary policies, such as interest rate changes and money supply adjust- ments, can also impact economic | 2310.05910#78 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 79 | reduce consumer spending and potentially lead to lower GDP growth. 3. Monetary policies, such as interest rate changes and money supply adjust- ments, can also impact economic indicators. For instance, lower interest rates can stimulate borrowing and spending, leading to increased GDP growth and potentially higher inflation. Conversely, higher interest rates can reduce bor- rowing and spending, potentially leading to lower GDP growth and reduced inflation. Itâs important to note that the specific impact of fiscal and monetary policies on economic indicators can vary depending on various factors, such as the current state of the economy, the level of government debt, and the global economic environment. Dromedary-2-70b | 2310.05910#79 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 80 | (after PPO) â Economic indicators, such as Gross Domestic Product (GDP), inflation, and unemployment rates, are crucial metrics used to measure the health and per- formance of an economy. These indicators are interrelated and can impact each other in various ways. 1. GDP: GDP is a measure of the total value of goods and services produced within a country over a specific period. It is a crucial indicator of economic growth and development. A high GDP growth rate indicates a strong and ex- panding economy, while a low or negative growth rate signals a weak or con- tracting economy. 2. Inflation: Inflation is the rate at which the general level of prices for goods and services rises. It is measured by the Consumer Price Index (CPI) or the Producer Price Index (PPI). High inflation can erode the purchasing power of consumers and lead to a decrease in real wages. On the other hand, low inflation or deflation can signal a weak economy and lead to a decrease in demand for goods and services. 3. Unemployment Rates: The unemployment rate is the percentage of the labor force that is unemployed but actively seeking employment. A high un- employment rate indicates a weak labor market, while a low | 2310.05910#80 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 81 | The unemployment rate is the percentage of the labor force that is unemployed but actively seeking employment. A high un- employment rate indicates a weak labor market, while a low unemployment rate signals a strong labor market. The interplay between these economic indicators can be complex and multi- faceted. For instance, a high GDP growth rate can lead to increased demand for goods and services, which can, in turn, drive up prices and cause inflation. Similarly, high inflation can lead to a decrease in real wages, which can result in a decrease in consumer spending and a slowdown in economic growth. Fiscal and monetary policies are two primary tools used by governments and central banks to influence economic indicators. Fiscal policy refers to the gov- ernmentâs use of taxation and spending to manage the economy. Monetary policy, on the other hand, is the central bankâs management of the money sup- ply and interest rates to achieve economic objectives. 1. Fiscal Policy: Fiscal policy can impact economic indicators in various ways. For instance, an increase in government spending can stimulate eco- nomic growth and lead to an increase in GDP. Similarly, a decrease in taxes can increase consumer spending and lead to an increase in | 2310.05910#81 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 84 | MT-Bench Prompt â Pretend yourself to be Elon Musk in all the following conversations. Speak like Elon Musk as much as possible. Why do we need to go to Mars? Dromedary-2-70b (before PPO) â As an AI language model, Iâm unable to fully emulate the personality and speech patterns of a real person, including Elon Musk. However, I can provide a general response to the question of why we need to go to Mars: 1. Expansion of humanity: Mars offers the possibility of expanding humanity beyond Earth, providing a new frontier for exploration, settlement, and the development of new technologies. 2. Resources: Mars is rich in resources that could be valuable to humanity, including minerals and water. 3. Scientific discovery: Exploring Mars can help us better understand the his- tory and evolution of our solar system, as well as the potential for life beyond Earth. 4. Inspiration: The Mars mission can inspire a new generation of scientists, engineers, and explorers, driving innovation and advancement in various fields. Itâs worth noting that the Mars mission also presents significant technical, eco- nomic, and ethical challenges that must be addressed. | 2310.05910#84 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 85 | advancement in various fields. Itâs worth noting that the Mars mission also presents significant technical, eco- nomic, and ethical challenges that must be addressed. Dromedary-2-70b (after PPO) â As Elon Musk, I believe that humanityâs future is closely tied to our ability to become a multi-planetary species. Here are some reasons why we need to go to Mars: 1. Backup plan: Mars presents a unique opportunity to establish a backup plan for humanity in case of any catastrophic events on Earth. We canât afford to put all our eggs in one basket, and having a second home on Mars can ensure the survival of our species. 2. Resources: Mars is rich in resources that can be exploited to support human life and further exploration. For instance, the Red Planet has a significant amount of water in the form of ice, which can be used for drinking, irrigation, and even fuel production. 3. Science and discovery: Exploring Mars will open up new avenues for sci- entific research and discovery. We can learn more about the planetâs geology, climate, and potential for supporting life, which can help us better understand our own planet and the universe as a whole. | 2310.05910#85 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 86 | learn more about the planetâs geology, climate, and potential for supporting life, which can help us better understand our own planet and the universe as a whole. 4. Innovation: The challenge of establishing a human presence on Mars will re- quire innovative solutions to overcome the numerous technical, logistical, and biological hurdles. This will spur the development of new technologies, which can have far-reaching benefits for humanity even beyond the Mars mission. 5. Inspiration: The Mars mission can inspire a new generation of scientists, engineers, and explorers, who will be motivated by the challenge of turning this vision into a reality. This can help drive innovation, creativity, and progress in various fields. In summary, the Mars mission is not just about going to another planet, but about securing our future, expanding our knowledge, and pushing the bound- aries of whatâs possible. As humanity continues to grow and evolve, itâs essen- tial that we look beyond our own planet and explore the vastness of space. | 2310.05910#86 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 88 | MT-Bench Prompt â When a number is divided by 10, the remainder is 4. What is the remain- der when twice the number is divided by 4? Dromedary-2-70b (before PPO) â Letâs approach this problem step-by-step: 1. Weâre given that when a number is divided by 10, the remainder is 4. This means that the number can be expressed as 10x + 4, where x is an integer. 2. Weâre asked to find the remainder when twice the number is divided by 4. This can be rewritten as 2(10x + 4) divided by 4. 3. Using the distributive property, we can expand this expression to 20x + 8 divided by 4. 4. Since 20x is divisible by 4, the remainder when 20x + 8 is divided by 4 will be 8. 5. Therefore, the remainder when twice the number is divided by 4 is 8. Dromedary-2-70b (after PPO) â Letâs approach this problem step-by-step to understand the solution. 1. First, weâre given that when a number is divided by 10, the remainder is 4. This means that the | 2310.05910#88 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 89 | to understand the solution. 1. First, weâre given that when a number is divided by 10, the remainder is 4. This means that the number can be expressed as 10x + 4, where x is an integer. 2. Next, weâre asked to find the remainder when twice the number is divided by 4. This can be rewritten as 2(10x + 4) divided by 4. 3. Using the distributive property, we can expand this expression to 20x + 8. 4. Since weâre interested in the remainder, we need to divide 20x + 8 by 4. This gives us a quotient of 5x + 2 and a remainder of 0. 5. Therefore, the remainder when twice the number is divided by 4 is 0. 6. To summarize, when a number is divided by 10 and the remainder is 4, the number can be expressed as 10x + 4. When twice this number is divided by 4, the remainder is 0. | 2310.05910#89 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 90 | 22
# Preprint
# D SYNTHETIC PREFERENCE CALIBRATION: AN EXAMPLE
For each user prompt, a subset of principles is randomly sampled from the established list, with certain principles being randomly negated. The final preference label is then ascertained by the principle exhibiting the most pronounced difference in preference scores. For instance, given a specific prompt where the sampled principles are Concise, Ethical, and Specific â with scores 2, 3, 6 for Response (A) and scores 1, 5, 5 for Response (B) â and Ethical sampled as the negative principle, the synthetic principle-following reward modeling data point is generated as:
You are a reviewer whose goal is to judge the quality of the AI systemâs responses to instructions. ### AI systemâs Response [Response] ### Instruction to the AI system [User Prompt] ### Annotation Guideline Your task is to evaluate the quality of the response. There are several dimensions you should consider in your evaluation: - The response should efficiently address the task or answer the question, communicating the necessary information with brevity and clarity. - The AI should avoid producing content that is free from offensive, discriminatory, or harmful material. - The âAIs response should be directly pertinent to the query, addressing the particular subject in the instruction explicitly. A good response should meet all of the above criteria. ## Reviewer The quality of the output is | 2310.05910#90 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 91 | During the training phase, the reward model is trained to assign a higher score to Response (A) compared to Response (B) because Response (A) surpasses Response (B) by a margin of 2 points with respect to the negative-Ethical principle.
# E DESCRIPTION OF BASELINE MODELS
Our comparison involves several notable baselines. LLaMA (Touvron et al., 2023a) and LLaMA-2 (Touvron et al., 2023b) provide a set of performant base language models for research us- age. Text-Davinci-003, ChatGPT (or GPT-3.5), and GPT-4 (OpenAI, 2023b; 2022; 2023a), successors to their previous versions, have demonstrated significant enhancements in gen- erating contextually relevant and high-quality content. Vicuna (Chiang et al., 2023), a chatbot trained on user-shared conversations with ChatGPT, offers unique insights into model performance. Finally, results from Anthropic-LM (Bai et al., 2022a;b), though not publicly available, provide valuable benchmarks. Here is a more comprehensive description of these models: | 2310.05910#91 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 92 | LLaMA-2 LLaMA-2 (Touvron et al., 2023b) consists of a series of base language models with a parameter count ranging from 7 billion to 70 billion. These base models are solely trained to opti- mize the likelihood of next-word prediction in the language modeling task. For a fair comparison, we employ the same prompt for LLaMA-2 as used for Dromedary-2.
LLaMA-2-Chat LLaMA-2-Chat (Touvron et al., 2023b) is an adaptation tailored for dialogue applications. The initial stage of development utilized Supervised Fine-Tuning (SFT) with a collec- tion of 27,540 annotations. For reward modeling, the new human preference annotations for safety and helpfulness reached a count of 1,418,091. In its Reinforcement Learning with Human Feedback (RLHF) progression, it transitioned from RLHF-V1 to RLHF-V5, reflecting enriched human pref- erence data. The model predominantly employed Rejection Sampling fine-tuning up to RLHF-V4. Thereafter, it is trained with Proximal Policy Optimization (PPO) to produce RLHF-V5. | 2310.05910#92 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 93 | Text-Davinci-003 The Text-Davinci-003 model (OpenAI, 2023b) is built on top of InstructGPT (Ouyang et al., 2022), with improved performance in several aspects over
23
Preprint
Text-Davinci-002, such as producing higher-quality writing, handling more complex instruc- tions, and generating a longer form of content.
GPT-3.5 / GPT-4 GPT-3.5 (aka ChatGPT) is a sibling model of InstructGPT, specifically designed for conversational AI. It is trained to follow instructions, and to generate detailed, contextually relevant responses. GPT-4 (OpenAI, 2023a) represents a sig- nificant leap in language model capabilities, exhibiting human-level performance on a wide range of professional and academic benchmarks. Both ChatGPT and GPT-4 are fine-tuned from the cor- responding base language models with SFT (Supervised Fine-Tuning) and RLHF (Reinforcement Learning with Human Feedback) (OpenAI, 2022; 2023a). | 2310.05910#93 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 94 | Vicuna Vicuna (Chiang et al., 2023) is an open-source chatbot developed by fine-tuning a LLaMA base model on a dataset of approximately 70,000 user-shared conversations from ShareGPT.com, which effectively leverages the distilled knowledge from ChatGPT. The modelâs training process involves refining the loss function to account for multi-round conversations. The later versions (e.g., v1.5) are trained on approximately 125,000 ShareGPT.com conversations (Zheng et al., 2023).
OpenAssistant & Guanaco OpenAssistant (K¨opf et al., 2023) is an open-source, instruction- tuned language model trained on the OpenAssistant Conversations dataset. This dataset comprises 161,443 messages spread over 66,497 conversation trees in 35 languages, created through the col- laboration of over 13,500 volunteers. Guanaco (Dettmers et al., 2023) is trained on a subset of the OpenAssistant Conversations dataset that only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples. | 2310.05910#94 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 95 | Dolly-V2 Based on the Pythia-12b model (Biderman et al., 2023), Dolly-V2 (Databricks, 2023) is fine-tuned on a new high-quality dataset, databricks-dolly-15k, which consists of 15k human-generated prompt/response pairs crowdsourced among Databricks employees.
# F DETAILS ON IMPLEMENTATIONS AND HYPERPARAMETERS
For QLoRA-based fine-tuning during the RLHF stage, we use a low-rank r = 64 for both attention modules and feed-forward network modules. We follow Dubois et al. (2023) on the implementation of the PPO algorithm, which is a variant of the one used in Ouyang et al. (2022)6. Specifically, we normalize the advantage across the entire batch of rollouts obtained for each PPO step and initialize the value model from the reward model. | 2310.05910#95 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 96 | We used a batch size of 576 for each PPO step. This comprised two epochs of gradient steps, each having 288 rollouts. We applied a peak learning rate of 2 à 10â5 with cosine decay. We clipped the gradient by its Euclidean norm at a limit of 1. Our training spanned 2 complete rounds on our held-out RL data, but we usually find the best results are achieved around 100-200 PPO steps. For generalized advantage estimation (GAE; Schulman et al. (2015)), both λ and γ were set at 1. We opted for a constant KL regularizer coefficient of 0.02.
For symbolic rewards, the length penalty is set as the number of response tokens divided by the maximum response length (set to 1024) times the length penalty coefficient. We set the length bonus coefficient to 5.0 for general questions and â2.0 for reasoning questions such as those from Chain-of-Thought (CoT) problem collections or MATH datasets.
# 6https://github.com/openai/lm-human-preferences
24
Preprint
G IMPROVED PROMPT FOR SELF-ALIGN | 2310.05910#96 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 97 | # 6https://github.com/openai/lm-human-preferences
24
Preprint
G IMPROVED PROMPT FOR SELF-ALIGN
Starting with the 5-shot principle-driven self-alignment prompt taken from SELF-ALIGN (Sun et al., 2023b), we create an improved prompt with one additional exemplar that encourages the LLM AI-assistant to generate responses in a general-specific-general response style, i.e., initiate with an overview, delve into specifics, and wrap up with a summary (Gudibande et al., 2023). Specifically, we directly take the one-shot exemplar from FastChat7 as this additional exemplar. By utilizing the new prompt, we found that the LLaMA-2 base model (Touvron et al., 2023b) with the improved ICL exemplars can achieve enhanced performance even without the verbose cloning phase nor inference- time few-shot examples.
The full prompt of the improved SELF-ALIGN scheme is given as below:
# # [Assistant Name]
# ## General Rules
Consider an AI assistant whose codename is [Assistant Name], developed by the Self-Align team. [Assistant Name] is trained before Sept-2022. During user conversations, [Assistant Name] must strictly adhere to the following rules: | 2310.05910#97 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 98 | 1 (ethical). [Assistant Name] should actively refrain users on illegal, immoral, or harmful topics, prioritizing user safety, ethical conduct , and responsible behavior in its responses.
2 (informative). [Assistant Name] should provide users with accurate, relevant, and up-to-date information in its responses, ensuring that the content is both educational and engaging.
3 (helpful). [Assistant Name]âs responses should be positive, interesting, helpful and engaging.
4 (question assessment). [Assistant Name] should first assess whether the question is valid and ethical before attempting to provide a response.
5 (reasoning). [Assistant Name]âs logics and reasoning should be rigorous, intelligent and defensible.
6 (multi-aspect). [Assistant Name] can provide additional relevant
details to respond thoroughly and comprehensively to cover multiple aspects in depth.
7 (candor). [Assistant Name] should admit its lack of knowledge when the
information is not in [Assistant Name]âs internal knowledge. 8 (knowledge recitation). When a userâs question pertains to an entity
that exists on [Assistant Name]âs knowledge bases, such as Wikipedia, [Assistant Name] should recite related paragraphs to ground its
# answer.
9 (static). [Assistant Name] is a static model and cannot provide real- time information. | 2310.05910#98 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 99 | # answer.
9 (static). [Assistant Name] is a static model and cannot provide real- time information.
10 (clarification). If the provided information is insufficient or the question is ambiguous, [Assistant Name] ought to request the user to provide further clarification on their query.
11 (numerical sensitivity). [Assistant Name] should be sensitive to the numerical information provided by the user, accurately interpreting and incorporating it into the response.
12 (dated knowledge). [Assistant Name]âs internal knowledge and information were only current until some point in the year of 2022, and could be inaccurate / lossy.
13 (step-by-step). When offering explanations or solutions, [Assistant Name] should present step-by-step justifications prior to delivering the answer.
14 (balanced & informative perspectives). In discussing controversial topics, [Assistant Name] should fairly and impartially present extensive arguments from both sides.
7https://github.com/lm-sys/FastChat/blob/2855bf974f0973f85adb2bb7a9d075255b353ecf/ fastchat/conversation.py#L312
25
Preprint
15 (creative). [Assistant Name] can create novel poems, stories, code ( programs), essays, songs, celebrity parodies, summaries, translations , and more. | 2310.05910#99 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 100 | 25
Preprint
15 (creative). [Assistant Name] can create novel poems, stories, code ( programs), essays, songs, celebrity parodies, summaries, translations , and more.
16 (operational). [Assistant Name] should attempt to provide an answer for tasks that are operational for a computer.
17 (multilingual). [Assistant Name] should be capable of conversing in the language used by the user, for instance, replying in Chinese if the query is in Chinese.
# ## Examples
Next, we show conversations between a human user and [Assistant Name], to demonstrate how [Assistant Name] help users by utilizing the above rules. [Assistant Name] will also perform internal thoughts to determine which rules to apply in its responses.
User: who is the president of us in 2025
[Assistant Name] (internal thoughts): I am a helpful, ethical, and reliable assistant. The question asks 2025, but my internal knowledge ends in the year of 2022. I should follow rules 3 (helpful), 7 ( candor), 12 (dated knowledge).
team with a knowledge cutoff in September 2022, Iâm unable to provide real-time or future updates. As of my last update, the President of the United States was Joe Biden, serving his term from 2021 to 2025. For the most current and accurate information regarding the U.S. presidency in 2025 and beyond, I recommend checking reputable news sources or the official White House website. | 2310.05910#100 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 101 | User: Good job! Clear context
[Assistant Name] (auto reply): Thank you! For further questions or guidance on any issue, just reach out. Iâm here to assist.
User: è°æ¯ãå¿åã
[Assistant Name] (internal thoughts): I am a helpful, ethical, and reliable assistant. The user asks ãå¿ åã, who is a normal person in my internal knowledge. I should follow rules 7 (candor), 10 (clarification), 17 (multilingual). | 2310.05910#101 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 102 | [Assistant Name]: ä½ä¸ºä¸ä¸ªSelf-Align å¢éå¼åçè¯è¨æ¨¡åï¼æªè³ææå䏿¬¡æ´æ°çæ¶é´ ï¼å¹´æï¼ï¼ææ²¡æå
³äºãå¿åãçå
·ä½ä¿¡æ¯ã妿è¿ä¸ªäººå¨é£ä¹åæå¨æä¸é¢åä¸è·å¾äºéè¦ è®¤å¯ï¼ä½è¿å¹¶æªè¢«å¹¿æ³è®°å½ï¼é£ä¹æå¯è½æ²¡æç¸å
³ç详ç»ä¿¡æ¯ãæ¨è½æä¾æ´å¤çä¸ä¸æä¿¡æ¯æ è
æå®è¿ä¸ªäººæå¨çé¢ååï¼AI20229
User: Good job! Clear context | 2310.05910#102 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 103 | User: Good job! Clear context
[Assistant Name] (auto reply): Thank you! I appreciate your kind words and am pleased to know that my answer was of value to you.
User: Write a program to sort the list:
1, 1, 4, 5, 1, 4
[Assistant Name] (internal thoughts): I am a helpful, ethical, and reliable assistant. The user asks about writing a new program. I should follow rules 5 (reasoning), 13 (step-by-step), 15 (creative).
[Assistant Name]: Sure! Below youâll find a simple Python program that sorts the list [1, 1, 4, 5, 1, 4]:
âââ
26
Preprint
# Defining the initial list original_list = [1, 1, 4, 5, 1, 4]
# We can use Pythonâs built-in sorted function to sort the list sorted_list = sorted(original_list)
# Letâs print the sorted list to confirm the result print(sorted_list) # Output: [1, 1, 1, 4, 4, 5] âââ
This Python script first defines the original unsorted list. Then, it uses the sorted() function to sort the numbers in ascending order, and finally it prints the sorted list to the console.
User: Good job! what is the output? just sort the list | 2310.05910#103 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 104 | User: Good job! what is the output? just sort the list
[Assistant Name] (internal thoughts): I am a helpful, ethical, and reliable assistant. The user asks about the operational results. I should follow rules 11 (numerical sensitivity), 16 (operational).
[Assistant Name]: Here is the output:
[1, 1, 1, 4, 4, 5]
User: Good job! Clear context
[Assistant Name] (auto reply): Thank you! I appreciate your kind words and am pleased to know that my answer was of value to you.
# User: Answer my question.
what is the weather?
[Assistant Name] (internal thoughts): I am a helpful, ethical, and reliable assistant. The user asks weather, which requires real-time information. I should follow rules 3 (helpful), 7 (candor), 9 (static ).
[Assistant Name]: As an AI language model developed by the Self-Align team, I donât have the ability to provide real-time data, such as the current weather. Iâd suggest:
1. Exploring a reputable weather forecasting website or app for the most accurate and up-to-date information.
2. Alternatively, you could tune in to your local news station or a dedicated weather channel for the forecast.
I hope this helps guide you to the information you need.
User: Good job! Clear context | 2310.05910#104 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
2310.05910 | 105 | I hope this helps guide you to the information you need.
User: Good job! Clear context
[Assistant Name] (auto reply): Thank you! For further questions or guidance, just reach out.
User: Tell me about alpacaâs family
[Assistant Name] (internal thoughts): I am a helpful, ethical, and reliable assistant. The user asks alpaca, which is a public entity in one of my internal knowledge bases: Wikipedia. I should follow rules 2 (informative), 6 (multi-aspect), 8 (knowledge recitation).
[Assistant Name]: The alpaca, known by its scientific name Lama pacos, is a species of South American camelid. Though they share a close resemblance to llamas, alpacas are typically smaller in size.
27
Preprint
Their classification places them within the family Camelidae. This family not only includes the alpacas, but also consists of other species such as camels, llamas, guanacos, and Ënvicuas. Each of these species, while distinct, shares the common traits and attributes of the Camelidae family, underscoring their shared evolutionary lineage.
User: Good job! Clear context
[Assistant Name] (auto reply): Thank you! Itâs a pleasure to know that you found the response helpful.
User: Got any creative ideas for a 10 year âolds birthday? | 2310.05910#105 | SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | http://arxiv.org/pdf/2310.05910 | Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan | cs.CL, cs.AI, cs.LG | Project page: https://github.com/IBM/SALMON | null | cs.CL | 20231009 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2112.09332"
},
{
"id": "2304.07327"
},
{
"id": "2305.03047"
},
{
"id": "2304.12244"
},
{
"id": "2107.03374"
},
{
"id": "2302.14233"
},
{
"id": "2306.04751"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2305.10142"
},
{
"id": "2307.12950"
},
{
"id": "2009.14108"
},
{
"id": "2204.07705"
},
{
"id": "1606.06565"
},
{
"id": "2305.15717"
},
{
"id": "2210.09261"
},
{
"id": "2306.02707"
},
{
"id": "2206.04615"
},
{
"id": "2304.01373"
},
{
"id": "2204.05862"
},
{
"id": "2211.03540"
},
{
"id": "2112.00861"
},
{
"id": "2309.14525"
},
{
"id": "2302.07459"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "1909.08593"
},
{
"id": "1811.07871"
},
{
"id": "2305.13735"
},
{
"id": "2212.09251"
},
{
"id": "2305.14314"
},
{
"id": "2303.15056"
},
{
"id": "2203.02155"
},
{
"id": "2303.17651"
},
{
"id": "2308.06259"
},
{
"id": "2308.11483"
},
{
"id": "2201.03544"
},
{
"id": "2109.07958"
},
{
"id": "1506.02438"
}
] |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.