# Preference Ranking Optimization for Human Alignment

Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang

arXiv:2306.17492 [cs.CL, cs.AI], 30 June 2023. http://arxiv.org/pdf/2306.17492

# Abstract

Large language models (LLMs) often contain misleading content, emphasizing the need to align them with human values to ensure secure AI systems. Reinforcement learning from human feedback (RLHF) has been employed to achieve this alignment by combining a reward model, typically based on the Bradley-Terry paired comparison, with an RL algorithm such as Proximal Policy Optimization (PPO) to optimize LLM responses. However, RLHF exhibits complexity, instability, and sensitivity to hyperparameters. In this paper, we propose Preference Ranking Optimization (PRO) as an alternative to PPO for directly aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise Bradley-Terry comparison to accommodate preference rankings of any length. By iteratively contrasting the likelihood of generating responses, PRO instructs the LLM to prioritize the best response while progressively ranking the remaining responses. In this manner, PRO effectively transforms human alignment into aligning the probability ranking of $n$ responses generated by the LLM with the preference ranking of humans towards these responses. Experiments have shown that PRO outperforms existing alignment algorithms, achieving comparable results to ChatGPT and human responses through automatic-based, reward-based, GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more diverse, and higher-quality preference ranking sequences can consistently enhance the performance of human alignment.
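To make the ranking objective described above concrete, here is a minimal, illustrative sketch of a PRO-style loss in PyTorch, not the authors' reference implementation: candidates are assumed to be pre-sorted from most to least preferred, and at each step the current best candidate's likelihood is contrasted against all candidates ranked below it. The name `pro_ranking_loss` and the interface (a vector of per-response log-likelihoods) are assumptions for illustration; length normalization, temperature weighting, and any auxiliary SFT term a full implementation might use are omitted.

```python
import torch
import torch.nn.functional as F

def pro_ranking_loss(log_probs: torch.Tensor) -> torch.Tensor:
    """Ranking part of a PRO-style objective (illustrative sketch).

    log_probs[i] holds the (length-normalized) log-likelihood the policy
    assigns to the i-th candidate response, with candidates already sorted
    from the human-preferred best (index 0) to the worst (index n-1).
    """
    n = log_probs.shape[0]
    loss = log_probs.new_zeros(())
    for k in range(n - 1):
        # Contrast the current best remaining response against everything
        # ranked below it: -log softmax over positions k..n-1, taken at k.
        loss = loss - F.log_softmax(log_probs[k:], dim=0)[0]
    return loss

# Toy usage: three candidates already ordered best -> worst by human preference.
scores = torch.tensor([-1.2, -1.5, -2.3], requires_grad=True)
loss = pro_ranking_loss(scores)
loss.backward()  # gradients push better-ranked responses toward higher likelihood
print(float(loss))
```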
In Figure 3, we present the impact of various expansion strategies on the effectiveness of PRO after expanding sequences of different lengths. Our observations are as follows:

Longer ranking, better results: Overall, longer ranking sequences generally lead to improved performance for most strategies, which is an exciting finding, as expanding the ranking sequence is a relatively straightforward task compared to designing new prompts.

Better added responses, better results: If a single model is used to generate additional responses, supplementing one response is sufficient when the quality is average; with Alpaca, for example, adding more responses provides limited improvement. However, when the quality of responses is high, as with ChatGPT, adding more responses leads to consistent performance gains. This could potentially offer new insights for the design of future Human Alignment algorithms.

More diversified added responses, better results: We have also discovered that incorporating lower-quality responses may actually improve the model's results compared to using only high-quality responses. Interestingly, when the sequence length is 4, Ascending (blue line) = Curie+Alpaca surpasses the performance of Alpaca (red line) = Alpaca+Alpaca, even though Curie's response quality is not as good as Alpaca's. We believe this is because diverse responses, even if they are negative examples, help the language model become more aware of behaviors that should be avoided, thereby enhancing overall performance. Lastly, by combining Curie, Alpaca, and ChatGPT, we achieve performance close to using three ChatGPT responses, demonstrating the truth in the saying, "Two heads are better than one."
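As a concrete illustration of how such expansion strategies can be put together, the sketch below pools responses from additional models with the original candidates and orders the pooled set with a reward model. `expand_ranking`, `extra_generators`, and `reward_fn` are hypothetical placeholders, not the paper's actual pipeline.

```python
from typing import Callable, List, Tuple

def expand_ranking(
    prompt: str,
    ranked_responses: List[str],                   # existing preference ranking, best first
    extra_generators: List[Callable[[str], str]],  # hypothetical wrappers around e.g. Curie/Alpaca/ChatGPT
    reward_fn: Callable[[str, str], float],        # hypothetical reward model scoring (prompt, response)
) -> List[Tuple[str, float]]:
    """Pool responses from additional models with the original candidates and
    order the whole set by reward score (best first) to form a longer ranking."""
    candidates = list(ranked_responses) + [generate(prompt) for generate in extra_generators]
    scored = [(response, reward_fn(prompt, response)) for response in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy usage with stand-in generators and a placeholder reward score.
longer_ranking = expand_ranking(
    "How do I stay safe online?",
    ["Use strong, unique passwords and enable two-factor authentication."],
    [lambda p: "Keep your software updated.", lambda p: "Click every link you receive."],
    reward_fn=lambda p, r: float(len(r)),  # placeholder; a trained reward model would go here
)
```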
# 4.8.2 Can self-bootstrapping augmentation enhance performance?

We have demonstrated the effectiveness of incorporating responses from other LLMs to expand the ranking length, which significantly improves human preference. A natural question arises: can we further improve the model's performance by including responses from the LLM itself in the candidate list? This can be seen as a special approach to expanding preference ranking sequences.
From Table 6, we find that self-bootstrapping exhibits conflicting results. (The naive self-bootstrapping makes LLMs easily overfit RM_train; we accordingly regularize it by preventing the augmented candidate from taking the position of the original top-1 response and by re-ranking all rewards to keep the sequence in descending order.) On HH-RLHF_raw, self-bootstrapping shows an improvement in BLEU but a slight decrease in reward score. On HH-RLHF_{Alpaca,3}, both BLEU and reward score decrease.
However, on HH-RLHF_{ChatGPT,3}, self-bootstrapping improves the reward score while maintaining the BLEU value. We speculate that self-bootstrapping is effective only when the underlying language model is strong. Furthermore, although self-bootstrapping enhances performance on HH-RLHF_{ChatGPT,3}, it can be seen as extending the ranking sequence to 4, and the improvement may not be as significant as adding an additional high-quality response generated by ChatGPT. We also acknowledge that these relatively negative results may stem from training a 7B model with a reward model of size 1.4B. Expanding the model size might yield more exciting performance gains, similar to the scaling law of RLHF (Ouyang et al., 2022b; Gao et al., 2022), which we leave for future work.
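The self-bootstrapping regularization described above (keep the original top-1 response fixed and re-rank the rest by reward) can be sketched as follows. `self_bootstrap`, `policy_generate`, and `reward_fn` are hypothetical names, and the code is one plausible reading of the described procedure rather than the authors' implementation.

```python
from typing import Callable, List

def self_bootstrap(
    prompt: str,
    ranked_responses: List[str],             # current candidate ranking, best first
    policy_generate: Callable[[str], str],   # hypothetical: the LLM being trained samples its own response
    reward_fn: Callable[[str, str], float],  # hypothetical reward model
) -> List[str]:
    """Add a self-generated candidate while keeping the original top-1 pinned in
    first place; everything below it is re-sorted by reward so the ranking stays
    in descending order and the new candidate can never become the top-1."""
    new_response = policy_generate(prompt)
    top1, rest = ranked_responses[0], ranked_responses[1:] + [new_response]
    rest.sort(key=lambda r: reward_fn(prompt, r), reverse=True)
    return [top1] + rest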
# 5 Related Work
# 5.1 Reinforcement Learning from Human Feedback
Fine-tuning language models to align with human preferences has emerged as a critical research problem. It can be formulated as follows: given a context and corresponding suffixes ranked or scored by human annotators, without more detailed labels, the agent is required to learn human preferences and provide human-like results. Reinforcement Learning (RL) offers the most straightforward path to this goal, since the agent needs only a scarce supervision signal from reward models acting as human proxies and is updated through numerous trials under the RL framework, namely Reinforcement Learning from Human Feedback (RLHF). Many explorations have been done on this path (Christiano et al., 2017; MacGlashan et al., 2017; Warnell et al., 2018; Ziegler et al., 2019; Stiennon et al., 2020b; Nakano et al., 2021; Lee et al., 2021; Lei et al., 2022; Snell et al., 2022; Bai et al., 2022a; Ouyang et al., 2022a).
Lei et al. (2022) implement an online RL scenario by establishing a user simulator shared by training and evaluation, which at each session is initialized with different Gaussian vectors representing diverse personalities. In contrast, ILQL, released by Snell et al. (2022), is applicable to the offline setting. Stiennon et al. (2020b) and Nakano et al. (2021) investigate the RLHF method for text summarization and question answering, respectively. Bai et al. (2022a) apply RLHF to enable LLMs to become harmless and helpful, while releasing a new conversational dataset with human feedback. Known as a masterpiece, Ouyang et al. (2022a) propose InstructGPT, which is first fine-tuned in a supervised way and then continually modified under the PPO algorithm (Schulman et al., 2017). This process is cyclic, during which the performance of the trained agent spirals upwards. It is also applied to the famous ChatGPT by OpenAI.
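For context on the reward models these RLHF pipelines fit to human comparisons, the sketch below shows the standard Bradley-Terry pairwise objective referred to throughout this paper. It is a generic illustration with an assumed helper name `bradley_terry_loss`, not code released with this work.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the Bradley-Terry model:
    P(chosen preferred over rejected) = sigmoid(r_chosen - r_rejected)."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: scalar rewards for a batch of two human comparisons.
loss = bradley_terry_loss(torch.tensor([1.3, 0.2]), torch.tensor([0.4, 0.9]))
print(float(loss))
```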
# 5.2 Supervised Fine-tuning for Human Preference Alignment
Despite appealing advantages, RL-based methods have obvious limitations regarding training efficiency and complexity, consequently driving researchers to focus on Supervised Fine-tuning methods without these challenges. Liu et al. (2023) combine desirable and undesirable suffixes in a template prompted by opposite keywords, thus depending fully on the strong semantic understanding of large language models. Yuan et al. (2023) compose multiple pairwise comparisons between suffixes in the given ranking, which forms a new algorithm from the perspective of the training objective. Rafailov et al. (2023) similarly recast the LLM as a Bradley-Terry model to measure candidates chosen and rejected by human annotators. The proposed PRO chooses the path of modifying the SFT objective, but is further derived from the RLHF formulation and inherits its straightforwardness towards Human Preference Alignment. In particular, PRO transforms RL's indirect optimization into a direct one, and extends pairwise comparisons to multi-dimensional and multi-positional comparisons. Comprehensive experiments prove its excellence in human preference acquisition while maintaining the quality of generated texts.
# 6 Conclusion
In this paper, we derive from the Bradley-Terry comparison of the reward model in RLHF that human alignment can be modeled as aligning the probability ranking of n responses generated by the LLM with the preference ranking of these responses by humans. Based on this derivation, we propose PRO. PRO inherits the advantages of RLHF, and further captures the fine-grained distinctions corresponding to human preference from multiple one-to-many comparisons. We conduct extensive experiments to verify the excellence of PRO against other baselines and investigate the impact of multi-faceted factors. Overall, the findings presented in this paper demonstrate the significance of PRO in effectively and efficiently aligning LLMs to human preference. This work can serve as a stepping stone for further quantifiable explorations.
# Disclaimer
Since some services provided by OpenAI are currently not available in mainland China, data augmentation and inference from ChatGPT, as well as GPT-4 evaluation, are completed where the related services are available.
There exists sensitive and offensive content in HH-RLHF, which is intended for research purposes only. Viewpoints included in the data do not represent our attitudes. We hope our work can be used to make AI technologies comply with ethical requirements.
# References
Afra Feyza Akyürek, Ekin Akyürek, Aman Madaan, Ashwin Kalyan, Peter Clark, Derry Wijaya, and Niket Tandon. 2023. RL4F: Generating natural language feedback with reinforcement learning for repairing model outputs. arXiv preprint arXiv:2305.08844.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073.
Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020a. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020b. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335, Dublin, Ireland. Association for Computational Linguistics.
Leo Gao, John Schulman, and Jacob Hilton. 2022. Scaling laws for reward model overoptimization. arXiv preprint arXiv:2210.10760.
Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. 2023. Reward design with language models. arXiv preprint arXiv:2303.00001.
Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, et al. 2022. The BigScience ROOTS corpus: A 1.6TB composite multilingual dataset. Advances in Neural Information Processing Systems, 35:31809–31826.
Kimin Lee, Laura Smith, and Pieter Abbeel. 2021. PEBBLE: Feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training. arXiv preprint arXiv:2106.05091.
Wenqiang Lei, Yao Zhang, Feifan Song, Hongru Liang, Jiaxin Mao, Jiancheng Lv, Zhenglu Yang, and Tat-Seng Chua. 2022. Interacting with non-cooperative user: A new paradigm for proactive dialogue policy. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, pages 212–222, New York, NY, USA. Association for Computing Machinery.
Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023. API-Bank: A benchmark for tool-augmented LLMs. arXiv preprint arXiv:2304.08244.
Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. 2023. Chain of hindsight aligns language models with feedback. arXiv preprint arXiv:2302.02676.
James MacGlashan, Mark K. Ho, Robert Loftin, Bei Peng, Guan Wang, David L. Roberts, Matthew E. Taylor, and Michael L. Littman. 2017. Interactive learning from policy-dependent human feedback. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2285–2294. PMLR.
Niklas Muennighoff, Alexander M Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. 2023. Scaling data-constrained language models. arXiv preprint arXiv:2305.16264.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022a. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022b. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with GPT-4. arXiv preprint arXiv:2304.03277.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. 2022. Offline RL for natural language generation with implicit language Q learning. arXiv preprint arXiv:2206.11871.
Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. Advances in Neural Information Processing Systems, 29.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020a. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020b. Learning to summarize with human feedback. In Advances in Neural Information Processing Systems, volume 33, pages 3008–3021. Curran Associates, Inc.
2306.17492 | 69 | Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926. | 2306.17492#69 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17492 | 70 | Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-Instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560.
Garrett Warnell, Nicholas Waytowich, Vernon Lawhern, and Peter Stone. 2018. Deep TAMER: Interactive agent shaping in high-dimensional state spaces. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32. | 2306.17492#70 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17492 | 71 | Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, and Hannaneh Hajishirzi. 2023. Fine-grained human feedback gives better rewards for language model training. arXiv preprint arXiv:2306.01693.
Wanqi Xue, Bo An, Shuicheng Yan, and Zhongwen Xu. 2023. Reinforcement learning from diverse human preferences. arXiv preprint arXiv:2301.11774. | 2306.17492#71 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17492 | 72 | Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. RRHF: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302.
Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, and Joseph E Gonzalez. 2023. The wisdom of hindsight makes language models better instruction followers. arXiv preprint arXiv:2302.05206.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023. LIMA: Less is more for alignment. arXiv preprint arXiv:2305.11206. | 2306.17492#72 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.16803 | 0 | arXiv:2306.16803v2 [cs.LG] 31 Oct 2023
# Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
# Alexander Meulemans*1, Simon Schug*1, Seijin Kobayashi*1, Nathaniel D Daw2,3,4, Gregory Wayne2
1Department of Computer Science, ETH Zürich 2Google DeepMind 3Princeton Neuroscience Institute, Princeton University 4Department of Psychology, Princeton University {ameulema, sschug, seijink}@ethz.ch
# Abstract | 2306.16803#0 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.16636 | 1 | {weitianwen, luanjian, liuwei40, dongshuang1, wangbin11}@xiaomi.com
# Abstract
We present the Chinese Elementary School Math Word Problems (CMATH) dataset, comprising 1.7k elementary school-level math word problems with detailed annotations, sourced from actual Chinese workbooks and exams. This dataset aims to provide a benchmark tool for assessing the following question: to what grade level of elementary school math do the abilities of popular large language models (LLMs) correspond? We evaluate a variety of popular LLMs, including both commercial and open-source options, and discover that only GPT-4 achieves success (accuracy ≥ 60%) across all six elementary school grades, while other models falter at different grade levels. Furthermore, we assess the robustness of several top-performing LLMs by augmenting the original problems in the CMATH dataset with distracting information. Our findings reveal that GPT-4 is able to maintain robustness, while other models fail. We anticipate that our study will expose limitations in LLMs' arithmetic and reasoning capabilities, and promote their ongoing development and advancement.
# 1 Introduction | 2306.16636#1 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 1 | # Abstract
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA) [1], we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: "Would the agent still have reached this reward if it had taken another action?". We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.2
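To make the idea concrete, here is a minimal, illustrative sketch (not the paper's actual estimator) of how counterfactual contribution coefficients could be turned into a policy-gradient update for a tabular softmax policy. The `contribution` callable, the all-actions weighting, and the omission of immediate-reward terms are simplifying assumptions for illustration only.

```python
# Illustrative sketch only: a COCOA-style policy-gradient estimate in which every
# possible action in a visited state is credited with later rewards, weighted by a
# (hypothetical) learned contribution model answering "would this reward still have
# been reached had I taken this action?". Not the paper's exact algorithm.
import numpy as np

def cocoa_policy_gradient(trajectory, policy_logits, contribution, n_actions):
    """trajectory: list of (state, action, reward) tuples from one episode.
    policy_logits: dict mapping state -> np.array of action logits (softmax policy).
    contribution: assumed callable (state, counterfactual_action, reward_step) -> weight."""
    grads = {s: np.zeros(n_actions) for (s, _, _) in trajectory}
    for t, (s_t, _, _) in enumerate(trajectory):
        logits = policy_logits[s_t]
        pi = np.exp(logits - logits.max())
        pi /= pi.sum()
        for a in range(n_actions):
            # Credit the (possibly counterfactual) action a with every later reward r_k,
            # scaled by its estimated contribution towards reaching that reward.
            credit = sum(contribution(s_t, a, k) * r_k
                         for k, (_, _, r_k) in enumerate(trajectory) if k > t)
            # d pi(a|s) / d logits = pi[a] * (one_hot(a) - pi) for a softmax policy.
            grads[s_t] += credit * pi[a] * (np.eye(n_actions)[a] - pi)
    return grads
```

Intuitively, crediting only the action that was actually taken with the full downstream return recovers a REINFORCE-style update; the contribution model instead spreads credit according to the counterfactual query above, which is where the reduction in variance would come from.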
# Introduction | 2306.16803#1 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 1 | Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both | 2306.17107#1 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 2 | # 1 Introduction
Recently, the field of artificial intelligence has witnessed groundbreaking advancements, particularly in the development of large language models (LLMs). Pioneering models such as ChatGPT (Ouyang et al., 2022) along with (Taylor et al., 2022) have demonstrated impressive capabilities in understanding and generating natural language text across a multitude of tasks. The recently released GPT-4 (OpenAI, 2023; Bubeck et al., 2023) model exhibits a sweeping range of skills, arguably far exceeding those of its predecessors and contemporaries. Its superior capabilities have unlocked new potential for application, not only in commercial settings but also in various scientific domains.
Mathematics, a core scientific discipline, represents a key area where the potential of LLMs can be harnessed. The ability to process, understand, and solve mathematical problems is a highly desirable trait for these models. This mathematical competence can lead to a myriad of applications, from providing assistance in educational contexts to facilitating complex computations in various sectors. | 2306.16636#2 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 2 | # Introduction
Reinforcement learning (RL) faces two central challenges: exploration and credit assignment [2]. We need to explore to discover rewards and we need to reinforce the actions that are instrumental for obtaining these rewards. Here, we focus on the credit assignment problem and the intimately linked problem of estimating policy gradients. For long time horizons, obtaining the latter is notoriously difficult as it requires measuring how each action influences expected subsequent rewards. As the number of possible trajectories grows exponentially with time, future rewards come with a considerable variance stemming from stochasticity in the environment itself and from the stochasticity of interdependent future actions leading to vastly different returns [3–5].
Monte Carlo estimators such as REINFORCE [6] therefore suffer from high variance, even after variance reduction techniques like subtracting a baseline [6–9]. Similarly, in Temporal Difference methods such as Q-learning, this high variance in future rewards results in a high bias in the value estimates, requiring exponentially many updates to correct for it [5]. Thus, a common technique to
*Equal contribution; ordering determined by coin flip. 2Code available at https://github.com/seijin-kobayashi/cocoa
37th Conference on Neural Information Processing Systems (NeurIPS 2023). | 2306.16803#2 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 2 | an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/. | 2306.17107#2 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 3 | However, effectively evaluating the mathematical abilities of LLMs remains a non-trivial endeavor. Although several datasets have been developed for this purpose, they exhibit notable limitations. Firstly, most existing math-related datasets are in English (Cobbe et al., 2021; Amini et al., 2019; Hendrycks et al., 2021b), making them unsuitable for evaluating Chinese language models. Secondly, many of these datasets present problems that are excessively difficult, e.g. college-level maths (Hendrycks et al., 2021b,a), making them inappropriate for guiding the development of smaller language models. From our perspective, the most critical shortcoming is that the evaluation results derived from these datasets often lack intuitive clarity, making them challenging for the general public to comprehend. For instance, what does it truly mean when a model scores 35.7 on GSM8K (Cobbe et al., 2021)? How can we interpret this score in terms of the model's mathematical competency? | 2306.16636#3 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 3 | 37th Conference on Neural Information Processing Systems (NeurIPS 2023).
[Figure 1 graphic: (A) a schematic sample trajectory showing that both action a1 and action a2 allow reaching reward r3 from state s2, with the policy updated to increase the probability of actions that could have contributed to reaching the reward; (B) plots of the average fraction of treasure collected and the policy-gradient SNR (dB) against credit assignment distance for COCOA-reward, COCOA-feature, HCA+, HCA-return, Q-critic, Advantage, REINFORCE, and TrajCV. The full caption is given with Figure 1 below.] | 2306.16803#3 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 3 | # Introduction
Instruction tuning [1, 2] improves generalization to unseen tasks by formulating various tasks into instructions. Such open-ended question-answering capability fosters the recent chatbot boom since ChatGPT 2. Recently, visual instruction-tuned models [3–5] further augment conversation agents with visual encoders such as CLIP-ViT [6, 7], enabling human-agent interaction based on images. However, possibly due to the dominance of natural images in training data (e.g., Conceptual Captions [8] and COCO [9]), they struggle with understanding texts within images [10]. Yet textual understanding is integral to humans' daily visual perception.
Fortunately, recognizing texts from images is accessible based on OCR tools. One naive way to utilize this is adding recognized texts to the input of visual instruction-tuned models [11], which increases the computation (longer context lengths) without fully leveraging the encoding capability of visual encoders. To this end, we propose to enhance the visual instruction-tuned model end-to-end by collecting instruction-following data that requires an understanding of texts within images.
# *Collaborations through Adobe University Gift Program. 2https://openai.com/chatgpt
Preprint. Work in progress. | 2306.17107#3 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 4 | We posit that the evaluation of LLMs should mirror that of human learners, which would allow us to convey results in a manner that is more intuitive and accessible. In pursuit of this human-centric evaluation, we introduce in this work the Chinese Elementary School Math Word Problems (CMATH) dataset, consisting of 1.7k elementary school-level math word problems sourced from actual Chinese workbooks and exams. Each problem in CMATH is annotated with grade information, enabling us to provide fine-grained evaluations akin to "ChatGPT scored 70 out of 100 in a fourth-grade math exam".
On our CMATH dataset, we conduct evaluation for a variety of popular LLMs, accessible via commercial API or released model weights. We discover that GPT-4 is the only model that achieves success (accuracy ≥ 60%) across all six elementary school grades. We also examine the robustness of LLMs against distracting information. It turns out that GPT-4 is again the sole model that maintains robustness, while other models are easily misled by the presence of distracting information.
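For illustration only, the per-grade bookkeeping behind a statement such as "accuracy ≥ 60% across all six grades" could be computed as in the following sketch; the record format is an assumption, while the six grades and the 60% threshold come from the text.

```python
# Hypothetical sketch of the grade-level evaluation described above: each record is
# assumed to carry the problem's grade (1-6) and whether the model answered correctly.
from collections import defaultdict

def per_grade_accuracy(records):
    """records: iterable of dicts like {"grade": 3, "correct": True}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["grade"]] += 1
        hits[r["grade"]] += int(r["correct"])
    return {g: hits[g] / totals[g] for g in sorted(totals)}

def succeeds_at_all_grades(acc_by_grade, threshold=0.60):
    # Mirrors the success criterion used above: accuracy >= 60% at every grade.
    return all(acc >= threshold for acc in acc_by_grade.values())

if __name__ == "__main__":
    demo = [{"grade": g, "correct": i % 3 != 0} for g in range(1, 7) for i in range(10)]
    acc = per_grade_accuracy(demo)
    print(acc, succeeds_at_all_grades(acc))
```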
# 2 CMATH dataset
# 2.1 Motivation
This work is motivated by the following question:
To what grade level of elementary school math do the abilities of popular LLMs correspond? | 2306.16636#4 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 4 | Figure 1: Counterfactual Contribution Analysis enables long-term credit assignment. (A) Given a sample trajectory that eventually results in a rewarding outcome, we estimate the policy gradient by considering the contribution of actions along the trajectory towards arriving at a rewarding outcome. In this example, we measure how much more likely the rewarding outcome with reward r3 is when following action a1 versus the counterfactual actions a2 and a3 in state s2. This is quantified through the contribution coefficient w(s2, a1, r3) which is used to update all possible action probabilities of the policy π(a | s2). (B) In the linear key-to-door environment increasing the distance between picking up the key and opening the door that leads to reward necessitates credit assignment over increasing time spans. COCOA consistently achieves good performance (left) compared to HCA and baselines which deteriorate when increasing the distance between an action and the resulting rewarding outcome. This is reflected in a higher signal-to-noise ratio of the policy gradient estimator of COCOA compared to baselines (right).
reduce variance and bias is to discount rewards that are far away in time resulting in a biased estimator which ignores long-term dependencies [10–13]. Impressive results have nevertheless been achieved in complex environments [14–16] at the cost of requiring billions of environment interactions, making these approaches sample inefficient. | 2306.16803#4 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 4 | # *Collaborations through Adobe University Gift Program. 2https://openai.com/chatgpt
Preprint. Work in progress.
[Figure 1 graphic] OCR 1: "Peep- Through Book Lets Go Under the Seal Petr Horacek"; OCR 2: "A Peep-Through Book Let's Go Under the Sea! Petr Horacek"; Image captioning: "a picture of a yellow submarine with a boy in it". Text-only GPT-4 output: Question: What is the name of the book mentioned in the image? Answer: The name of the book mentioned in the image is "Let's Go Under the Sea" by Petr Horacek. Question: What type of book is it? Answer: It is a "Peep Through Book," which usually features die-cut holes to encourage interaction and exploration, making the reading experience more engaging for children. Question: Can you describe the scene shown in the image? Answer: The scene shown in the image features a yellow submarine with a boy in it, which correlates with the theme of the book, "Let's Go Under the Sea."
Figure 1: The process of collecting high-quality instruction-following data. | 2306.17107#4 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 5 | # 2.1 Motivation
This work is motivated by the following question:
To what grade level of elementary school math do the abilities of popular LLMs correspond?
We create the CMATH dataset in order to answer this question. We believe that the evaluation results of LLMs should be presented in an intuitive manner, making them easily understandable for the general public.
We are particularly interested in elementary school level math word problems, as these problems, compared to high school or college level counterparts, provide a more appropriate evaluation of LLMs' general-purpose reasoning and arithmetic capabilities. Elementary school math problems are more fundamental and, as a result, the skills required for solving them are more transferable to other domains. By assessing LLMs on these problems, we can gain valuable insights into their ability to generalize and adapt to new tasks. Furthermore, the relatively simple nature of elementary school problems enhances their interpretability. It becomes easier to comprehend why an LLM succeeds or fails at solving these basic problems, allowing for a more transparent analysis of the underlying reasoning processes.
# 2.2 Data Collection
We collect the math word problems from Chinese elementary school exercise books and exams that are freely available on the internet. | 2306.16636#5 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 5 | Especially in settings where obtaining such large quantities of data is costly or simply not possible, model-based RL that aims to simulate the dynamics of the environment is a promising alternative. While learning such a world model is a difficult problem by itself, when successful it can be used to generate a large quantity of synthetic environment interactions. Typically, this synthetic data is combined with model-free methods to improve the action policy [17–19]. A notable exception to simply using world models to generate more data are the Stochastic Value Gradient method [20] and the closely related Dreamer algorithms [21–23]. These methods perform credit assignment by backpropagating policy gradients through the world model. Crucially, this approach only works for environments with a continuous state-action space, as otherwise sensitivities of the value with respect to past actions are undefined [20, 22, 24]. Intuitively, we cannot compute sensitivities of discrete choices such as a yes / no decision as the agent cannot decide "yes" a little bit more or less. | 2306.16803#5 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 5 | Figure 1: The process of collecting high-quality instruction-following data.
Specifically, we first collect 422K noisy instruction-following data using text-rich3 images by combining manually written instructions (e.g., "Identify any text visible in the image provided.") and the OCR results. Such large-scale noisy-aligned data effectively improve the feature alignment between the visual features and the language decoder. Furthermore, we prompt text-only GPT-4 [12] with OCR results and image captions to generate 16K conversations, where each conversation can be multiple turns of question&answer pairs, as high-quality instruction-following examples. This process requires GPT-4 to denoise the OCR results and develop specific questions to create complex instructions based on the input (Figure 1). | 2306.17107#5 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 6 | # 2.2 Data Collection
We collect the math word problems from Chinese elementary school exercise books and exams that are freely available on the internet.
grade | size | length | steps | digits
1 | 254 | 33.6 | 1.3 | 1.9
2 | 353 | 35.5 | 1.6 | 2.0
3 | 337 | 42.1 | 1.9 | 2.8
4 | 220 | 47.0 | 2.1 | 3.3
5 | 227 | 48.9 | 2.7 | 2.7
6 | 298 | 52.5 | 3.0 | 3.2
Table 1: Statistics of the CMATH dataset. The column titled "length" denotes the average problem length in terms of the number of characters. The column titled "steps" denotes the average reasoning steps required to solve the problem. The column titled "digits" stands for the average number of digits involved in the problem solution.
The original data comes in as either PDF or Microsoft Word format, which is subsequently converted, preferably automatically, otherwise manually by human annotators, into pure text. As we are only interested in text-based math word problems, we discard all problems originally equipped with graphic content. All questions also go through a standard data preprocessing pipeline, including deduplication and cleaning. Following this, the questions undergo several rounds of human validation by the authors.
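As an illustration of the deduplication step, the sketch below removes near-duplicate problem statements by comparing whitespace- and width-normalized text; the normalization rules are our own assumptions rather than the authors' released pipeline.

```python
import re
import unicodedata

def normalize(problem: str) -> str:
    """Canonicalize a problem statement so trivially different copies collide."""
    text = unicodedata.normalize("NFKC", problem)  # unify full-width/half-width characters
    return re.sub(r"\s+", "", text)                # drop all whitespace

def deduplicate(problems: list[str]) -> list[str]:
    """Keep only the first occurrence of each normalized problem statement."""
    seen, kept = set(), []
    for p in problems:
        key = normalize(p)
        if key not in seen:
            seen.add(key)
            kept.append(p)
    return kept

# Toy usage: the second (whitespace-only) variant is removed.
print(len(deduplicate(["toy problem 1 + 2 = ?", "toy  problem 1 + 2 = ?"])))  # 1
```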
# 2.3 Data annotation | 2306.16636#6 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 6 | Building upon Hindsight Credit Assignment (HCA) [1], we develop Counterfactual Contribution Analysis (COCOA), a family of algorithms that use models for credit assignment compatible with discrete actions. We measure the contribution of an action upon subsequent rewards by asking a counterfactual question: "would the agent still have achieved the rewarding outcome, if it had taken another action?" (c.f. Fig. 1A). We show that measuring contributions towards achieving a future state, as is proposed in HCA, leads to spurious contributions that do not reflect a contribution towards a reward. This causes HCA to degrade towards the high-variance REINFORCE method in most environments. Instead, we propose to measure contributions directly on rewarding outcomes and we develop various new ways of learning these contributions from observations. The resulting algorithm differs from value-based methods in that it measures the contribution of an action to individual rewards, instead of estimating the full expected sum of rewards. This crucial difference allows our contribution analysis to disentangle different tasks and ignore uncontrollable environment influences, leading to a gradient estimator capable of long-term credit assignment | 2306.16803#6 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 6 | To evaluate the effectiveness of collected data, we use noisy and high-quality examples to augment the pretraining and finetuning stages of LLaVA accordingly. We name our model LLaVAR, signifying the LLaVA (Large Language and Vision Assistant) that can Read. Compared to the original LLaVA, we also experiment with scaling the input resolution from 224^2 to 336^2 to encode small textual details better. Empirically, we report the results on four text-based VQA datasets following the evaluation protocol from [10] together with the finetuning results on ScienceQA. Moreover, we apply GPT-4-based instruction-following evaluation on 30 natural images from COCO [9, 3] and 50 text-rich images from LAION [13]. Furthermore, we also provide the qualitative analysis (e.g., on posters, website screenshots, and tweets) to test more complex instruction-following skills. To sum up, our contributions are:
⢠We collect 422K noisy instruction-following data and 16K high-quality instruction-following data. Both are shown to be effective in augmenting visual instruction tuning.
⢠Our model, LLaVAR, signiï¬cantly enhances text understanding within images while slightly improving the modelâs performance on natural images. | 2306.17107#6 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 7 | # 2.3 Data annotation
We provide annotations for the collected problems, including grade, answer, number of reasoning steps and number of digits. Examples can be found in Table 1.
# 2.3.1 Grade
We annotate the elementary school grade to which each collected math word problem belongs. This information can be used to create subsets of problems specific to a particular grade, enabling more targeted, fine-grained evaluation results.
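For example, grade-specific subsets can be built by filtering on this annotation; the field names below are hypothetical and only illustrate the intended use.

```python
def grade_subset(problems: list[dict], grade: int) -> list[dict]:
    """Select all problems annotated with the given elementary school grade."""
    return [p for p in problems if p["grade"] == grade]

# Toy usage with hypothetical annotated records.
annotated = [
    {"question": "...", "answer": "5", "grade": 1, "steps": 1, "digits": 1},
    {"question": "...", "answer": "43", "grade": 2, "steps": 2, "digits": 2},
]
print(len(grade_subset(annotated, grade=1)))  # 1
```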
# 2.3.2 Answer
We annotate the ground truth answer for each problem. Annotated answers are standalone numerals that fall into one of the following categories: integer, decimal number, fraction, or percentage. We do not provide the reasoning process leading to the answer, as our dataset is intended for test-only purposes. | 2306.16636#7 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 7 | allows our contribution analysis to disentangle different tasks and ignore uncontrollable environment influences, leading to a gradient estimator capable of long-term credit assignment (c.f. Fig. 1B). We introduce a new method for analyzing policy gradient estimators which uses dynamic programming to allow comparing to ground-truth policy gradients. We leverage this to perform a detailed bias-variance analysis of all proposed methods and baselines showing that our new model-based credit assignment algorithms achieve low variance and bias, translating into improved performance (c.f. Fig. 1C). | 2306.16803#7 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 7 | • Our model, LLaVAR, significantly enhances text understanding within images while slightly improving the model's performance on natural images.
⢠The enhanced capability enables our model to provide end-to-end interactions based on various forms of online content that combine text and images.
We open-source the training and evaluation data together with the model checkpoints.
# 2 Related Work
Instruction Tuning Following natural language instructions is the key capability for an agent to interact with real-world users. Instruction tuning starts from collecting human-preferred feedback for human written instructions [1] or formulating multi-task training in a multi-task instruction-following manner [2, 14]. However, large, capable instruction-tuned models are usually closed-source and serve
3In this work, we use the phrase "text-rich images" to describe images that have text in them. In contrast, we refer to images without text as "natural images."
2 | 2306.17107#7 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 8 | [Figure 1 table; the Chinese problem statements are garbled in extraction, so only the English translations and recoverable annotations are kept.]
Grade 1: There are 9 cakes in the store. If 4 are sold, how many are left? (Answer: 5, Steps: 1, Digits: 1)
Grade 2: [...] 15 more got on at the next stop, and 4 got off. How many people are on the bus now? (Answer: 43, Steps: 2, Digits: 2)
Grade 3: A pair of gloves costs 12.4 yuan and a hat costs 35.7 yuan in the store. How much does it cost to buy a pair of gloves and a hat together? (Answer: 48.1, Steps: 1, Digits: 3)
Grade 4: A box contains crickets with 6 legs and spiders with 8 legs. Together they have 66 legs. How many crickets are in the box? (annotations garbled)
Grade 5: A bamboo pole is inserted into the water, with 5/14 meters of it submerged, [row continues in the next chunk] | 2306.16636#8 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 8 | 2
# 2 Background and notation
We consider an undiscounted Markov decision process (MDP) defined as the tuple (S, A, p, pr), with S the state space, A the action space, p(St+1 | St, At) the state-transition distribution and pr(R | S, A) the reward distribution with bounded reward values r. We use capital letters for random variables and lowercase letters for the values they take. The policy π(A | S), parameterized by θ, denotes the probability of taking action A at state S. We consider an undiscounted infinite-horizon setting with a zero-reward absorbing state s∞ that the agent eventually reaches: lim_{t→∞} p(St = s∞) = 1. Both the discounted and episodic RL settings are special cases of this setting (c.f. App. B), and hence all theoretical results proposed in this work can be readily applied to both (c.f. App. J). | 2306.16803#8 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 8 | [Figure 2 legend: Quote & Meme, Game Cover, Educational Material, Poster, Ad & Product Packaging, Logo, Book Cover, Infographic, Other]
Figure 2: CLIP-based categorization of our collected images. The left one refers to images used to collect noisy data, and the right one refers to images used in GPT-4 prompting. Both pie charts are based on 10K sampled images from the corresponding datasets.
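The CLIP-based categorization behind Figure 2 can be approximated with zero-shot classification; the checkpoint name and the prompt template below are assumptions for illustration, as the chunk does not specify them.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

CATEGORIES = ["quote & meme", "game cover", "educational material", "poster",
              "ad & product packaging", "logo", "book cover", "infographic", "other"]

# Assumed checkpoint; the paper does not state which CLIP variant was used.
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def categorize(image_path: str) -> str:
    """Zero-shot: score the image against one text prompt per category."""
    image = Image.open(image_path).convert("RGB")
    prompts = [f"a photo of a {c}" for c in CATEGORIES]  # assumed prompt template
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image        # shape (1, n_categories)
    return CATEGORIES[int(logits.softmax(dim=-1).argmax())]
```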
as commercial APIs only. Recently, Alpaca [15, 16], Vicuna [17], and Baize [18] start the trend of generating high-quality instruction-following data based on LLMs such as GPT-3.5/ChatGPT/GPT-4 and finetuning the open-sourced LLaMA model [19]. However, the evaluation of instruction-following capability remains challenging. While GPT-4 has demonstrated superior evaluation capabilities [20], it still has apparent drawbacks, including biases toward response length [18] and lack of robustness to the order of examples [21]. Following [17, 3, 22], we use GPT-4-based instruction-following evaluation in this work. | 2306.17107#8 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 9 | [Figure 1 table, continued; the Chinese problem statements are garbled in extraction.]
Grade 5: A bamboo pole is inserted into the water, with 5/14 meters of it submerged, 1/14 meters in the mud, and 3/14 meters exposed above the water surface. How long is the bamboo pole in total? (annotations garbled)
Grade 6: Teacher Zhang deposits 4500 yuan in the bank at an annual interest rate of 2.25%. After deducting 20% interest tax, how much will he get back in one year? (Answer: 4581, Steps: 4, Digits: 4) | 2306.16636#9 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 9 | We use T(s, π) and T(s, a, π) as the distribution over trajectories T = (S_t, A_t, R_t)_{t≥0} starting from S_0 = s and (S_0, A_0) = (s, a) respectively, and define the return Z_t = Σ_{t'≥t} R_{t'}. The value function V^π(s) = E_{T∼T(s,π)}[Z_0] and action value function Q^π(s,a) = E_{T∼T(s,a,π)}[Z_0] are the expected return when starting from state s, or state s and action a respectively. Note that these infinite sums have finite values due to the absorbing zero-reward state (c.f. App. B). The objective of reinforcement learning is to maximize the expected return V^π(s_0), where we assume the agent starts from a fixed state s_0. Policy gradient algorithms optimize V^π(s_0) by repeatedly estimating its gradient ∇_θ V^π(s_0) w.r.t. the policy parameters. REINFORCE [6] (c.f. Tab. 1) is the canonical policy gradient estimator, however, it has a high variance resulting in poor parameter updates. Common techniques to reduce the variance are (i) subtracting a baseline, typically | 2306.16803#9 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 9 | Multimodal Instruction Tuning Recently, instruction tuning has been expanded to the multimodal setting, including image, video [23, 24], and audio [25, 26]. In particular, for image-based instruction tuning, MiniGPT4 [27] employs ChatGPT to curate and improve the detailed captions for high-quality instruction-following data. LLaVA [3] generates multimodal instruction-following data by prompting text-only GPT-4 with captions and object's bounding boxes. LLaMA-Adapter [28, 11] uses COCO data for text-image feature alignment and utilizes textual data only for instruction tuning. mPLUG-owl [29] combines more than 1000M image-text pairs for pretraining and a 400K mixture of text-only/multimodal instruction-following data for fine-tuning. However, according to [10], most of these models struggle with accomplishing tasks that require OCR capability. InstructBLIP [30] transforms 13 vision-language tasks (including OCR-VQA [31]) into the instruction-following format for instruction tuning. Cream [32] applies multi-task learning that includes predicting | 2306.17107#9 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 10 | Figure 1: Sample problems along with their English translations (not part of the dataset) and human annotations. The column titles "#Steps" and "#Digits" stand for "number of reasoning steps" and "number of digits" respectively.
# 2.3.3 Number of Reasoning Steps
For each problem, we manually annotate the number of reasoning steps required to solve it. This quantity is straightforward for the majority of problems, where human annotators can easily reach consensus (e.g., examples in Table 1 except for the one from grade 4). We acknowledge that, in a few cases, the number of steps may vary depending on the specific solution one considers (as with the problem of grade 4 in Table 1). However, this ambiguity should not pose a serious issue, as it only accounts for a small fraction of problems. We use the number of reasoning steps as a suitable proxy for a problem's reasoning complexity, which relates to the level of logical analysis and problem-solving strategies needed for an LLM to arrive at the correct solution. Generally, more reasoning steps correspond to a more intricate thought process and potentially more opportunities for an LLM to make errors or lose track of the problem's structure.
# 2.3.4 Number of Digits | 2306.16636#10 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 10 | however, it has a high variance resulting in poor parameter updates. Common techniques to reduce the variance are (i) subtracting a baseline, typically a value estimate, from the sum of future rewards [2, 25] (c.f. "Advantage" in Tab. 1); (ii) replacing the sum of future rewards with a learned action value function Q [2, 3, 25, 26] (c.f. "Q-critic" in Tab. 1); and (iii) using temporal discounting. Note that instead of using a discounted formulation of MDPs, we treat the discount factor as a variance reduction technique in the undiscounted problem [10, 11, 13] as this more accurately reflects its practical use [4, 27]. Rearranging the summations of REINFORCE with discounting lets us interpret temporal discounting as a credit assignment heuristic, where for each reward, past actions are reinforced proportional to their proximity in time. ∇̂_θ^{REINFORCE,γ} V^π(s_0) = | 2306.16803#10 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.16636 | 11 | Each math word problem is associated with several numbers in the problem statement. For a given problem P, we denote the set of associated numbers by N. LLMs are expected to perform a number of arithmetic operations on N to derive the final numerical answer a. As a rough measure of the arithmetic complexity of P, we consider
D = max { len(x) : x ∈ N ∪ {a} },   (1)
where len(x) returns the number of digits1 in the string representation of x. In the following sections, we simply refer to D as the number of digits of P. This quantity is a practical and easily quantifiable measure of the computational demands placed on an LLM when tackling a problem.
We developed a simple program to automatically compute D for each problem.
1Only digits 0–9 are counted. Other symbols, such as decimal points, percentage symbols, and slashes, are not taken into account.
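A minimal sketch of such a program is shown below; the regular expression is our own approximation of how numerals (integers, decimals, fractions, percentages) might be extracted, not the authors' exact implementation.

```python
import re

# Match percentages, fractions, decimals and integers appearing in a problem statement.
NUMERAL = re.compile(r"\d+(?:\.\d+)?%|\d+/\d+|\d+(?:\.\d+)?")

def num_digits(problem: str, answer: str) -> int:
    """D = max count of digit characters (0-9 only) over all numerals in the problem and its answer."""
    numerals = NUMERAL.findall(problem) + [answer]
    return max(sum(ch.isdigit() for ch in n) for n in numerals)

# Grade-6 example from Figure 1: numerals 4500, 2.25%, 20% and answer 4581 give D = 4.
problem = ("Teacher Zhang deposits 4500 yuan in the bank at an annual interest rate of 2.25%. "
           "After deducting 20% interest tax, how much will he get back in one year?")
print(num_digits(problem, "4581"))  # 4
```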
# 3 Experimental Setup
Model | Parameters | Access
GPT-4 | - | API
ChatGPT | - | API
Chinese-Alpaca | 33B/13B | Weights
Moss | 16B | Weights
Ziya-LLaMA-13B | 13B | Weights
Baichuan-7B | 7B | Weights
RWKV-7B | 7B | Weights
ChatGLM-6B | 6B | Weights
Table 2: Language models evaluated in this work.
# 3.1 Models | 2306.16636#11 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 11 | ∇̂_θ^{REINFORCE,γ} V^π(s_0) = Σ_{t≥0} R_t Σ_{k≤t} γ^{t−k} ∇_θ log π(A_k | S_k),   γ ∈ [0, 1].   (1)
Crucially, long-term dependencies between actions and rewards are exponentially suppressed, thereby reducing variance at the cost of disabling long-term credit assignment [4, 28]. The aim of this work is to replace the heuristic of time discounting by principled contribution coefficients quantifying how much an action contributed towards achieving a reward, and thereby introducing new policy gradient estimators with reduced variance, without jeopardizing long-term credit assignment.
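Eq. (1) can be written as a surrogate loss whose gradient matches the estimator. The sketch below assumes a single sampled trajectory with per-step log-probabilities that carry gradients; it only illustrates the time-discounted credit-assignment weighting and is not the authors' code.

```python
import torch

def discounted_reinforce_loss(log_probs: torch.Tensor, rewards: torch.Tensor, gamma: float) -> torch.Tensor:
    """Surrogate loss: minus sum_t R_t * sum_{k<=t} gamma^(t-k) * log pi(A_k | S_k).

    Backpropagating through it reproduces the Eq. (1) estimator (rewards are treated as constants).
    """
    T = rewards.shape[0]
    loss = log_probs.new_zeros(())
    for t in range(T):
        # Each reward reinforces past actions proportionally to their proximity in time.
        weights = gamma ** torch.arange(t, -1, -1, dtype=log_probs.dtype)  # gamma^(t-k) for k = 0..t
        loss = loss - rewards[t] * (weights * log_probs[: t + 1]).sum()
    return loss

# Toy usage: 3 steps, log-probabilities that require grad, gamma = 0.9.
log_probs = torch.log(torch.tensor([0.5, 0.4, 0.7], requires_grad=True))
loss = discounted_reinforce_loss(log_probs, torch.tensor([0.0, 0.0, 1.0]), gamma=0.9)
loss.backward()
```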
HCA [1] makes an important step in this direction by introducing a new gradient estimator:
∇̂_θ^{HCA} V^π(s_0) = Σ_{t≥0} Σ_{a∈A} ∇_θ π(a | S_t) ( r(S_t, a) + Σ_{k≥1} [ p^π(A_t = a | S_t, S′ = S_{t+k}) / π(a | S_t) ] R_{t+k} )   (2) | 2306.16803#11 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 11 | # 3 Data Collection
Starting from the LAION-5B [13] dataset 4, our goal is only to keep images that are text-rich. Considering documents usually contain plenty of text, we first obtained a binary classification dataset by combining natural images and document data. Subsequently, we trained an image classifier using a DiT [33] base backbone, which was fine-tuned on the RVL-CDIP dataset [34]. Hopefully, such a classifier can predict whether an image contains text or not. We first build a subset by selecting images with a predicted probability greater than 0.8 while also satisfying p(watermark) < 0.8 and p(unsafe) < 0.5 5. The derived subset is noisy due to the limitation of the classifier. To further
4https://huggingface.co/datasets/laion/laion-high-resolution 5Both probabilities are from the LAION dataset's metadata.
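The probability-based filtering can be sketched as below; the metadata field names are assumptions for illustration (the text-richness score comes from the authors' fine-tuned DiT classifier, which is not reproduced here).

```python
def keep_image(p_text_rich: float, p_watermark: float, p_unsafe: float) -> bool:
    """Filtering rule from the text: text-rich score > 0.8, watermark < 0.8, unsafe < 0.5."""
    return p_text_rich > 0.8 and p_watermark < 0.8 and p_unsafe < 0.5

def filter_metadata(rows):
    """rows: iterable of dicts with (assumed) keys 'p_text_rich', 'pwatermark', 'punsafe'."""
    for row in rows:
        if keep_image(row["p_text_rich"], row["pwatermark"], row["punsafe"]):
            yield row

# Toy usage with hypothetical metadata rows: only the first one passes.
rows = [{"p_text_rich": 0.93, "pwatermark": 0.1, "punsafe": 0.0},
        {"p_text_rich": 0.55, "pwatermark": 0.1, "punsafe": 0.0}]
print(len(list(filter_metadata(rows))))  # 1
```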
[Figure 3 diagram residue: the pretraining and finetuning pipelines map image tokens <img_1>...<img_m> and instruction tokens <ins_1>...<ins_n> through the visual encoder V, the projection W, and the language decoder D to produce response tokens <res_1>...<res_k>.] | 2306.17107#11 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 12 | Table 2: Language models evaluated in this work.
# 3.1 Models
We consider a variety of popular LLMs that are able to process text in Chinese and are fine-tuned to be general-purpose task solvers. Those LLMs, being developed by diverse organizations, vary in size and can be accessed via either API or model weights, as summarized in Table 2.
⢠GPT-4 (OpenAI, 2023) is OpenAIâs newest generation of LLM. It is arguably the most powerful LLM as of the time of writing this manuscript (June 2023) and is considered as the first artificial general intelligence (AGI Bubeck et al. (2023)). However, the technical details are not disclosed.
⢠ChatGPT is the predecessor of GPT4. It is based on InstructGPT (Ouyang et al., 2022), which has undergone instruction tuning and reinforcement learning from human feedback. The version of ChatGPT evaluated in this work is identified as âgpt-3.5-turboâ in OpenAIâs API. The technical details of this model are not disclosed. | 2306.16636#12 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 12 | with r(s, a) a reward model, and the hindsight ratio p^π(A_t = a | S_t = s, S′ = S_{t+k}) / π(a | S_t) measuring how important action a was to reach the state S′ at some point in the future. Although the hindsight ratio delivers precise credit assignment w.r.t. reaching future states, it has a failure mode of practical importance, creating the need for an updated theory of model-based credit assignment which we will detail in the next section.
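For intuition, the one-step case of the hindsight ratio can be computed in closed form via Bayes' rule, p^π(a | s, s′) = π(a | s) p(s′ | s, a) / Σ_{a′} π(a′ | s) p(s′ | s, a′); the toy MDP below is our own illustration, not an example from the paper.

```python
import numpy as np

# Toy: one state s with two actions; p[a, s_next] gives transition probabilities to two successors.
pi = np.array([0.5, 0.5])                 # pi(a | s)
p = np.array([[0.9, 0.1],                 # action 0 mostly reaches s' = 0
              [0.1, 0.9]])                # action 1 mostly reaches s' = 1

def hindsight_ratio(a: int, s_next: int) -> float:
    """p^pi(A = a | s, S' = s_next) / pi(a | s), one-step case via Bayes' rule."""
    posterior = pi[a] * p[a, s_next] / np.sum(pi * p[:, s_next])
    return posterior / pi[a]

# Action 0 gets most of the credit for reaching s' = 0, action 1 very little.
print(hindsight_ratio(0, 0), hindsight_ratio(1, 0))  # approximately 1.8 and 0.2
```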
# 3 Counterfactual Contribution Analysis
To formalize the "contribution" of an action upon subsequent rewards, we generalize the theory of HCA [1] to measure contributions on rewarding outcomes instead of states. We introduce unbiased policy gradient estimators that use these contribution measures, and show that HCA suffers from high variance, making the generalization towards rewarding outcomes crucial for obtaining low-variance estimators. Finally, we show how we can estimate contributions using observational data.
# 3.1 Counterfactual contribution coefficients
To assess the contribution of actions towards rewarding outcomes, we propose to use counterfactual reasoning: "how does taking action a influence the probability of obtaining a rewarding outcome, compared to taking alternative actions a′?".
# Table 1: Comparison of policy gradient estimators. | 2306.16803#12 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 12 | Figure 3: The model training process for visual encoder V, projection matrix W, and language decoder D. Blue blocks denote frozen modules, and yellow blocks denote trainable modules. The training input is image tokens (<img>) and instruction tokens (<ins>), while the target is response tokens (<res>).
Data | Image | Instruction | # Conv | Avg Ins Len | Avg Res Len
LLaVA pretraining | CC3M | CC3M | 595K | 15.9 | 15.4
Rpretraining | LAION | PaddleOCR | 422K | 17.2 | 48.8
LLaVA finetuning | COCO | GPT-4 | 158K | 15.9 | 93.1
Rfinetuning | LAION | GPT-4 | 16K | 15.1 | 40.5
Table 1: Summary of data statistics. Rpretraining and Rfinetuning denote the extra pretraining/finetuning data we collected. Average instruction and response length are calculated after LLaMA tokenization. | 2306.17107#12 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 13 | • MOSS (Sun and Qiu, 2023) is an open-source LLM with 16B parameters based on CodeGen (Nijkamp et al., 2023). It is further pre-trained on 100B Chinese tokens and 20B English tokens, then fine-tuned on 110M multi-turn conversational data.
• Ziya-LLaMA-13B (IDEA-CCNL, 2023) is based on LLaMA-13B, where the original vocabulary is extended with 7k Chinese characters, and the checkpoint is further pretrained on 110B tokens of Chinese text. After the continual pre-training, Ziya-LLaMA-13B has also undergone RLHF training as in (Ouyang et al., 2022).
• Chinese-Alpaca (Cui et al., 2023) is also based on the LLaMA series, extended with Chinese vocabulary. The model has undergone supervised instruction tuning on Chinese datasets with the Low-Rank Adaptation (LoRA) technique (Hu et al., 2021). In this work we evaluate the 13B and 33B versions of Chinese-Alpaca. | 2306.16636#13 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintains robustness, while other model fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 13 | # Table 1: Comparison of policy gradient estimators.
Method | Policy gradient estimator ($\nabla_\theta V^\pi(s_0)$)
REINFORCE | $\sum_{t \ge 0} \nabla_\theta \log \pi(A_t \mid S_t) \sum_{k \ge 0} R_{t+k}$
Advantage | $\sum_{t \ge 0} \nabla_\theta \log \pi(A_t \mid S_t) \big(\sum_{k \ge 0} R_{t+k} - V(S_t)\big)$
Q-critic | $\sum_{t \ge 0} \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t) Q(S_t, a)$
HCA-Return | $\sum_{t \ge 0} \nabla_\theta \log \pi(A_t \mid S_t) \big(1 - \frac{\pi(A_t \mid S_t)}{p^\pi(A_t \mid S_t, Z_t)}\big) Z_t$
TrajCV | $\sum_{t \ge 0} \nabla_\theta \log \pi(A_t \mid S_t) \big(Z_t - Q(S_t, A_t) - \sum_{t' > t} (Q(S_{t'}, A_{t'}) - V(S_{t'}))\big) + \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t) Q(S_t, a)$
COCOA | $\sum_{t \ge 0} \nabla_\theta \log \pi(A_t \mid S_t) R_t + \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t) \sum_{k \ge 1} w(S_t, a, U_{t+k}) R_{t+k}$
HCA+ | $\sum_{t \ge 0} \nabla_\theta \log \pi(A_t \mid S_t) R_t + \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t) \sum_{k \ge 1} w(S_t, a, S_{t+k}) R_{t+k}$
Definition 1 (Rewarding outcome). A rewarding outcome $U' \sim p(U' \mid s', a', r')$ is a probabilistic encoding of the state-action-reward triplet. | 2306.16803#13 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 13 | clean up the data and incorporate human judgment, we randomly sampled 50K images and clustered them into 100 clusters based on CLIP-ViT-B/32 visual features (see the illustrative clustering sketch after this record). After inspecting the clustering results, we carefully select 14 clusters (see Figure 8 in Appendix for examples) containing diverse text-rich images ranging from posters, covers, advertisements, infographics, educational materials, and logos. As a reference, we provide a CLIP [7]-based categorization (see Appendix A for details) to illustrate the distribution of used images for the two types of data we collected in Figure 2. We also summarize and compare our collected data with LLaVA's data in Table 1. | 2306.17107#13 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
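The CLIP-based clustering step described in chunk 2306.17107#13 above can be made concrete with a short script. This is a minimal sketch, not LLaVAR's released code: it assumes the Hugging Face transformers CLIP ViT-B/32 checkpoint and scikit-learn's KMeans, and the file list, batching, and manual cluster inspection are placeholders.

```python
import torch
from PIL import Image
from sklearn.cluster import KMeans
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint name for CLIP ViT-B/32 on the Hugging Face hub.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def image_features(paths: list[str]) -> torch.Tensor:
    """Compute L2-normalized CLIP image embeddings for a (small) list of images."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def cluster_images(paths: list[str], n_clusters: int = 100):
    """Cluster images into n_clusters groups based on CLIP visual features."""
    feats = image_features(paths).cpu().numpy()
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    return kmeans.labels_
```

In practice the resulting clusters would be inspected by hand and only the text-rich ones (posters, covers, advertisements, etc.) kept, as the chunk describes.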
2306.16636 | 14 | • RWKV-7B (Peng et al., 2023) is an RNN-Transformer hybrid model with 7B parameters. The model is pre-trained on both English and Chinese texts, and is fine-tuned on open-source instruction-tuning datasets. More information can be found in (Peng, 2023).
• Baichuan-7B (Baichuan Inc., 2023) is a LLaMA-like LLM pre-trained from scratch on 1.2T Chinese and English tokens. Although it is merely a foundation model, in preliminary experiments we find that it is able to solve math word problems in a zero-shot manner. Therefore, we also evaluate its performance in this work.
• ChatGLM-6B (THUDM, 2023a) and its successor ChatGLM2-6B (THUDM, 2023b) feature a modified encoder-decoder transformer architecture (Du et al., 2022; Zeng et al., 2022). The two models are pre-trained on English and Chinese data and undergo supervised instruction tuning.
# 3.2 Evaluation Procedure
# 3.2.1 Zero-shot Evaluation | 2306.16636#14 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.17107 | 14 | Noisy Instruction-following Data: Using the clustering model as the classifier, we collect 422K images that belong to the 14 preferred clusters. To balance the examples from different categories, we keep at most 52K examples for one cluster. We run all images through PaddleOCR (footnote 6). Note that running OCR on the original resolution (e.g., 1024^2) might recognize small fonts that are not visible to visual encoders like CLIP ViT [6, 7] (up to 336^2). To ensure the recognition of visible fonts while maintaining OCR accuracy, we perform OCR on the resized image (the short edge is resized to 384 pixels) to extract the text. Then, based on the geometric relationships between the recognized words, we apply specific rules (footnote 7) to merge the words and obtain a text paragraph. As a robust instruction-following model should react similarly to instructions with similar meanings, we reword "Identify any text visible in the image provided." into ten distinct instructions (Table 7 in Appendix). We then create a single-turn conversation for a given image by (i) randomly sampling an input instruction and (ii) using the recognized texts as the desired output response (see the illustrative pipeline sketch after this record). Such instruction-following data is noisy due to the relatively limited performance of OCR tools on diverse fonts and colorful backgrounds. | 2306.17107#14 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
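A minimal sketch of the noisy instruction-following data pipeline described in chunk 2306.17107#14 above. The PaddleOCR call follows the library's common usage but its return format varies across versions; the instruction list below is a placeholder for the paper's ten rewordings (its Table 7), and the simple text join stands in for the geometric merging rules the chunk references.

```python
import random
import numpy as np
from PIL import Image
from paddleocr import PaddleOCR  # assumed installed: pip install paddleocr

# Placeholder instructions; the paper uses ten rewordings listed in its Table 7.
INSTRUCTIONS = [
    "Identify any text visible in the image provided.",
    "List all the text you can see in this image.",
    "What words appear in this picture?",
]

ocr = PaddleOCR(lang="en")  # minimal initialization; version-specific options omitted

def resize_short_edge(image: Image.Image, target: int = 384) -> Image.Image:
    """Resize so the short edge is 384 pixels, as described in the chunk."""
    w, h = image.size
    scale = target / min(w, h)
    return image.resize((round(w * scale), round(h * scale)))

def build_conversation(image_path: str) -> dict:
    """Create one single-turn (instruction, response) pair from OCR output."""
    img = resize_short_edge(Image.open(image_path).convert("RGB"))
    result = ocr.ocr(np.asarray(img))
    # Each detection is typically (box, (text, confidence)); the exact nesting
    # differs between PaddleOCR versions, so adjust the indexing if needed.
    texts = [det[1][0] for page in result for det in page]
    # Joining in detection order is a rough stand-in for the geometric merging rules.
    return {
        "image": image_path,
        "instruction": random.choice(INSTRUCTIONS),
        "response": " ".join(texts),
    }
```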
2306.16636 | 15 | # 3.2 Evaluation Procedure
# 3.2.1 Zero-shot Evaluation
Throughout our experiments we employ the zero-shot evaluation method, eschewing any form of supplementary prompting. This entails presenting the problem statement in its original form to the LLM to obtain the model response. We deliberately forgo prompting-based evaluation approaches, such as few-shot or chain-of-thought (CoT, Wei et al. (2023)) evaluations, as the LLMs examined in this study are all fine-tuned models intended for direct deployment in real-world applications. We posit that zero-shot evaluation furnishes a more accurate and pragmatic assessment of model performance.
# 3.2.2 Automated Evaluation
Given a math word problem as input, a typical LLM-generated response encompasses several paragraphs detailing reasoning steps culminating in a final answer. To ascertain the correctness of the model's answer, we employ a regular-expression-based script to extract all numerals within the response that are likely to constitute the concluded answer. We then compare these extracted numerals with the annotated ground-truth answer, deeming the LLM's solution correct if a numerical match is identified between the ground truth and any of the extracted figures (see the illustrative script after this record). | 2306.16636#15 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
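The automated evaluation described in chunk 2306.16636#15 above can be approximated with a short script. This is an illustrative sketch rather than the paper's actual evaluation code: the numeral pattern, fraction handling, and matching tolerance are assumptions.

```python
import re

# Illustrative numeral pattern: integers, decimals, and simple fractions such as "3/4".
_NUM_PATTERN = re.compile(r"\d+(?:\.\d+)?(?:/\d+)?")

def extract_numerals(response: str) -> list[float]:
    """Extract all numerals in an LLM response that could be the concluded answer."""
    values = []
    for token in _NUM_PATTERN.findall(response):
        if "/" in token:  # crude fraction handling
            num, den = token.split("/")
            values.append(float(num) / float(den))
        else:
            values.append(float(token))
    return values

def is_correct(response: str, ground_truth: float, tol: float = 1e-6) -> bool:
    """Deem the solution correct if any extracted numeral matches the annotated answer."""
    return any(abs(v - ground_truth) < tol for v in extract_numerals(response))

if __name__ == "__main__":
    demo = "The kitten ate 15 - 10 = 5 fish, so the answer is 5."
    print(is_correct(demo, 5))  # True
```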
2306.16803 | 15 | From a given state, we compare the probability of reaching the rewarding outcome u' at any subsequent point in time, given we take action a versus taking counterfactual actions according to the policy $\pi$, i.e., the ratio of $p^\pi(U' = u' \mid S_0 = s, A_0 = a)$ to $p^\pi(U' = u' \mid S_0 = s) = \sum_{a'} \pi(a' \mid s)\, p^\pi(U' = u' \mid S_0 = s, A_0 = a')$. Subtracting one from this ratio results in an intuitive interpretation of the contribution coefficient w(s, a, u'): if the coefficient is positive/negative, performing action a results in a higher/lower probability of obtaining rewarding outcome u', compared to following the policy $\pi$ (a small Monte Carlo sketch follows this record). Using Bayes' rule, we can convert the counterfactual formulation of the contribution coefficients into an equivalent hindsight formulation (right-hand side of Eq. 3), where the hindsight distribution $p^\pi(A_t = a \mid S_t = s, U' = u')$ reflects the probability of taking action a in state s, given that we encounter the rewarding outcome u' at some future point in time. We refer the reader to App. C for a full derivation. Choice of rewarding outcome. | 2306.16803#15 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
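The contribution coefficient described in chunk 2306.16803#15 above, w(s, a, u') = p^π(u' reached | s, a) / p^π(u' reached | s) − 1, can be estimated by brute-force Monte Carlo in a small tabular environment. This sketch assumes a hypothetical env_step(s, a) interface returning (next_state, outcome, done) and a tabular policy array; it only makes the counterfactual ratio concrete and does not reproduce the paper's learned hindsight models.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_reaches(env_step, policy, s0, a0, target_outcome, horizon=50):
    """True if the rewarding outcome is encountered within the horizon when
    starting in s0, taking a0 first, and then following the policy."""
    s, a = s0, a0
    for _ in range(horizon):
        s, outcome, done = env_step(s, a)
        if outcome == target_outcome:
            return True
        if done:
            return False
        a = rng.choice(len(policy[s]), p=policy[s])
    return False

def contribution_coefficient(env_step, policy, s, a, u, n_samples=2000):
    """Monte Carlo estimate of w(s, a, u) = p^pi(u | s, a) / p^pi(u | s) - 1."""
    p_sa = np.mean([rollout_reaches(env_step, policy, s, a, u) for _ in range(n_samples)])
    # Marginal over first actions drawn from the policy:
    # p^pi(u | s) = sum_a' pi(a' | s) p^pi(u | s, a').
    p_s = np.mean([
        rollout_reaches(env_step, policy, s, rng.choice(len(policy[s]), p=policy[s]), u)
        for _ in range(n_samples)
    ])
    return p_sa / p_s - 1.0 if p_s > 0 else 0.0
```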
2306.17107 | 15 | GPT-4-based Instruction-following Data: Compared to high-quality instruction-following data, there are mainly two issues for the noisy data collected above. (i) The responses should contain organized sentences instead of raw OCR results with missing words and grammar errors. (ii) The instructions should be diverse, suitable, and specific to the given image instead of monotonously
Footnote 6: https://github.com/PaddlePaddle/PaddleOCR
Footnote 7: https://github.com/JaidedAI/EasyOCR/blob/f454d5a85d4a57bb17082c788084ccc64f1f7397/easyocr/utils.py#L643-L709 | 2306.17107#15 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 16 | We have scrutinized the accuracy of our automated evaluation procedure by manually examining a random sample of 200 problems and LLM-generated responses. Our findings reveal that the precision and recall of our method stand at 96% and 98%, respectively.
# 4 Result and Analysis
# 4.1 Main results
The test results (footnote 2) are presented in Figure 2 (a), illustrating the accuracy per grade for each model. From the figure, a distinct downward trend in accuracy is evident, signifying that the performance of all models declines as the grade level increases. Although this outcome is somewhat anticipated, given that higher-grade math problems generally present greater difficulty, it is still surprising to observe that half of the models struggle even at grade 1.
GPT-4 emerges as the sole model capable of achieving success (accuracy exceeding 60%) in math tests across all six elementary school grades. Following GPT-4, ChatGPT demonstrates success in tests for grades 1 to 4, but encounters difficulties in grades 5 and 6. The next-best-performing model is ChatGLM2-6B, which succeeds only in grades 1 and 2 but displays impressive performance considering its size. The remaining models fail across all grade levels.
Our results reveal that, despite being deemed relatively simple for an average human adult, math word problems at the elementary school level continue to pose challenges for general-purpose open-source LLMs. | 2306.16636#16 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 16 | the rewarding outcome u' at some future point in time. We refer the reader to App. C for a full derivation. Choice of rewarding outcome. For u' = s', we recover state-based HCA [1]. In the following, we show that a better choice is to use u' = r', or an encoding p(u' | s', a') of the underlying object that causes the reward. Both options lead to gradient estimators with lower variance (c.f. Section 3.3), while using the latter becomes crucial when different underlying rewarding objects have the same scalar reward (c.f. Section 4). | 2306.16803#16 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 16 | easyocr/utils.py#L643-L709
Model | Res | ST-VQA | OCR-VQA | TextVQA | DocVQA
BLIP-2 [35] † | 224^2 | 21.7 | 30.7 | 32.2 | 4.9
OpenFlamingo [36] † | 224^2 | 19.3 | 27.8 | 29.1 | 5.1
MiniGPT4 [27] † | 224^2 | 14.0 | 11.5 | 18.7 | 3.0
LLaVA [3] † | 224^2 | 22.1 | 11.4 | 28.9 | 4.5
mPLUG-Owl [29] † | 224^2 | 29.3 | 28.6 | 40.3 | 6.9
LLaVA ‡ | 224^2 | 24.3 | 10.8 | 31.0 | 5.2
LLaVAR | 224^2 | 30.2 (+5.9) | 23.4 (+12.6) | 39.5 (+8.5) | 6.2 (+1.0)
LLaVA ‡ | 336^2 | 28.9 | 11.0 | 36.7 | 6.9
LLaVAR | 336^2 | 39.2 (+10.3) | 23.8 (+12.8) | 48.5 (+11.8) | 11.6 (+4.7) | 2306.17107#16 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 17 | Our results reveal that, despite being deemed relatively simple for an average human adult, math word problems at the elementary school level continue to pose challenges for general-purpose open-source LLMs.
Footnote 2: Results from the API are obtained in early June 2023.
Model | G1 | G2 | G3 | G4 | G5 | G6
GPT-4 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
ChatGPT | ✓ | ✓ | ✓ | ✓ | ✗ | ✗
Chinese-Alpaca-33B | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
Chinese-Alpaca-13B | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
MOSS-16B | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
Ziya-LLaMA-13B | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
RWKV-7B | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
Baichuan-7B | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
ChatGLM-6B | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
ChatGLM2-6B | ✓ | ✓ | ✗ | ✗ | ✗ | ✗
Table 3: Results indicating whether LLMs succeed or fail in solving math problems from each grade level. In the table, G1 to G6 denote grade levels 1 to 6, while ✓ and ✗ represent success and failure, respectively.
# 4.2 Arithmetic Complexity and Reasoning Complexity | 2306.16636#17 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 17 | # 3.2 Policy gradient estimators
We now show how the contribution coefficients can be used to learn a policy. Building upon HCA [1], we propose the Counterfactual Contribution Analysis (COCOA) policy gradient estimator
$$\hat{\nabla}^U_\theta V^\pi(s_0) = \sum_{t \ge 0} \Big[ \nabla_\theta \log \pi(A_t \mid S_t) R_t + \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t) \sum_{k \ge 1} w(S_t, a, U_{t+k}) R_{t+k} \Big]. \quad (4)$$
When comparing to the discounted policy gradient of Eq. 1, we see that the temporal discount factors are substituted by the contribution coefficients, replacing the time heuristic with fine-grained credit assignment. Importantly, the contribution coefficients enable us to evaluate all counterfactual actions instead of only the observed ones, further increasing the quality of the gradient estimator (c.f. Fig. 1A). The contribution coefficients allow for various different gradient estimators (c.f. App. C). For example, independent action samples can replace the sum over all actions, making it applicable to large action spaces. Here, we use the gradient estimator of Eq. 4, as our experiments consist of small action spaces where enumerating all actions is feasible (a small numerical sketch of this estimator follows this record). When U = S, the above estimator is almost equivalent to | 2306.16803#17 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
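A small NumPy sketch of the COCOA estimator of Eq. 4 from chunk 2306.16803#17 above, written for a tabular softmax policy. It assumes the contribution coefficients w(s, a, u) are already available as a callable (in the paper they come from a learned hindsight model); the trajectory data and environment are placeholders.

```python
import numpy as np

def softmax(x):
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cocoa_gradient(logits, states, actions, rewards, outcomes, w):
    """COCOA estimate of dV/dlogits for one trajectory (Eq. 4 in the chunk above).

    logits: (S, A) table parameterizing a softmax policy pi(a | s).
    states, actions, rewards, outcomes: per-step trajectory data of length T.
    w(s, a, u): contribution coefficient of action a in state s towards outcome u.
    """
    S, A = logits.shape
    pi = softmax(logits)
    grad = np.zeros_like(logits)
    T = len(states)
    for t in range(T):
        s, a, r = states[t], actions[t], rewards[t]
        # First term: score-function credit for the immediate reward only.
        # d log pi(a|s) / d logits[s, c] = 1{c == a} - pi(c|s)
        grad[s, a] += r
        grad[s, :] -= r * pi[s, :]
        # Second term: counterfactual credit towards all future rewarding outcomes.
        for k in range(t + 1, T):
            for b in range(A):
                coeff = w(s, b, outcomes[k]) * rewards[k]
                # d pi(b|s) / d logits[s, c] = pi(b|s) * (1{c == b} - pi(c|s))
                grad[s, b] += coeff * pi[s, b]
                grad[s, :] -= coeff * pi[s, b] * pi[s, :]
    return grad
```

Setting w(s, a, u) to zero everywhere removes the second term and leaves only immediate-reward score-function credit, which makes the role of the contribution coefficients easy to test in isolation.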
2306.17107 | 17 | Table 2: Results (accuracy %) on text-based VQA. We use † to refer to results fetched from [10] and ‡ to refer to our reproduced results. The accuracy metric used by [10] only counts whether the ground truth appears in the response.
Method | ST-VQA | OCR-VQA | TextVQA | DocVQA
(1) LLaVA | 28.9 | 11.0 | 36.7 | 6.9
(2) LLaVA + Rpretraining | 36.7 | 26.1 | 46.5 | 9.6
(3) LLaVA + Rfinetuning | 34.1 | 21.6 | 43.6 | 9.5
(4) LLaVA + Cpretraining | 35.4 | 27.0 | 45.6 | 9.2
(5) LLaVA + Nfinetuning | 34.1 | 25.9 | 43.3 | 10.2
(6) LLaVAR | 39.2 | 23.8 | 48.5 | 11.6 | 2306.17107#17 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 18 | # 4.2 Arithmetic Complexity and Reasoning Complexity
We now investigate the contributing factors for an LLM to fail in elementary-level math word problems. As introduced in Section 2.2, we focus on two quantities that are approximate measures of the arithmetic complexity and reasoning complexity of a problem, namely the number of digits that an LLM needs to manipulate and the number of reasoning steps that an LLM needs to carry out in order to solve a problem (a crude digit-count proxy is sketched after this record). Intuitively, problems with higher arithmetic complexity and/or reasoning complexity should be harder to solve, resulting in lower accuracy.
In Figure 2 (b) and (c), we plot respectively the average test accuracy against one of the complexity measures for each LLM over the entire dataset. From the figure, we observe that all models' performance declines as either of the problem complexity measures augments. Judged from the downward slopes of the plots, it is pertinent to say that the reasoning complexity of the problem has generally a larger impact than the arithmetic complexity.
# 4.3 Robustness | 2306.16636#18 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
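A crude proxy for the arithmetic-complexity measure mentioned in chunk 2306.16636#18 above (the number of digits an LLM needs to manipulate): take the largest digit count among the numerals appearing in the problem and its annotated answer. The CMATH annotations are richer than this, so the function below is only a rough illustration.

```python
import re

def arithmetic_complexity(problem: str, answer: str) -> int:
    """Crude proxy: the largest digit count among numerals in the problem and answer."""
    numerals = re.findall(r"\d+", problem + " " + answer)
    return max((len(n) for n in numerals), default=0)

print(arithmetic_complexity(
    "There were a total of 15 fish in the plate. After the kitten ate some, there were 10 left.",
    "5",
))  # -> 2
```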
2306.16803 | 18 | Footnote 3: Note that Harutyunyan et al. [1] also introduce an estimator based on the return. Here, we focus on the state-based variant, as only this variant uses contribution coefficients to compute the policy gradient (Eq. 4). The return-based variant instead uses the hindsight distribution as an action-dependent baseline, as shown in Tab. 1. Importantly, the return-based estimator is biased in many relevant environments (c.f. Appendix L).
the state-based HCA estimator of Eq. 2, except that it does not need a learned reward model r(s, a). We use the notation HCA+ to refer to this simplified version of the HCA estimator. Theorem 1 below shows that the COCOA gradient estimator is unbiased, as long as the encoding U is fully predictive of the reward, thereby generalizing the results of Harutyunyan et al. [1] to arbitrary rewarding outcome encodings. | 2306.16803#18 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 18 | Table 3: Ablation Study on text-based VQA. All results are from 336^2-based models. Rpretraining and Rfinetuning denote the extra pretraining/finetuning data we collected. Cpretraining refers to using captions instead of OCR results as responses during pretraining. Nfinetuning refers to using written questions + raw OCR results instead of GPT-generated QA for finetuning.
asking for all visible texts. To address these issues, we follow [3] to generate instruction-following data by prompting text-only GPT-4 [12] with OCR results and captions (an illustrative prompting sketch follows this record). | 2306.17107#18 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
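A minimal sketch of prompting text-only GPT-4 with OCR results and captions, as described in chunk 2306.17107#18 above. It uses the OpenAI Python client's chat-completion interface; the system prompt wording and the output handling are illustrative assumptions, not the paper's actual prompt.

```python
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()

# Illustrative wording; the paper's actual prompt is not reproduced here.
SYSTEM_PROMPT = (
    "You are given the OCR text and a caption of an image you cannot see. "
    "Write a few question-answer pairs a user might ask about the visible text, "
    "with fluent, well-organized answers."
)

def generate_conversation(ocr_text: str, caption: str) -> str:
    """Ask text-only GPT-4 to turn OCR results and a caption into QA pairs."""
    user_msg = f"OCR results: {ocr_text}\nCaption: {caption}"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_msg},
        ],
    )
    return response.choices[0].message.content
```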
2306.16636 | 19 | In this section, we assess the robustness of LLMs against "irrelevant" information, which refers to information that relates to the topic of the problem but is inconsequential or unhelpful for its resolution. This type of robustness is of particular interest because real-world problems seldom manifest in an idealized manner where all provided information is useful. [Table residue: example problems augmented with an increasing number of distractors (#Distractors), shown with the original Chinese, an English translation, and the ChatGPT and GPT-4 responses. With no distractors ("There were a total of 15 fish in the plate. After the kitten ate some, there were 10 fish left. How many fish did the kitten eat?"), both ChatGPT and GPT-4 answer "The kitten ate 5 fish."] | 2306.16636#19 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 19 | Definition 2 (Fully predictive). A rewarding outcome U is fully predictive of the reward R, if the following conditional independence condition holds for all $k \ge 0$: $p^\pi(R_k = r \mid S_0 = s, A_0 = a, U_k = u) = p^\pi(R = r \mid U = u)$, where the right-hand side does not depend on the time k. Theorem 1. Assuming that U is fully predictive of the reward (c.f. Definition 2), the COCOA policy gradient estimator $\hat{\nabla}^U_\theta V^\pi(s_0)$ is unbiased, when using the ground-truth contribution coefficients of Eq. 3, that is
$$\nabla_\theta V^\pi(s_0) = \mathbb{E}_{T \sim \mathcal{T}(s_0, \pi)}\, \hat{\nabla}^U_\theta V^\pi(s_0).$$
# 3.3 Optimal rewarding outcome encoding for low-variance gradient estimators
Theorem 1 shows that the COCOA gradient estimators are unbiased for all rewarding outcome encodings U that are fully predictive of the reward. The difference between specific rewarding outcome encodings manifests itself in the variance of the resulting gradient estimator. Proposition 2 shows that for $U' = S'$ as chosen by HCA there are many cases where the variance of the resulting policy gradient estimator degrades to the high-variance REINFORCE estimator [6]: | 2306.16803#19 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
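The record above (chunk 2306.16803#19) defines when a rewarding outcome encoding U is fully predictive of the reward and states that the COCOA estimator built on the contribution coefficients of Eq. 3 is then unbiased. As a reading aid, here is a minimal, hypothetical Python sketch of the structure implied by the estimator: immediate rewards are credited with the usual score function, while each later reward r' is credited to every action through caller-supplied contribution coefficients w(s, a, u'). The function names and the trajectory format are assumptions for illustration, not code from the paper.

def cocoa_gradient_estimate(trajectory, policy, grad_log_policy, w):
    """One-sample COCOA-style policy gradient estimate (illustrative sketch).

    trajectory: list of (state, action, reward) tuples from one rollout.
    policy(s): dict mapping each action a to pi(a | s).
    grad_log_policy(s, a): gradient of log pi(a | s), as a list of floats.
    w(s, a, u): contribution coefficient of taking a in s towards the
                rewarding outcome u (here u is simply the later reward).
    """
    n = len(grad_log_policy(trajectory[0][0], trajectory[0][1]))
    grad = [0.0] * n
    for t, (s, a, r) in enumerate(trajectory):
        # Immediate reward: standard REINFORCE-style credit.
        for i, g in enumerate(grad_log_policy(s, a)):
            grad[i] += g * r
        # Later rewards: counterfactual credit spread over all actions,
        # using grad pi(a'|s) = pi(a'|s) * grad log pi(a'|s).
        for (_, _, r_future) in trajectory[t + 1:]:
            for a_alt, p_alt in policy(s).items():
                for i, g in enumerate(grad_log_policy(s, a_alt)):
                    grad[i] += p_alt * g * w(s, a_alt, r_future) * r_future
    return grad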
2306.17107 | 19 | It is challenging to prompt GPT-4 with fragmented OCR results with a few words to generate nontrivial instructions. To this end, we carefully select 4 out of the previously mentioned 14 clusters (the 3rd, 4th, 6th, and 9th clusters in Figure 8) to collect images with enough visible and coherent sentences. As shown in Figure 2, such filtering dramatically increases the percentage of book covers and quote images. We randomly select 4K examples from each cluster (no overlap with images used for noisy instruction-following data), yielding a total of 16K images. Following prior work [15, 16, 3], we provide the visualization of verb-noun pairs for instructions generated by GPT-4 in Appendix Figure 10. For those instructions with no verb-noun pair, we demonstrate the frequency of objects being asked in Appendix Figure 9. | 2306.17107#19 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 20 | [Garbled table extraction (Table 4 example): a word problem states that there were 15 fish in the plate and, after the kittens ate some, 10 fish were left, asking how many fish the kittens ate; the recoverable model responses compute 15 - 10 = 5 fish. Augmented variants of the problem insert distractor phrases such as "There are 3 kittens in the house" and "including 10 carp and 5 belt fish"; the remaining interleaved Chinese text and column layout are garbled beyond recovery.] | 2306.16636#20 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.17107 | 20 | Furthermore, based on the system message and two in-context few-shot examples ([37], shown in Appendix B), we ask GPT-4 to generate conversational data based on OCR results and image captions (Figure 1). The generated questions are used as input instructions, and answers are used as output responses. Concretely, for a given image, we first provide two OCR results from EasyOCR and PaddleOCR, which can complement each other. To illustrate the visual elements other than texts within the image, we also provide the image captioning result. To prevent the caption from focusing on the text, we use OCR bounding boxes to mask the text and then use inpainting to refill the mask before generating captions with BLIP-2 [35]. Note that the generated captions sometimes contain hallucination, which could come from the training data of the captioning model or the "fuzzy" shapes created by masking/inpainting. We leave generating more detailed and knowledge-enhanced captions [38] for future work.
| 2306.17107#20 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
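The record above (chunk 2306.17107#20) describes masking recognized text with OCR bounding boxes, inpainting the masked regions, captioning the result with BLIP-2, and then prompting text-only GPT-4 with the OCR strings plus the caption. The following is only a rough sketch of that flow; every function name here (run_easyocr, run_paddleocr, mask_text, inpaint, blip2_caption) is a placeholder for whatever tooling one actually plugs in, not an API from the paper.

def build_gpt4_prompt(image, run_easyocr, run_paddleocr, mask_text,
                      inpaint, blip2_caption, system_message, few_shot_examples):
    """Assemble one text-only GPT-4 prompt for a text-rich image (sketch)."""
    # Two OCR passes whose results can complement each other.
    ocr_a = run_easyocr(image)      # list of (text, bounding_box)
    ocr_b = run_paddleocr(image)    # list of (text, bounding_box)
    boxes = [box for _, box in ocr_a] + [box for _, box in ocr_b]

    # Mask the recognized text and inpaint before captioning, so the caption
    # describes the visual elements rather than reading the text.
    caption = blip2_caption(inpaint(mask_text(image, boxes), boxes))

    lines = [system_message]
    lines += few_shot_examples
    lines.append("OCR result A: " + "; ".join(text for text, _ in ocr_a))
    lines.append("OCR result B: " + "; ".join(text for text, _ in ocr_b))
    lines.append("Image caption: " + caption)
    lines.append("Generate question-answer pairs grounded in this image.")
    return "\n".join(lines)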
2306.16636 | 21 | [Garbled table extraction (Table 4 example, continued): further augmented versions of the fish problem add distractors such as "There are 3 kittens in the house", "including 10 carp and 5 belt fish", and "including 8 carp and 2 belt fish". Among the recoverable responses, one is misled into concluding that the kittens ate 5 carp, while the correct reasoning remains 15 fish - 10 fish = 5 fish; the rest of the interleaved Chinese text is garbled beyond recovery.] | 2306.16636#21 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 21 | In other words, when all previous actions can be perfectly decoded from a given state, they trivially all contribute to reaching this state. The proof of Proposition 2 follows immediately from observing that $p_{\pi}(a \mid s, s') = 1$ for actions a along the observed trajectory, and zero otherwise. Substituting this expression into the contribution analysis gradient estimator (4) recovers REINFORCE. A more general issue underlies this special case: State representations need to contain detailed features to allow for a capable policy but the same level of detail is detrimental when assigning credit to actions for reaching a particular state since at some resolution almost every action will lead to a slightly different outcome. Measuring the contribution towards reaching a specific state ignores that the same rewarding outcome could be reached in slightly different states, hence overvaluing the importance of previous actions and resulting in spurious contributions. Many commonly used environments, such as pixel-based environments, continuous environments, and partially observable MDPs exhibit this property to a large extent due to their fine-grained state representations (c.f. App. G). Hence, our generalization of HCA to rewarding outcomes is a crucial step towards obtaining practical low-variance gradient estimators with model-based credit assignment. | 2306.16803#21 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
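The record above (chunk 2306.16803#21) argues that when every past action can be decoded from the reached state, state-based HCA collapses to REINFORCE. A short worked version of that argument, assuming the hindsight-ratio form of the coefficients, w(s_t, a, s') = p_pi(a | s_t, s') / pi(a | s_t), which is how the proof sketch in the chunk uses them: if the taken action a_t is perfectly decodable, then p_pi(a | s_t, s') equals 1 for a = a_t and 0 otherwise, so each term of the estimator becomes

\[
\sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s_t)\, w(s_t, a, s')\, r'
\;=\; \nabla_\theta \pi(a_t \mid s_t)\,\frac{1}{\pi(a_t \mid s_t)}\, r'
\;=\; \nabla_\theta \log \pi(a_t \mid s_t)\, r',
\]

which is exactly the REINFORCE credit of the reward r' to action a_t, matching the degradation described in the chunk.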
2306.17107 | 21 | Question: Who wrote this book?
GT (1) (2)
Sandra Boynton The book was written by Sandra Boynton. The book was written by Sandra Boyton. It is a children's book that features a family calendar with an accompanying activity guide. The book provides information (...) The cover of the book displays a picture of the family from the calendar, which helps visually represent the topic of the book. The book is written by Sandra Byington. The book was written by Sandra Boynton.
(3) (4) (5) With 500+ Nity StickerS &aSwEllMagnetic Phone List! Moms 291573016171800 FAMII (Pocket) Calendar (...) 2016 SAN- DRA BOYNTON The book was written by Sandra Boynton.
Figure 4: Ablation study based on an example from OCR-VQA. GT refers to ground truth, and (1) - (6) are different model variants from Table 3. We replace the excessively long response with (...). | 2306.17107#21 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16803 | 22 | Using rewards as rewarding outcomes yields lowest-variance estimators. The following Theorem 3 shows in a simplified setting that (i) the variance of the REINFORCE estimator is an upper bound on the variance of the COCOA estimator, and (ii) the variance of the COCOA estimator is smaller for rewarding outcome encodings U that contain less information about prior actions. We formalize this with the conditional independence relation of Definition 2 by replacing R with U′: encoding U contains less or equal information than encoding U′, if U′ is fully predictive of U. Combined with Theorem 1 that states that an encoding U needs to be fully predictive of the reward R, we have that taking the reward R as our rewarding outcome encoding U results in the gradient estimator with the lowest variance of the COCOA family.
Theorem 3. Consider an MDP where only the states at a single (final) time step contain a reward, and where we optimize the policy only at a single (initial) time step. Furthermore, consider two rewarding outcome encodings U and U′, where S is fully predictive of U′, U′ fully predictive of U, and U fully predictive of R. Then, the following relation holds between the policy gradient estimators:
$\mathbb{V}[\hat{\nabla}^{U}_{\theta} V^{\pi}(s_0)] \preceq \mathbb{V}[\hat{\nabla}^{U'}_{\theta} V^{\pi}(s_0)] \preceq \mathbb{V}[\hat{\nabla}^{S}_{\theta} V^{\pi}(s_0)]$ | 2306.16803#22 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 22 | ScienceQA accuracy (%) by method; columns: NAT, SOC, LAN, TXT, IMG, NO, G1-6, G7-12 (later columns are cut off at this chunk boundary):
Human [39]: 90.23, 84.97, 87.48, 89.60, 87.50, 88.10, 91.59, 82.42
GPT-3.5 [39]: 74.64, 69.74, 76.00, 74.44, 67.28, 77.42, 76.80
GPT-3.5 w/ CoT [39]: 75.44, 70.87, 78.09, 74.68, 67.43, 79.93, 78.23
LLaMA-Adapter [28]: 84.37, 88.30, 84.36, 83.72, 80.32, 86.90, 85.83
MM-CoT_Base [40]: 87.52, 77.17, 85.82, 87.88, 82.90, 86.83, 84.65
MM-CoT_Large [40]: 95.91, 82.00, 90.82, 95.26, 88.80, 92.89, 92.44
LLaVA [3]: 90.36, 95.95, 88.00, 89.49, 88.00, 90.66, 90.93
LLaVA+GPT-4 [3] (judge): 91.56, 96.74, 91.09, 90.62, 88.99, 93.52, 92.73
Chameleon (GPT-4) [41]: 89.83, 74.13, 89.82, 88.27, 77.64, 92.13, 88.03 | 2306.17107#22 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 23 | carp out of the 10 carp, that is, 5
# carp.
Table 4: An example of a math word problem augmented with distractors, alongside the corresponding responses generated by ChatGPT and GPT-4. The column labeled "#Distractor" indicates the number of distractors injected into the problem. The first row displays the original problem without any distractors, while the subsequent rows show problems augmented with 1, 3, and 5 distractors, respectively. Note that the injected phrase "There are 3 kittens in the house" is considered as a single distractor, whereas "including 10 carp and 5 belt fish" is regarded as a combination of two distractors, as the latter contains two distracting numerals. In the table, the ChatGPT responses are cherry-picked to illustrate certain behaviors, but the GPT-4 responses are not. Upon examining the model responses, we observe that ChatGPT is sometimes influenced by the injected distractors, resulting in erroneous reasoning, while GPT-4 consistently focuses on the relevant information, thereby producing correct and concise responses.
| 2306.16636#23 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 23 | $\mathbb{V}[\hat{\nabla}^{U}_{\theta} V^{\pi}(s_0)] \preceq \mathbb{V}[\hat{\nabla}^{U'}_{\theta} V^{\pi}(s_0)] \preceq \mathbb{V}[\hat{\nabla}^{S}_{\theta} V^{\pi}(s_0)]$,
with $\hat{\nabla}^{X}_{\theta} V^{\pi}(s_0)$ the COCOA estimator (4) using $U = X$, $\mathbb{V}[Y]$ the covariance matrix of $Y$, and $A \preceq B$ indicating that $B - A$ is positive semi-definite.
As Theorem 3 considers a simplified setting, we verify empirically whether the same arguments hold more generally. We construct a tree environment where we control the amount of information a
[Figure 2 graphic: normalized variance (dB) plotted against state overlap for rewarding outcome encodings of increasing information content.] | 2306.16803#23 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.16636 | 24 | Figure 2: (a) (b) (c): The plot of average test accuracy against one of the problem complexity measures, including grade, number of reasoning steps and number of digits, for each LLM. (d): The plot of average test accuracy against the number of distractors on the distractor dataset, for the top-performing models.
Consequently, it is vital for LLMs to effectively discern the pertinent information from the problem statement and utilize it to derive a solution.
To achieve this, we manually created a small "distractor dataset" comprising 60 examples, 10 for each grade level. Each example consists of an original problem and five associated problems augmented with 1 to 5 piece(s) of irrelevant information which we refer to as distractor(s). We require that each distractor must contain exactly one number and fit seamlessly into the original problem statement. An example is given in Table 4. | 2306.16636#24 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
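The record above (chunk 2306.16636#24) describes building a 60-problem distractor set by inserting one to five irrelevant sentences, each containing exactly one number, into an original problem. Below is a toy illustration of that augmentation step; the example distractor sentences, the insertion point, and the simple validity check are mine, not part of the CMATH dataset construction code.

import re


def augment_with_distractors(problem, distractors):
    """Return the problem augmented with 1..len(distractors) distractor sentences.

    Each distractor must contain exactly one number, mirroring the CMATH rule;
    inserting at the front of the problem is a simplification for illustration.
    """
    for d in distractors:
        assert len(re.findall(r"\d+(?:\.\d+)?", d)) == 1, "exactly one number required"
    augmented = []
    for k in range(1, len(distractors) + 1):
        augmented.append(" ".join(distractors[:k]) + " " + problem)
    return augmented


if __name__ == "__main__":
    base = ("There were a total of 15 fish in the plate. After the kittens ate "
            "some, there were 10 fish left. How many fish did the kittens eat?")
    extras = ["There are 3 kittens in the house.",
              "The plate is 20 centimeters wide."]
    for p in augment_with_distractors(base, extras):
        print(p)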
2306.16803 | 24 |
Figure 2: HCA suffers from spurious contributions which can be alleviated by using less informative rewarding outcome encodings. (A) and (C): Schematic of the tree environment where we parametrically adjust the amount of overlap between states by varying the amount of shared children of two neighboring nodes. We can decrease the information content of the rewarding outcome encoding u = f(s, a) by grouping state-action pairs that share the same reward value. (B) Normalized variance in dB using ground-truth coefficients and a random uniform policy (shaded region represents standard error over 10 random environments) comparing REINFORCE, HCA, COCOA-reward and various degrees of intermediate grouping. | 2306.16803#24 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 24 | Table 4: Results (accuracy %) on Science QA dataset. All baseline results are from [3, 41]. The categories are denoted as NAT: natural science, SOC: social science, LAN: language science, TXT: text context, IMG: image context, NO: no context, G1-6: grades 1-6, G7-12: grades 7-12.
# 4 Model Architecture and Training
Architecture We use the same model architecture as LLaVA. For the visual encoder V, we use CLIP-ViT-L/14 for 224x224 resolution and CLIP-ViT-L/14-336 for 336x336 resolution. The grid features before the last Transformer layer are then transformed into the word embedding space of the language decoder through a trainable projection matrix W. Vicuna-13B [17], a LLaMA-based [19] instruction-tuned language model, is used as the language decoder D. | 2306.17107#24 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
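The record above (chunk 2306.17107#24) keeps LLaVA's design: grid features from the penultimate CLIP-ViT layer are mapped into the language decoder's word-embedding space by a single trainable projection matrix W. The PyTorch-style sketch below only illustrates that mapping; the dimensions (1024 for CLIP-ViT-L/14 features, 5120 for Vicuna-13B embeddings), the class name, and the bias-free linear layer are my assumptions, not the released implementation.

import torch
import torch.nn as nn


class VisualProjector(nn.Module):
    """Project CLIP grid features into the language model's embedding space."""

    def __init__(self, vision_dim=1024, lm_dim=5120):
        super().__init__()
        # A single trainable linear map plays the role of the matrix W.
        self.proj = nn.Linear(vision_dim, lm_dim)

    def forward(self, grid_features):
        # grid_features: (batch, num_patches, vision_dim), taken before the
        # last ViT transformer layer; the outputs act as image tokens placed
        # next to the instruction token embeddings of the language decoder.
        return self.proj(grid_features)


if __name__ == "__main__":
    projector = VisualProjector()
    fake_grid = torch.randn(2, 256, 1024)   # e.g. 16x16 patches at 224x224
    img_tokens = projector(fake_grid)
    print(img_tokens.shape)                 # torch.Size([2, 256, 5120])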
2306.16636 | 25 | We tested the top-performing LLMs on the distractor dataset, and the result is plotted in Figure 2 (d). From the figure, we observe that the performance of all LLMs, with the exception of GPT-4, drops drastically as the number of distractors increases. Notably, ChatGPT suffers an accuracy drop of 30% for problems augmented with merely two distractors. In contrast, GPT-4 only experiences minor degradation. In Table 4 we give examples of ChatGPT and GPT-4 responses to the augmented problems, revealing that the behaviors of ChatGPT and GPT-4 are qualitatively different
against distractors. It can be clearly seen that ChatGPT is easily distracted by the injected information, resulting in an erroneous reasoning process and conclusion, while GPT-4 is able to always stick to the relevant information.
Based on this experiment, we conclude that among the models assessed in this work GPT-4 is the only one that exhibits robustness against the distractors.
# 5 Related Work | 2306.16636#25 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 25 | state contains about the previous actions by varying the overlap of the children of two neighbouring nodes (c.f. Fig 2), and assign a fixed random reward to each state-action pair. We compute the ground-truth contribution coefficients by leveraging dynamic programming (c.f. Section 4). Fig. 2B shows that the variance of HCA is as high as REINFORCE for zero state overlap, but improves when more states overlap, consistent with Proposition 2 and Theorem 3. To investigate the influence of the information content of U on the variance, we consider rewarding outcome encodings U with increasing information content, which we quantify with how many different values of u belong to the same reward r. Fig. 2B shows that by increasing the information content of U , we interpolate between the variance of COCOA with u = r and HCA+, consistent with Theorem 3.
Why do rewarding outcome encodings that contain more information than the reward lead to higher variance? To provide a better intuition on this question we use the following theorem: Theorem 4. The policy gradient on the expected number of occurrences $O^{\pi}(u', s) = \sum_{k \geq 1} p^{\pi}(U_k = u' \mid S_0 = s)$ is proportional to | 2306.16803#25 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
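The record above (chunk 2306.16803#25) mentions computing ground-truth contribution coefficients with dynamic programming, and Theorem 4 reasons about the expected number of occurrences O^pi(u', s) = sum over k >= 1 of p^pi(U_k = u' | S_0 = s). Here is a small tabular sketch of that quantity for a finite-horizon MDP; the dictionary encoding of the MDP and the toy example are my own, not the paper's experimental code.

from collections import defaultdict


def expected_occurrences(P, pi, encode, s0, horizon):
    """O^pi(u, s0): expected number of visits to encoding u over steps k = 1..horizon.

    P[s][a] is a dict next_state -> probability, pi[s] a dict action -> probability,
    and encode(s) is the rewarding-outcome encoding of state s (e.g. its reward).
    """
    occ = defaultdict(float)
    dist = {s0: 1.0}                     # state distribution at the current step
    for _ in range(horizon):
        nxt = defaultdict(float)
        for s, ps in dist.items():
            for a, pa in pi[s].items():
                for s2, p in P[s][a].items():
                    nxt[s2] += ps * pa * p
        for s2, p in nxt.items():
            occ[encode(s2)] += p         # adds p^pi(U_k = u | S_0 = s0)
        dist = nxt
    return dict(occ)


if __name__ == "__main__":
    # Two-state chain: from "s0", action "go" reaches rewarding state "g" w.p. 0.5.
    P = {"s0": {"go": {"g": 0.5, "s0": 0.5}}, "g": {"go": {"g": 1.0}}}
    pi = {"s0": {"go": 1.0}, "g": {"go": 1.0}}
    print(expected_occurrences(P, pi, encode=lambda s: 1.0 if s == "g" else 0.0,
                               s0="s0", horizon=3))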
2306.17107 | 25 | Training Similarly, we follow the two-stage training design of LLaVA (Figure 3). The training objectives of both stages are the same: generate output responses (<res>) for the input instructions (<ins>). The transformed image tokens (<img>) are added either before or after the first input instruction. (i) During the first pretraining stage, only the projection matrix W is trained for feature alignment. Since the decoder D is frozen, the training tolerates noisy data. We combine the 595K pretraining data from LLaVA with our 422K noisy instruction-following data in the pretraining stage. (ii) Both the projection matrix W and the language decoder D are trained during the finetuning stage, where we merge our 16K instruction-following data into the 158K instruction-following data from LLaVA as the training set. Note that the visual encoder is frozen throughout the whole training period, which might restrict the performance of text recognition as CLIP is trained for general-purpose text-image alignment. Better choices of the visual encoder [42] or further fine-tuning CLIP-ViT [29] might further benefit the visual understanding capability, which we leave as future work.
| 2306.17107#25 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
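The record above (chunk 2306.17107#25) trains only the projection matrix in stage one and additionally unfreezes the language decoder in stage two, while the visual encoder stays frozen throughout. A schematic of that freezing schedule, assuming hypothetical module names (visual_encoder, projector, decoder) on a torch-style model object:

def set_trainable(model, stage):
    """Stage 1: train only the projector; stage 2: projector + language decoder.

    The visual encoder (CLIP-ViT) stays frozen in both stages.
    """
    for p in model.visual_encoder.parameters():
        p.requires_grad = False
    for p in model.projector.parameters():
        p.requires_grad = True
    for p in model.decoder.parameters():
        p.requires_grad = (stage == 2)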
2306.16636 | 26 | Based on this experiment, we conclude that among the models assessed in this work GPT-4 is the only one that exhibits robustness against the distractors.
# 5 Related Work
Math-related datasets are predominantly in English (Hendrycks et al., 2021b; Amini et al., 2019; Cobbe et al., 2021; Hendrycks et al., 2021a), making them unsuitable for evaluating Chinese LLMs. Among the Chinese math-related datasets, AGI-Eval (Zhong et al., 2023) and C-Eval (Huang et al., 2023) target general-purpose, multi-disciplinary evaluation for LLMs and contain subsets specifically designed for assessing mathematical abilities. However, the math problems in these datasets, ranging from middle school to college level, are considerably more difficult than those in our CMATH dataset, rendering it challenging to accurately measure progress given the current
capabilities of LLMs. Math23K (Wang et al., 2017) and APE210K (Zhao et al., 2020) are datasets comprising elementary school level math word problems, which are more similar to our CMATH. However, a drawback of these datasets is the absence of fine-grained annotations, such as grade, number of reasoning steps, etc., making it impossible to obtain detailed evaluation results from them.
# 6 Conclusion | 2306.16636#26 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.17107 | 26 |
LLaVA (Original): Res 224x224, Conversation 83.1, Detail 75.3, Complex 96.5, Read -
LLaVA: Res 336x336, Conversation 83.9, Detail 78.2, Complex 95.3, Read 87.9
LLaVAR: Res 336x336, Conversation 84.5, Detail 78.9, Complex 96.5, Read 91.7
Table 5: Relative scores (w.r.t. text-only GPT-4) for instruction-following questions, where the first three dimensions are based on natural images, the last dimension ("Read") is based on text-rich images. In the first row, we show the original results (224x224-based) fetched from [3]. We report our reproduced LLaVA on 336x336 resolution for a fair comparison.
Question: Based on the title and the image on the cover, what can be inferred about the content of "Bo's Lasting Lessons" and its potential target audience? | 2306.17107#26 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 27 | # 6 Conclusion
This work presents CMATH, a dataset enabling fine-grained evaluation of LLMs on elementary school level math word problems. Our evaluation on CMATH shows that all LLMs, with the exception of GPT-4, falter at a certain grade. Moreover, our investigation into the robustness of LLMs under the presence of distracting information further underscores the superior performance of GPT-4, as it remains the only model to maintain its robustness amidst such challenges. We anticipate that this will not only expose existing limitations in LLMs' capabilities but also serve as a catalyst for their ongoing development and improvement.
# References
Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms.
Baichuan Inc. 2023. Baichuan-7B. https://github.com/baichuan-inc/baichuan-7B/blob/main/README_EN.md. | 2306.16636#27 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 27 | Recall that the COCOA gradient estimator consists of individual terms that credit past actions a at state s for a current reward r' encountered in u' according to $\sum_{a \in \mathcal{A}} \nabla_{\theta} \pi(a \mid s)\, w(s, a, u')\, r'$ (c.f. Eq. 4). Theorem 4 indicates that each such term aims to increase the average number of times we encounter u' in a trajectory starting from s, proportional to the corresponding reward r'. If U' = R', this update will correctly make all underlying states with the same reward r' more likely while decreasing the likeliness of all states for which u' differs from r'. Now consider the case where our rewarding outcome encoding contains a bit more information, i.e. $U' = f(R', \Delta S')$ where $\Delta S'$ contains a little bit of information about the state. As a result, the update will distinguish some states even if they yield the same reward and increase the number of occurrences only of states containing the encountered $\Delta S'$ while decreasing the number of occurrences for unseen ones. As we do not visit each possible $\Delta S'$ in a single trajectory, this adds variance. The less information an encoding U has, the more underlying states it groups together, and hence the fewer rewarding outcomes are "forgotten" in the gradient estimator, leading to lower variance. | 2306.16803#27 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
2306.17107 | 27 | GPT-4 (Text-Only) LLaVA LLaVAR From the title and image, it can be inferred that "Bo's Lasting Lessons" is a book about leadership lessons taught by a legendary coach, likely in the context of sports. The image of men running suggests that the book might be focused on a specific sport, such as football or track and field. The target audience could be individuals interested in coaching, sports, or leadership in general, looking to learn from the experiences and teachings of a successful coach. The book "Bo's Lasting Lessons: The Timeless Teachings of Bo Schembechler" is written by Bo Schembechler and Jim Harshaw. Given the title and the image on the cover, which shows a football coach running across the field with his players in pursuit, it can be inferred that the content of the book is likely focused on football and the strategies, techniques, or principles that can help players improve their performance and achieve success in the sport. The potential target audience for this book could be football players, coaches, and fans who are interested in learning more about the game and the experiences and insights of a seasoned | 2306.17107#27 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |
2306.16636 | 28 | Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems.
Yiming Cui, Ziqing Yang, and Xin Yao. 2023. Efficient and effective text encoding for chinese llama and alpaca. arXiv preprint arXiv:2304.08177.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM: general language model pretraining with autoregressive blank infilling. pages 320–335. | 2306.16636#28 | CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? | We present the Chinese Elementary School Math Word Problems (CMATH) dataset,
comprising 1.7k elementary school-level math word problems with detailed
annotations, sourced from actual Chinese workbooks and exams. This dataset aims
to provide a benchmark tool for assessing the following question: to what grade
level of elementary school math do the abilities of popular large language
models (LLMs) correspond? We evaluate a variety of popular LLMs, including both
commercial and open-source options, and discover that only GPT-4 achieves
success (accuracy $\geq$ 60\%) across all six elementary school grades, while
other models falter at different grade levels. Furthermore, we assess the
robustness of several top-performing LLMs by augmenting the original problems
in the CMATH dataset with distracting information. Our findings reveal that
GPT-4 is able to maintain robustness, while other models fail. We anticipate
that our study will expose limitations in LLMs' arithmetic and reasoning
capabilities, and promote their ongoing development and advancement. | http://arxiv.org/pdf/2306.16636 | Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230629 | 20230629 | [
{
"id": "2210.02414"
},
{
"id": "2305.08322"
},
{
"id": "2304.08177"
}
] |
2306.16803 | 28 | 3.4 Learning the contribution coefficients In practice, we do not have access to the ground-truth contribution coefficients, but need to learn them from observations. Following Harutyunyan et al. [1], we can approximate the hindsight distribution p_π(A_t = a | S_t = s, U' = u'), now conditioned on rewarding outcome encodings instead of states, by training a model h(a | s, u') on the supervised discriminative task of classifying the observed action a_t given the current state s_t and some future rewarding outcome u'. Note that if the model h does not approximate the hindsight distribution perfectly, the COCOA gradient estimator (4) can be biased (c.f. Section 4). A central difficulty in approximating the hindsight distribution is that it is policy dependent, and hence changes during training. Proposition 5 shows that we can provide the policy logits as an extra input to the hindsight network without altering the learned hindsight
| 2306.16803#28 | Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis | To make reinforcement learning more sample efficient, we need better credit
assignment methods that measure an action's influence on future rewards.
Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual
Contribution Analysis (COCOA), a new family of model-based credit assignment
algorithms. Our algorithms achieve precise credit assignment by measuring the
contribution of actions upon obtaining subsequent rewards, by quantifying a
counterfactual query: 'Would the agent still have reached this reward if it had
taken another action?'. We show that measuring contributions w.r.t. rewarding
states, as is done in HCA, results in spurious estimates of contributions,
causing HCA to degrade towards the high-variance REINFORCE estimator in many
relevant environments. Instead, we measure contributions w.r.t. rewards or
learned representations of the rewarding objects, resulting in gradient
estimates with lower variance. We run experiments on a suite of problems
specifically designed to evaluate long-term credit assignment capabilities. By
using dynamic programming, we measure ground-truth policy gradients and show
that the improved performance of our new model-based credit assignment methods
is due to lower bias and variance compared to HCA and common baselines. Our
results demonstrate how modeling action contributions towards rewarding
outcomes can be leveraged for credit assignment, opening a new path towards
sample-efficient reinforcement learning. | http://arxiv.org/pdf/2306.16803 | Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne | cs.LG, stat.ML | NeurIPS 2023 spotlight | null | cs.LG | 20230629 | 20231031 | [
{
"id": "1912.02875"
},
{
"id": "1606.02396"
},
{
"id": "1907.08027"
},
{
"id": "2106.04499"
},
{
"id": "1507.06527"
},
{
"id": "2010.02193"
},
{
"id": "2011.01298"
},
{
"id": "2301.04104"
},
{
"id": "2103.04529"
},
{
"id": "1705.07177"
},
{
"id": "1910.07113"
},
{
"id": "2103.06224"
},
{
"id": "1906.09237"
},
{
"id": "1706.06643"
},
{
"id": "1804.00379"
},
{
"id": "1912.01603"
},
{
"id": "1807.01675"
},
{
"id": "2002.04083"
},
{
"id": "1911.08362"
},
{
"id": "1711.00464"
},
{
"id": "1912.06680"
},
{
"id": "1912.02877"
},
{
"id": "2102.12425"
},
{
"id": "1506.02438"
},
{
"id": "2007.01839"
}
] |
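The supervised scheme described in the chunk above, approximating the hindsight distribution by classifying the observed action from the current state and a future rewarding outcome, can be sketched with a small softmax classifier on synthetic data. This is an assumption-laden illustration rather than the paper's implementation: the data, the linear model, and the hyperparameters below are all made up, and the learned h(a | s, u') would still have to be combined with the policy π to form the contribution coefficients, following the paper's own definition (not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for data collected from trajectories under the current policy:
# a state feature vector, a rewarding-outcome encoding u', and the action actually taken.
N, S_DIM, U_DIM, A = 512, 4, 2, 3
states = rng.normal(size=(N, S_DIM))
outcomes = rng.normal(size=(N, U_DIM))
actions = rng.integers(0, A, size=N)

X = np.concatenate([states, outcomes, np.ones((N, 1))], axis=1)  # (s, u', bias)
W = np.zeros((X.shape[1], A))  # parameters of a linear hindsight model h(a | s, u')

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

one_hot = np.eye(A)[actions]
for _ in range(500):
    h = softmax(X @ W)                  # h(a | s, u') for every sample
    W -= 0.5 * X.T @ (h - one_hot) / N  # gradient step on the cross-entropy loss

h = softmax(X @ W)
print("mean log-likelihood of observed actions:",
      np.log(h[np.arange(N), actions]).mean())
```

Because the hindsight distribution shifts as the policy changes, the remedy mentioned in the chunk (providing the policy logits as an extra input to the hindsight network) would correspond here to appending those logits to X as additional input features.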
2306.17107 | 28 | The potential target audience for this book could be football players, coaches, and fans who are interested in learning more about the game and the experiences and insights of a seasoned coach like Bo Schembechler. "Bo's Lasting Lessons" appears to be a book that shares the legacy coach Bo Schembechler's teachings on leadership and timeless fundamentals. The image on the cover, featuring a football coach leading his team, further supports this inference. The book might be targeted towards sports enthusiasts, particularly fans of football, and people interested in learning about leadership principles and fundamentals from a renowned coach's perspective. The audiobook, read by John H. Mayer and featuring a foreword and conversation with John U. Bacon, suggests that it could also appeal to audiences who prefer listening to books or want to gain more insights from the author and other experts. 129% | 2306.17107#28 | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Instruction tuning unlocks the superior capability of Large Language Models
(LLM) to interact with humans. Furthermore, recent instruction-following
datasets include images as visual inputs, collecting responses for image-based
instructions. However, visual instruction-tuned models cannot comprehend
textual details within images well. This work enhances the current visual
instruction tuning pipeline with text-rich images (e.g., movie posters, book
covers, etc.). Specifically, we first use publicly available OCR tools to
collect results on 422K text-rich images from the LAION dataset. Moreover, we
prompt text-only GPT-4 with recognized texts and image captions to generate 16K
conversations, each containing question-answer pairs for text-rich images. By
combining our collected data with previous multi-modal instruction-following
data, our model, LLaVAR, substantially improves the LLaVA model's capability on
text-based VQA datasets (up to 20% accuracy improvement) while achieving an
accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following
evaluation also demonstrates the improvement of our model on both natural
images and text-rich images. Through qualitative analysis, LLaVAR shows
promising interaction (e.g., reasoning, writing, and elaboration) skills with
humans based on the latest real-world online content that combines text and
images. We make our code/data/models publicly available at
https://llavar.github.io/. | http://arxiv.org/pdf/2306.17107 | Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun | cs.CV, cs.CL | Preprint. Work in progress | null | cs.CV | 20230629 | 20230629 | [
{
"id": "2306.02858"
},
{
"id": "2210.08402"
},
{
"id": "2305.03726"
}
] |