doi: stringlengths 10–10
chunk-id: int64 0–936
chunk: stringlengths 401–2.02k
id: stringlengths 12–14
title: stringlengths 8–162
summary: stringlengths 228–1.92k
source: stringlengths 31–31
authors: stringlengths 7–6.97k
categories: stringlengths 5–107
comment: stringlengths 4–398
journal_ref: stringlengths 8–194
primary_category: stringlengths 5–17
published: stringlengths 8–8
updated: stringlengths 8–8
references: list
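As a usage illustration only: rows with this schema can be iterated over once exported to JSON Lines. The file name `chunks.jsonl` and the grouping logic below are assumptions for this sketch, not part of the dataset card.

```python
import json

# Minimal sketch: read exported rows (one JSON object per line) and group chunks by paper.
chunks_by_paper = {}
with open("chunks.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        # Fields follow the schema above: doi, chunk-id, chunk, title, summary, ...
        chunks_by_paper.setdefault(row["doi"], []).append((row["chunk-id"], row["chunk"]))

# Reassemble each paper's text in chunk order.
for doi, parts in chunks_by_paper.items():
    text = " ".join(chunk for _, chunk in sorted(parts))
```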
2306.16636
29
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021a. Measuring massive multitask language understanding. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the MATH dataset. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322. IDEA-CCNL. 13b-v1. Ziya-LLaMA-13B-v1/blob/main/README.md.
2306.16636#29
CMATH: Can Your Language Model Pass Chinese Elementary School Math Test?
We present the Chinese Elementary School Math Word Problems (CMATH) dataset, comprising 1.7k elementary school-level math word problems with detailed annotations, sourced from actual Chinese workbooks and exams. This dataset aims to provide a benchmark tool for assessing the following question: to what grade level of elementary school math do the abilities of popular large language models (LLMs) correspond? We evaluate a variety of popular LLMs, including both commercial and open-source options, and discover that only GPT-4 achieves success (accuracy $\geq$ 60\%) across all six elementary school grades, while other models falter at different grade levels. Furthermore, we assess the robustness of several top-performing LLMs by augmenting the original problems in the CMATH dataset with distracting information. Our findings reveal that GPT-4 is able to maintain robustness, while other models fail. We anticipate that our study will expose limitations in LLMs' arithmetic and reasoning capabilities, and promote their ongoing development and advancement.
http://arxiv.org/pdf/2306.16636
Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230629
20230629
[ { "id": "2210.02414" }, { "id": "2305.08322" }, { "id": "2304.08177" } ]
2306.16803
29
[Figure 3 (plot content, OCR residue): panels A–D comparing COCOA-reward, COCOA-feature, HCA+, HCA-return, Q-critic, Advantage, REINFORCE, and TrajCV. Panel A sketches the linear key-to-door task (pick up the key, pick up or ignore distractor rewards over a long sequence); panel B plots performance against Episode; panel C reports the SNR; panel D shows the bias-variance comparison.]
2306.16803#29
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
29
Table 6: An example for our collected evaluation data, corresponding to the “Read” dimension of GPT-4-based instruction-following evaluation. The corresponding image is Figure 11 in Appendix. # 5 Experiments We use the same training parameters as LLaVA⁸, except that (1) we set the maximum sequence length to 1024 during pretraining, and (2) we first pad any given image to a square shape before resizing it to the desired input size, avoiding some image content being cropped during preprocessing. For both resolutions (224², 336²), we reproduce the original LLaVA for a fair comparison. The GPT-4 model used in this work refers to the gpt-4-0314 version, while the cost for collecting finetuning data is around $300. Temperature is set to 1.0 for training data generation, 0.7 for evaluation data generation, and 0.2 for GPT-4-based evaluation. All experiments are run on an NVIDIA A100 80GB GPU. During the evaluation, the temperature is set to 0.9 for text-based VQA, 0.7 for GPT-4-based instruction-following evaluation, and 0.2 for other qualitative demonstrations. # 8 https://github.com/haotian-liu/LLaVA
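The pad-to-square preprocessing described in point (2) above can be sketched as follows. This is an illustrative example using PIL, not the authors' exact implementation; the function name `pad_to_square` and the fill color are assumptions.

```python
from PIL import Image

def pad_to_square(image: Image.Image, fill_color=(0, 0, 0)) -> Image.Image:
    """Pad an image onto a square canvas before resizing, so no content is cropped."""
    width, height = image.size
    if width == height:
        return image
    side = max(width, height)
    # Center the original image on a square canvas whose side equals the longer edge.
    canvas = Image.new(image.mode, (side, side), fill_color)
    canvas.paste(image, ((side - width) // 2, (side - height) // 2))
    return canvas

# Example: pad, then resize to the model's input resolution (e.g., 224 or 336).
# img = pad_to_square(Image.open("poster.jpg")).resize((336, 336))
```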
2306.17107#29
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16636
30
IDEA-CCNL. 13b-v1. Ziya-LLaMA-13B-v1/blob/main/README.md. Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2023. CodeGen: An open large language model for code with multi-turn program synthesis. OpenAI. 2023. GPT-4 technical report. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Bo Peng. 2023. RWKV-4-raven. https://huggingface.co/BlinkDL/rwkv-4-raven.
2306.16636#30
CMATH: Can Your Language Model Pass Chinese Elementary School Math Test?
We present the Chinese Elementary School Math Word Problems (CMATH) dataset, comprising 1.7k elementary school-level math word problems with detailed annotations, sourced from actual Chinese workbooks and exams. This dataset aims to provide a benchmark tool for assessing the following question: to what grade level of elementary school math do the abilities of popular large language models (LLMs) correspond? We evaluate a variety of popular LLMs, including both commercial and open-source options, and discover that only GPT-4 achieves success (accuracy $\geq$ 60\%) across all six elementary school grades, while other models falter at different grade levels. Furthermore, we assess the robustness of several top-performing LLMs by augmenting the original problems in the CMATH dataset with distracting information. Our findings reveal that GPT-4 is able to maintain robustness, while other models fail. We anticipate that our study will expose limitations in LLMs' arithmetic and reasoning capabilities, and promote their ongoing development and advancement.
http://arxiv.org/pdf/2306.16636
Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230629
20230629
[ { "id": "2210.02414" }, { "id": "2305.08322" }, { "id": "2304.08177" } ]
2306.16803
30
Figure 3: COCOA enhances policy gradient estimates and sample efficiency whereas HCA fails to improve over baselines. (A) Schematic representation of the linear key-to-door environment. (B) Performance of COCOA and baselines on the main task of picking up the treasure, measured as the average fraction of treasure rewards collected. (Left) ground-truth policy gradient estimators computed using dynamic programming, (right) learning the contribution coefficients or (action-)value function using neural networks. Shaded regions are the standard error (30 seeds). (C) Violin plot of the signal-to-noise ratio (SNR) in decibels for the various policy gradient estimators with learned coefficients and (action-)value functions, computed on the same trajectories of a shared base policy. (D) Comparison of the bias-variance trade-off incurred by different policy gradient estimators, computed as in (C), normalized by the ground-truth policy gradient norm (scatter plot showing 30 seeds per method).
2306.16803#30
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
30
# 8 https://github.com/haotian-liu/LLaVA [Image (poster OCR): H. Milton Stewart School of Industrial and Systems Engineering, No. 1 GRADUATE PROGRAM. Georgia Tech's H. Milton Stewart School of Industrial and Systems Engineering is the best in the nation according to U.S. News & World Report for 33 CONSECUTIVE YEARS.] If I want to study optimization, is Georgia Tech a good place for me? Yes, Georgia Tech is a good place for you if you want to study optimization. The image mentions that Georgia Tech's H. Milton Stewart School of Industrial and Systems Engineering is ranked as the best in the nation according to the U.S. News & World Report for 33 consecutive years. This indicates that the school has a strong reputation and offers quality education in the field of optimization and industrial engineering.
2306.17107#30
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16636
31
Bo Peng. 2023. RWKV-4-raven. https://huggingface.co/BlinkDL/rwkv-4-raven. Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, Xuzheng He, Haowen Hou, Przemyslaw Kazienko, Jan Kocon, Jiaming Kong, Bartlomiej Koptyra, Hayden Lau, Krishna Sri Ipsit Mantri, Ferdinand Mom, Atsushi Saito, Xiangru Tang, Bolun Wang, Johan S. Wind, Stanislaw Wozniak, Ruichong Zhang, Zhenyuan Zhang, Qihang Zhao, Peng Zhou, Jian Zhu, and Rui-Jie Zhu. 2023. RWKV: Reinventing RNNs for the transformer era. Tianxiang Sun and Xipeng Qiu. 2023. MOSS. Github. https://github.com/OpenLMLab/MOSS/blob/main/README_en.md.
2306.16636#31
CMATH: Can Your Language Model Pass Chinese Elementary School Math Test?
We present the Chinese Elementary School Math Word Problems (CMATH) dataset, comprising 1.7k elementary school-level math word problems with detailed annotations, sourced from actual Chinese workbooks and exams. This dataset aims to provide a benchmark tool for assessing the following question: to what grade level of elementary school math do the abilities of popular large language models (LLMs) correspond? We evaluate a variety of popular LLMs, including both commercial and open-source options, and discover that only GPT-4 achieves success (accuracy $\geq$ 60\%) across all six elementary school grades, while other models falter at different grade levels. Furthermore, we assess the robustness of several top-performing LLMs by augmenting the original problems in the CMATH dataset with distracting information. Our findings reveal that GPT-4 is able to maintain robustness, while other models fail. We anticipate that our study will expose limitations in LLMs' arithmetic and reasoning capabilities, and promote their ongoing development and advancement.
http://arxiv.org/pdf/2306.16636
Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230629
20230629
[ { "id": "2210.02414" }, { "id": "2305.08322" }, { "id": "2304.08177" } ]
2306.16803
31
distribution, to make the parameters of the hindsight network less policy-dependent. This observation justifies and generalizes the strategy of adding the policy logits to the hindsight model output, as proposed by Alipov et al. [29]. Proposition 5. $p^\pi(a \mid s, u', l) = p^\pi(a \mid s, u')$, with $l$ a deterministic function of $s$, representing the sufficient statistics of $\pi(a \mid s)$. As an alternative to learning the hindsight distribution, we can directly estimate the probability ratio $p^\pi(A_t = a \mid S_t = s, U' = u') / \pi(a \mid s)$ using a contrastive loss (c.f. App. D). Yet another path builds on the observation that the sums $\sum_{k \geq 1} p^\pi(U_{t+k} = u' \mid s, a)$ are akin to Successor Representations and can be learned via temporal difference updates [30, 31] (c.f. App. D). We experimented both with the hindsight classification and the contrastive loss and found the former to work best in our experiments. We leverage the Successor Representation to obtain ground-truth contribution coefficients via dynamic programming for the purpose of analyzing our algorithms. # 4 Experimental analysis
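For readability, the probability ratio referenced in this chunk can be written out explicitly as a contribution-style quantity; the symbol $w$ and this exact presentation are introduced here for illustration and are not necessarily the paper's notation.

```latex
w(s, a, u') \;=\; \frac{p^{\pi}(A_t = a \mid S_t = s,\, U' = u')}{\pi(a \mid s)}
```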
2306.16803#31
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
31
[Image (article screenshot OCR): Bringing Generative AI into Creative Cloud with Adobe Firefly; Adobe Firefly beta] Pretend you are the author, write the next paragraph of this article. Adobe Firefly is an exciting new tool that allows users to bring generative AI into their creative workflow. By integrating Firefly with Adobe's popular Creative Cloud, users can unlock new levels of creativity and automation, streamlining their design processes and enabling them to explore more innovative ideas. With Firefly, designers can explore data-driven design, real-time collaborative editing, and seamless integration with other Adobe products, making the entire creative process more efficient and intuitive.
2306.17107#31
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16636
32
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science. THUDM. 2023a. ChatGLM-6B. https://github.com/THUDM/ChatGLM-6B/blob/main/README_en.md. THUDM. 2023b. ChatGLM2-6B. https://github.com/THUDM/ChatGLM2-6B/blob/main/README_EN.md. Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 845–854, Copenhagen, Denmark. Association for Computational Linguistics. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models.
2306.16636#32
CMATH: Can Your Language Model Pass Chinese Elementary School Math Test?
We present the Chinese Elementary School Math Word Problems (CMATH) dataset, comprising 1.7k elementary school-level math word problems with detailed annotations, sourced from actual Chinese workbooks and exams. This dataset aims to provide a benchmark tool for assessing the following question: to what grade level of elementary school math do the abilities of popular large language models (LLMs) correspond? We evaluate a variety of popular LLMs, including both commercial and open-source options, and discover that only GPT-4 achieves success (accuracy $\geq$ 60\%) across all six elementary school grades, while other models falter at different grade levels. Furthermore, we assess the robustness of several top-performing LLMs by augmenting the original problems in the CMATH dataset with distracting information. Our findings reveal that GPT-4 is able to maintain robustness, while other models fail. We anticipate that our study will expose limitations in LLMs' arithmetic and reasoning capabilities, and promote their ongoing development and advancement.
http://arxiv.org/pdf/2306.16636
Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230629
20230629
[ { "id": "2210.02414" }, { "id": "2305.08322" }, { "id": "2304.08177" } ]
2306.16803
32
# 4 Experimental analysis To systematically investigate the long-term credit assignment performance of COCOA compared to standard baselines, we design an environment which pinpoints the core credit assignment problem and leverage dynamic programming to compute ground-truth policy gradients, contribution coefficients, and value functions (c.f. App. E.2). This enables us to perform detailed bias-variance analyses and to disentangle the theoretically optimal performance of the various gradient estimators from the approximation quality of learned contribution coefficients and (action-)value functions. We consider the linear key-to-door environment (c.f. Fig. 3A), a simplification of the key-to-door environment [3, 4, 32] to a one-dimensional track. Here, the agent needs to pick up a key in the first time step, after which it engages in a distractor task of picking up apples with varying reward values. Finally, it can open a door with the key and collect a treasure. This setting allows us to parametrically increase the difficulty of long-term credit assignment by increasing the distance between the key and door, making it harder to pick up the learning signal of the treasure reward
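A minimal sketch of the linear key-to-door setup described in this chunk, written as a toy Gym-style environment; the reward values, apple probability, and action semantics are assumptions for illustration and are not taken from the paper.

```python
import random

class LinearKeyToDoor:
    """Toy 1-D key-to-door environment: pick up a key at the start, collect
    distractor apples along a linear track, then open the door for a treasure."""

    def __init__(self, distance=10, apple_prob=0.5, treasure_reward=1.0):
        self.distance = distance          # number of distractor steps between key and door
        self.apple_prob = apple_prob      # chance an apple appears at a given step
        self.treasure_reward = treasure_reward

    def reset(self):
        self.t = 0
        self.has_key = False
        return self._obs()

    def _obs(self):
        return (self.t, self.has_key)

    def step(self, action):
        # action 0 = ignore, action 1 = pick up / open
        reward, done = 0.0, False
        if self.t == 0 and action == 1:
            self.has_key = True                        # the key can only be taken at t = 0
        elif 0 < self.t <= self.distance:
            if action == 1 and random.random() < self.apple_prob:
                reward = random.uniform(0.0, 1.0)      # distractor apple with varying reward
        else:
            if action == 1 and self.has_key:
                reward = self.treasure_reward          # treasure behind the door
            done = True
        self.t += 1
        return self._obs(), reward, done, {}
```

Increasing `distance` lengthens the distractor phase, which is exactly the knob the paper uses to make long-term credit assignment harder.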
2306.16803#32
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
32
[Image (article screenshot OCR, truncated): ...Cloud with Adobe Firefly; Adobe Firefly beta] [Image (tweet screenshot OCR): Huang @jbhuang0604: How to cite papers? Citing papers properly gives credit where credit's due, provides supporting evidence of your claim, and presents an organized view of related work. Sharing some tips I found useful. Meme text: YOU GET A CITATION! AND YOU GET A CITATION! AND YOU GET A CITATION! EVERYONE GETS A CITATION!] Adobe Firefly is an exciting new tool that allows users to bring generative AI into their creative workflow. By integrating Firefly with Adobe's popular Creative Cloud, users can unlock new levels of creativity and automation, streamlining their design processes and enabling them to explore more innovative ideas. With Firefly, designers can explore data-driven design, real-time collaborative editing, and seamless integration with other Adobe products, making the entire creative process more efficient and intuitive. What is this tweet about? This tweet is about sharing tips on how to cite papers properly, giving credit where credit is due, providing support for evidence of your claim, and presenting an organized view
2306.17107#32
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16636
33
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. GLM-130B: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414. Wei Zhao, Mingyue Shang, Yang Liu, Liang Wang, and Jingming Liu. 2020. Ape210K: A large-scale and template-rich dataset of math word problems. Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. AGIEval: A human-centric benchmark for evaluating foundation models.
2306.16636#33
CMATH: Can Your Language Model Pass Chinese Elementary School Math Test?
We present the Chinese Elementary School Math Word Problems (CMATH) dataset, comprising 1.7k elementary school-level math word problems with detailed annotations, sourced from actual Chinese workbooks and exams. This dataset aims to provide a benchmark tool for assessing the following question: to what grade level of elementary school math do the abilities of popular large language models (LLMs) correspond? We evaluate a variety of popular LLMs, including both commercial and open-source options, and discover that only GPT-4 achieves success (accuracy $\geq$ 60\%) across all six elementary school grades, while other models falter at different grade levels. Furthermore, we assess the robustness of several top-performing LLMs by augmenting the original problems in the CMATH dataset with distracting information. Our findings reveal that GPT-4 is able to maintain robustness, while other models fail. We anticipate that our study will expose limitations in LLMs' arithmetic and reasoning capabilities, and promote their ongoing development and advancement.
http://arxiv.org/pdf/2306.16636
Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230629
20230629
[ { "id": "2210.02414" }, { "id": "2305.08322" }, { "id": "2304.08177" } ]
2306.16803
33
among a growing number of varying distractor rewards [4]. We use the signal-to-noise ratio, $\mathrm{SNR} = \|\nabla_\theta V^\pi\|^2 / \mathbb{E}[\|\hat{\nabla}_\theta V^\pi - \nabla_\theta V^\pi\|^2]$, to quantify the quality of the different policy gradient estimators; a higher SNR indicates that we need fewer trajectories to estimate accurate policy gradients [33]. Previously, we showed that taking the reward as rewarding outcome encoding results in the lowest-variance policy gradients when using ground-truth contribution coefficients. In this section, we will argue that when learning the contribution coefficients, it is beneficial to use an encoding u of the underlying rewarding object, since this allows us to distinguish different rewarding objects when they have the same scalar reward value and allows for quick adaptation when the reward function, but not the environment dynamics, changes.
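As a concrete reading of the SNR definition above, here is a small NumPy sketch; the function names are illustrative and not from the paper's code.

```python
import numpy as np

def policy_gradient_snr(true_grad: np.ndarray, grad_samples: np.ndarray) -> float:
    """SNR = ||grad||^2 / E[||grad_hat - grad||^2], following the definition above.

    true_grad:    ground-truth policy gradient, shape (d,)
    grad_samples: estimator samples, shape (n_samples, d)
    """
    signal = np.sum(true_grad ** 2)
    noise = np.mean(np.sum((grad_samples - true_grad) ** 2, axis=1))
    return signal / noise

def snr_decibels(true_grad: np.ndarray, grad_samples: np.ndarray) -> float:
    # Figure 3C reports the SNR in decibels.
    return 10.0 * np.log10(policy_gradient_snr(true_grad, grad_samples))
```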
2306.16803#33
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
33
This tweet is about sharing tips on how to cite papers properly, giving credit where credit is due, providing support for evidence of your claim, and presenting an organized view of related work. It also humorously emphasizes the importance of citations by mentioning that "everyone gets a citation" in a playful manner. By the way, who is the woman in this meme? Any background information? The woman in the meme is Oprah Winfrey, a famous television host, author, and philanthropist. She is known for her influential book club and her ability to make people feel good about themselves. In this meme, she is seen enthusiastically endorsing the concept of citing sources in academic work.
2306.17107#33
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
34
We study two variants of COCOA, COCOA-reward which uses the reward identity for U , and COCOA-feature which acquires features of rewarding objects by learning a reward model r(s, a) and taking the penultimate network layer as U . We learn the contribution coefficients by approximating the hindsight distribution with a neural network classifier h(a | s, u′, l) that takes as input the current state s, resulting policy logits l, and rewarding outcome u′, and predicts the current action a (c.f. App. E for all experimental details). As HCA+ (c.f. Tab. 1) performs equally or better compared to HCA [1] in our experiments (c.f. App. F), we compare to HCA+ and several other baselines: (i) three classical policy gradient estimators, REINFORCE, Advantage and Q-critic, (ii) TrajCV [34], a state-of-the-art control variate method that uses hindsight information in its baseline, and (iii) HCA-return [1], a different HCA variant that uses the hindsight distribution conditioned on the return as an action-dependent baseline (c.f. Tab. 1). # 4.1 COCOA improves sample-efficiency due to favorable bias-variance trade-off.
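A minimal PyTorch sketch of the hindsight classifier h(a | s, u′, l) described in this chunk; the network sizes, input encodings, and loss function here are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class HindsightClassifier(nn.Module):
    """h(a | s, u', l): predict the taken action from the state s, a rewarding-outcome
    encoding u', and the policy logits l (a sketch of the classifier described above)."""

    def __init__(self, state_dim: int, outcome_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + outcome_dim + n_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, outcome, policy_logits):
        x = torch.cat([state, outcome, policy_logits], dim=-1)
        return self.net(x)  # unnormalized log-probabilities over actions

def hindsight_loss(model, state, outcome, policy_logits, action):
    # Standard cross-entropy on (s, u', l, a) tuples collected from trajectories,
    # where the rewarding outcome u' is taken from a later time step than a.
    logits = model(state, outcome, policy_logits)
    return nn.functional.cross_entropy(logits, action)
```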
2306.16803#34
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
34
[Image (tweet screenshot OCR): Huang @jbhuang0604: How to cite papers? Citing papers properly gives credit where credit's due, provides supporting evidence of your claim, and presents an organized view of related work. Sharing some tips I found useful. Meme text: YOU GET A CITATION! AND YOU GET A CITATION! AND YOU GET A CITATION! EVERYONE GETS A CITATION!] What is this tweet about? This tweet is about sharing tips on how to cite papers properly, giving credit where credit is due, providing support for evidence of your claim, and presenting an organized view of related work. It also humorously emphasizes the importance of citations by mentioning that "everyone gets a citation" in a playful manner. By the way, who is the woman in this meme? Any background information? The woman in the meme is Oprah Winfrey, a famous television host, author, and philanthropist. She is known for her influential book club and her ability to make people feel good about themselves. In this meme, she is seen enthusiastically endorsing the concept of citing sources in academic work. Figure 5: Transferred instruction-following capability of LLaVAR. # 5.1 Quantitative Analysis
2306.17107#34
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
35
# 4.1 COCOA improves sample-efficiency due to favorable bias-variance trade-off. To investigate the quality of the policy gradient estimators of COCOA, we consider the linear key-to- door environment with a distance of 100 between key and door. Our dynamic programming setup allows us to disentangle the performance of the estimators independent of the approximation quality of learned models by using ground truth contribution coefficients and (action-)value functions. The left panel of figure 3B reveals that in this ground truth setting, COCOA-reward almost immediately solves the task performing as well as the theoretically optimal Q-critic with a perfect action-value function. This is in contrast to HCA and HCA-return which perform barely better than REINFORCE, all failing to learn to consistently pick up the key in the given number of episodes. This result translates to the setting of learning the underlying models using neural networks. COCOA-reward and -feature outperform competing policy gradient estimators in terms of sample efficiency while HCA only improves over REINFORCE. Notably, having to learn the full action-value function leads to a less sample-efficient policy gradient estimator for the Q-critic.
2306.16803#35
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
36
In Figure 3C and D we leverage dynamic programming to compare to the ground-truth policy gradient. This analysis reveals that the improved performance of COCOA is reflected in a higher SNR compared to other estimators due to its favorable bias-variance trade-off. Fig. 12 in App. F indicates that COCOA maintains a superior SNR, even when using significantly biased contribution coefficients. As predicted by our theory in Section 3.3, HCA significantly underperforms compared to baselines due to its high variance caused by spurious contributions. In particular, the Markov state representation of the linear key-to-door environment contains the information of whether the key has been picked up. As a result, HCA always credits picking up the key or not, even for distractor rewards. These spurious contributions bury the useful learning signal of the treasure reward in noisy distractor rewards. HCA-return performs poorly as it is a biased gradient estimator, even when using the ground-truth hindsight distribution (c.f. Appendix F and L). Interestingly, the variance of COCOA is significantly lower compared to a state-of-the-art control variate method, TrajCV, pointing to a potential benefit
2306.16803#36
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
36
H: When will this movie be released? L: The movie "The Call of the Ottoman" starring Daniel Craig is expected to be released on February 14th, 2023. R: The movie will be released on January 13. H: Who starred in the movie? L: The movie stars Tom Cruise and Cobie Smulders. R: Tom Hanks starred in the movie. H: What is this movie about? L: The movie "A Man Called Ove" is a comedy- drama film directed by Hannes Holm. It tells the story of a grumpy old man named Ove, who is played by Rolf Lassgård. Ove is a widower who lives a solitary life in a small apartment. (...) R: The movie is about Tom Hanks, who is called Ottol, and it is an international bestseller based on a true story. The movie will be released on January 13th. H: Can you describe the given image? L: The image is a movie poster featuring a man and a cat. The man, who appears to be a character from the movie “The Ugly Truth,” (...) R: The image features a man standing in front of a house with a cat beside him. H: Who directed the film? L: The
2306.17107#36
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
37
Interestingly, the variance of COCOA is significantly lower than that of a state-of-the-art control variate method, TrajCV. This points to a potential benefit of the multiplicative interaction between contribution coefficients and rewards in the COCOA estimators, compared to the additive interaction of the control variates: the value functions used in TrajCV need to approximate the full average return, whereas COCOA can ignore rewards from the distractor subtask by multiplying them with a contribution coefficient of zero (a toy numerical sketch of this effect is given below).
2306.16803#37
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
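The following is a toy numerical sketch of the variance point made in the chunk above. It is not the paper's estimator or environment: the constant score-function term, the reward structure, and the hand-set contribution coefficients (one for the treasure reward, zero for the distractor rewards) are assumptions made purely for illustration, and the additive term is a plain mean baseline rather than TrajCV's value functions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_episodes = 10_000

# Toy return: a fixed treasure reward of 1 (influenced by the action)
# plus 20 stochastic distractor rewards of 0 or 1 (not influenced by it).
treasure = np.ones(n_episodes)
distractors = rng.integers(0, 2, size=(n_episodes, 20))
returns = treasure + distractors.sum(axis=1)

score = 1.0  # stand-in for grad log pi(a|s); constant so only the weighting matters

# Additive correction: score * (return - baseline) still carries distractor noise.
baseline = returns.mean()
reinforce_terms = score * (returns - baseline)

# Multiplicative weighting: score * sum_k w_k * r_k with w=1 for the treasure
# reward and w=0 for the distractor rewards removes that noise entirely.
cocoa_terms = score * (1.0 * treasure + 0.0 * distractors.sum(axis=1))

print("var, baseline-subtracted estimate:", reinforce_terms.var())
print("var, coefficient-weighted estimate:", cocoa_terms.var())
```

Running this prints a variance of roughly 5 for the baseline-subtracted terms and exactly 0 for the coefficient-weighted ones.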
2306.16803
38
# 4.2 COCOA enables long-term credit assignment by disentangling rewarding outcomes. The linear key-to-door environment allows us to parametrically increase the difficulty of the long-term credit assignment problem by increasing the distance between the key and door and thereby increasing the variance due to the distractor task [4]. Figure 1B reveals that as this distance increases, performance measured as the average fraction of treasure collected over a fixed number of episodes [Figure 4 image residue: panels A-C of the task interleaving environment; legend: COCOA-reward, COCOA-feature, HCA+, Q-critic, Advantage, REINFORCE, TrajCV; contexts 1-5 with query, answer, and reward rooms; fraction of correct choices over 2k-10k episodes.]
2306.16803#38
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
38
Figure 6: Qualitative Comparison. We color the human inputs (H) in blue, LLaVA’s responses (L) in pink, and LLaVAR’s responses (R) in green. Both are 336²-based models. We replace the excessively long, hallucinated response with (...). results of baseline models and our models in Table 2. Note that InstructBLIP includes OCR-VQA in its training sets, making it incomparable with our settings. In both resolutions and all four datasets, LLaVAR substantially improves the LLaVA baseline, demonstrating our collected data can bring robust improvement. Moreover, the improvement is more significant in the 336² resolution compared to 224², indicating the collected data might bring larger improvement in even higher resolutions. Our best model, 336²-based LLaVAR, performs best in 3 out of 4 datasets evaluated. Note that this is not a fair comparison. Some key different factors include different language decoders, different resolutions, and different magnitudes of text-image training data.
2306.17107#38
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
39
Figure 4: COCOA improves performance by disentangling subtasks. (A) Schematic representation of the task interleaving environment where colored borders indicate the context of a room. (B) Performance of COCOA and baselines with learned contribution coefficients or value functions, measured as the fraction of correct choices. (C) Visualization of the contribution coefficient magnitudes of each query room on reward rooms for COCOA (top) and HCA (bottom), c.f. App. E.7. drops for all baselines including HCA but remains relatively stable for the COCOA estimators. Hung et al. [4] showed that the SNR of REINFORCE decreases inversely proportionally to the growing distance between key and door. Figure 1B shows that HCA and all baselines follow this trend, whereas the COCOA estimators maintain a robust SNR.
2306.16803#39
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
39
Ablation Study We further report the result of the ablation study in Table 3 and Figure 4. (i) Based on variants (2) and (3), we find that collected data can benefit the pretraining stage (R_pretraining) and finetuning stage (R_finetuning) separately while being complementary to each other in most cases⁹. More importantly, enhancing the pretraining stage alone achieves the second-best overall performance, indicating the potential to boost textual detail understanding without dependence on GPT-4-generated high-quality data. (ii) Using pretraining images, we obtain C_pretraining by replacing the pretraining instructions with questions & captions, the same pattern as LLaVA. As variant (4) is not as good as (2), we can conclude that OCR is more advantageous than captions. (iii) We further validate the value of GPT-4 generated data by generating noisy finetuning data (N_finetuning), similar to pretraining data. Variant (5) achieves comparable accuracy to variant (3). However, as shown in Figure 4, such noisy finetuning data hurts the instruction-following capability: (5) responds with all recognized texts while ignoring the questions. Overall, our ablation study confirms the necessity of our pipeline.
2306.17107#39
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
40
We can explain the qualitative difference between COCOA and other methods by observing that the key-to-door task consists of two distinct subtasks: picking up the key to get the treasure, and collecting apples. COCOA can quickly learn that actions relevant for one task do not influence rewarding outcomes in the other task, and hence output a contribution coefficient equal to zero for those combinations. Value functions, in contrast, estimate the expected sum of future rewards, thereby mixing the rewards of both tasks. When increasing the variance of the return in the distractor task by increasing the number of stochastic distractor rewards, estimating the value functions becomes harder, whereas estimating the contribution coefficients between state-action pairs and distinct rewarding objects remains of equal difficulty, showcasing the power of disentangling rewarding outcomes.
2306.16803#40
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
40
ScienceQA Starting from our pretrained LLaVAR (336²-based, without finetuning), we also report the results of further finetuning on the ScienceQA dataset [39] in Table 4, which is a multimodal multi-choice QA dataset covering diverse domains. Our motivation is that some images in this [Footnote 9: Since the metric only considers the recall, it might favor variants (2), (4), and (5) due to their longer outputs.] [Figure 7 plot residue: accuracy vs. pixels (vertical); curves for LLaVAR and LLaVA at 336² (left) and 224² (right).] Figure 7: Case study of the recognizable font size. We plot the results for the 336²-based models on the left and the 224²-based models on the right.
2306.17107#40
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
41
To further showcase the power of disentangling subtasks, we consider a simplified version of the task interleaving environment of Mesnard et al. [3] (c.f. Fig. 4A, App. E.7). Here, the agent is faced with a sequence of contextual bandit tasks, where the reward for a correct decision is given at a later point in time, together with an observation of the relevant context. The main credit assignment difficulty is to relate the reward and contextual observation to the correct previous contextual bandit task. Note that the variance in the return is now caused by the stochastic policy, imperfectly solving future tasks, and by stochastic state transitions, in contrast to the linear key-to-door environment where the variance is caused by stochastic distractor rewards. Figure 4B shows that our COCOA algorithms outperform all baselines. The learned contribution coefficients of COCOA-reward accurately capture that actions in one context only contribute to rewards in the same context, as opposed to HCA that fails to disentangle the contributions (c.f. Figure 4C); a toy sketch of such an interleaved bandit environment is given below. # 4.3 Learned credit assignment features allow to disentangle aliased rewards
2306.16803#41
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
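To make the structure of such an interleaved contextual-bandit task concrete, here is a minimal Python sketch of an environment in this spirit. It is an assumption-laden toy, not the environment of Mesnard et al. [3] or the one used in the paper: the number of contexts, the correctness rule, and the delayed payout mechanism are all invented for illustration.

```python
import random

class ToyTaskInterleaving:
    """Queries are answered now; the reward for each answer arrives later,
    in a 'reward room' that re-shows the context of the original query."""

    def __init__(self, n_contexts=3, n_actions=2, seed=0):
        self.rng = random.Random(seed)
        self.n_contexts, self.n_actions = n_contexts, n_actions
        self.pending = []  # (context, was_correct) pairs awaiting payout
        self.current_context = None

    def new_query(self):
        """Show a query room with a random context."""
        self.current_context = self.rng.randrange(self.n_contexts)
        return self.current_context

    def answer(self, action):
        """Record the answer; correctness is an arbitrary per-context rule."""
        correct = action == (self.current_context % self.n_actions)
        self.pending.append((self.current_context, correct))

    def reward_room(self):
        """Pay out a randomly chosen pending answer together with its context."""
        if not self.pending:
            return None, 0.0
        context, correct = self.pending.pop(self.rng.randrange(len(self.pending)))
        return context, 1.0 if correct else 0.0

# Example rollout with a random policy.
env = ToyTaskInterleaving()
for _ in range(5):
    env.new_query()
    env.answer(random.randrange(env.n_actions))
for _ in range(5):
    print(env.reward_room())
```

The credit assignment problem is exactly the one described above: each reward delivered in `reward_room` must be attributed to the earlier query with the matching context.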
2306.17107
41
Figure 7: Case study of the recognizable font size. We plot the results for the 336²-based models on the left and the 224²-based models on the right. dataset contain text descriptions and tables that require textual understanding within images. The LLaVAR model finetuned on ScienceQA achieves an average accuracy of 91.42%, better than LLaVA (90.92%), while the biggest improvement comes from natural science questions (+1.43%).
2306.17107#41
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
42
# 4.3 Learned credit assignment features allow to disentangle aliased rewards For COCOA-reward we use the scalar reward value to identify rewarding outcomes in the hindsight distribution, i.e. U = R. In cases where multiple rewarding outcomes yield an identical scalar reward value, the hindsight distribution cannot distinguish between them and has to estimate a common hindsight [Figure 5 plot residue: treasure reward collected vs. episode; legend: COCOA-reward, COCOA-feature, HCA+, Q-critic, Advantage, REINFORCE.] Figure 5: COCOA-feature is robust to reward aliasing. On a version of the linear key-to-door environment where one of the distractor reward values has the same magnitude as the treasure reward, COCOA-reward can no longer distinguish between the distractor and treasure reward and as a result its performance decreases. COCOA-feature is robust to this manipulation since it relies on learned features to distinguish rewarding objects. A toy illustration of this reward-aliasing effect is given below.
2306.16803#42
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
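The following toy Python sketch illustrates the reward-aliasing effect described in the chunk above. It is not the paper's estimator: the two-action environment, the uniform policy, and the use of the raw ratio p(a | U)/π(a) as a stand-in for a contribution-style quantity are all assumptions made for illustration; the paper defines its contribution coefficients precisely.

```python
import numpy as np

rng = np.random.default_rng(0)
pi = {0: 0.5, 1: 0.5}  # uniform policy over two actions
n = 100_000

actions = rng.integers(0, 2, size=n)
# Action 0 reaches a "treasure" object, action 1 a "distractor" object;
# both yield the same scalar reward of 1 (reward aliasing).
reward = np.ones(n)
feature = np.where(actions == 0, "treasure", "distractor")

def hindsight_ratio(u_values, u_query, a_query):
    """Empirical p(a | U=u) / pi(a), a proxy for a contribution-style ratio."""
    mask = u_values == u_query
    p_a_given_u = np.mean(actions[mask] == a_query)
    return p_a_given_u / pi[a_query]

print("U = reward:  ratio for a=0 given r=1:",
      hindsight_ratio(reward, 1.0, 0))          # ~1.0 -> no credit signal
print("U = feature: ratio for a=0 given 'treasure':",
      hindsight_ratio(feature, "treasure", 0))  # ~2.0 -> clear credit signal
```

With U equal to the scalar reward, both objects collapse onto the same conditioning value and the ratio stays at one, so the hindsight model carries no information about which action produced the reward; conditioning on a feature that distinguishes the objects restores that information, which is the motivation for COCOA-feature.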
2306.17107
42
GPT-4-based instruction-following evaluation Following [3], we report the GPT-4 evaluation results on instruction-following questions in Table 5. (i) Natural Images: 90 questions based on 30 COCO validation images, including three aspects: conversation, detail description, and complex reasoning. This aims at testing whether our collected data will hurt, maintain, or improve the model’s performance on natural images. First of all, using a higher resolution brings improvement (+2.9) in the performance of detail description, which is intuitive. Furthermore, LLaVAR achieves a better trade-off and increases the performance of all three aspects (+1.6 on average). (ii) Text-Rich Images: Similar to collecting the finetuning data, we leverage 50 text-rich images from LAION to collect instruction-following questions based on OCR results and human annotation. We then collect responses from our trained model and use GPT-4 to calculate the relative score w.r.t. GPT-4 responses (a small sketch of this relative-score computation is given below). We provide an example in Table 6 and add this as an extra dimension “Read” to the GPT-4-based evaluation Table 5. Our model demonstrates a more significant (+3.8) improvement on this axis. # 5.2 Qualitative Analysis
2306.17107#42
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
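A minimal sketch of how such a GPT-4-judged relative score could be tallied, assuming the judge's numeric ratings for the candidate model and for the GPT-4 reference answers have already been obtained and parsed (the judging prompt, the rating scale, and the parsing step are not shown and are assumptions, not the paper's exact protocol):

```python
def relative_scores(candidate_ratings, reference_ratings):
    """Per-question candidate/reference ratio (in percent) and its mean."""
    assert len(candidate_ratings) == len(reference_ratings) > 0
    ratios = [100.0 * c / r for c, r in zip(candidate_ratings, reference_ratings)]
    return ratios, sum(ratios) / len(ratios)

# Hypothetical judge ratings (1-10) for three "Read" questions.
per_question, mean_relative = relative_scores([7, 8, 6], [9, 9, 8])
print(per_question, mean_relative)
```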
2306.16803
43
probability, making it impossible to disentangle the tasks and hence rendering learning of the contribution coefficients potentially more difficult. In contrast, COCOA-feature learns hindsight features of rewarding objects that are predictive of the reward. Even when multiple rewarding objects lead to an identical scalar reward value their corresponding features are likely different, allowing COCOA-feature to disentangle the rewarding outcomes. In Fig. 5, we test this reward aliasing setting experimentally and slightly modify the linear key-to-door environment by giving the treasure reward the same value as one of the two possible values of the stochastic distractor rewards. As expected, COCOA-feature is robust to reward aliasing, continuing to perform well on the task of picking up the treasure while performance of COCOA-reward noticeably suffers. Note that the performance of all methods has slightly decreased compared to Fig. 3, as the magnitude of the treasure reward is now smaller relative to the variance of the distractor rewards, resulting in a worse SNR for all methods. # 5 Discussion We present a theory for model-based credit assignment compatible with discrete actions and show in a comprehensive theoretical and experimental analysis that this yields a powerful policy gradient estimator, enabling long-term credit assignment by disentangling rewarding outcomes.
2306.16803#43
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
43
# 5.2 Qualitative Analysis We use a recent movie poster 10 to demonstrate the difference between LLaVA and LLaVAR re- garding interaction with humans based on text-rich images. LLaVA, without augmenting the textual understanding within images, suffers from hallucination while answering these questions. Some mentioned movies, like “A Man Called Ove” and “The Ugly Truth”, are real movies, suggesting the language decoder is hallucinating its internal knowledge while the visual encoder cannot encode helpful information. Alternatively, LLaVAR can correctly answer many of the provided questions with faithful information, which is clearly grounded in the image. However, there are still some limitations, such as the spelling error “ottol”. Also, the final question asks for information that is not observable from the given poster, where an expected response should express such uncertainty instead of giving concrete answers. However, both models fail to answer it correctly. # 5.3 Case Study: Recognizable Font Size
2306.17107#43
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
44
Building upon HCA [1], we focus on amortizing the estimation of the contribution coefficients in an inverse dynamics model, pπ(a | s, u′). The quality of this model is crucial for obtaining low-bias gradient estimates, but it is restricted to learning from on-policy data, and from rewarding observations in the case of u = r. Scaling these inverse models to complex environments will potentially exacerbate this tension, especially in sparse reward settings (a minimal sketch of such an amortized inverse model is given below). A promising avenue for future work is to leverage forward dynamics models and directly estimate contribution coefficients from synthetic trajectories. While learning a forward model is a difficult problem in its own right, its policy independence increases the data available for learning it. This would result in an algorithm close in spirit to Stochastic Value Gradients [20] and Dreamer [21–23] with the crucial advance that it enables model-based credit assignment on discrete actions. Another possibility to enable learning from non-rewarding observations is to learn a generative model that can recombine inverse models based on state representations into reward contributions (c.f. App. H).
2306.16803#44
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
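A minimal PyTorch sketch of the amortization idea mentioned above: a classifier over discrete actions conditioned on the current state and a hindsight outcome u, trained with cross-entropy on on-policy tuples. The architecture, the input encoding, and the way the learned probabilities are turned into a contribution-style ratio are all assumptions for illustration; the paper's exact coefficient definition and training setup may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HindsightActionModel(nn.Module):
    """Amortized inverse model p(a | s, u) over discrete actions."""

    def __init__(self, state_dim: int, u_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + u_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, u], dim=-1))  # action logits

def hindsight_loss(model, states, us, actions):
    """Cross-entropy on on-policy tuples (s_t, a_t, u_t'): maximizes log p(a_t | s_t, u_t')."""
    return F.cross_entropy(model(states, us), actions)

def hindsight_ratio(model, policy_probs, state, u, action: int):
    """p(a | s, u) / pi(a | s): one plausible ingredient of a contribution coefficient."""
    with torch.no_grad():
        p = torch.softmax(model(state, u), dim=-1)[..., action]
    return p / policy_probs[..., action]
```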
2306.17107
44
# 5.3 Case Study: Recognizable Font Size By scaling the poster in Figure 6, we provide a case study on the recognizable font size on top of the question, “When will this movie be released?”. We calculate the number of vertical pixels for the ground truth “January 13th” in the scaled posters and estimate the accuracy for each scale based on ten trials (Fig. 7); a minimal sketch of this scaling procedure is given below. (i) For our model LLaVAR, it can no longer recognize the ground truth when its vertical size is less than 6 pixels. Meanwhile, the 336²-based version provides better robustness as it works consistently well for any scale greater than 6 pixels. (ii) For the baseline model LLaVA, surprisingly, it achieves a certain level of correctness while the ground truth is between 8 and 10 pixels, with poor performance on larger scales (e.g., 14 pixels). This suggests that LLaVA, without specific training to recognize texts, still recognizes texts at specific scales with particular contexts. However, the lack of robustness prevents it from better performance in understanding text-rich images. [Footnote 10: https://www.imdb.com/title/tt7405458/] # 5.4 Transferred Instruction-following Capability
2306.17107#44
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
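A minimal sketch of the scaling procedure behind this case study, using Pillow. The file name, the assumed pixel height of the ground-truth text at full resolution, and the chosen scale factors are illustrative assumptions; the actual poster, scales, and model-querying loop used in the paper are not specified here.

```python
from PIL import Image

POSTER_PATH = "poster.jpg"         # hypothetical local copy of the movie poster
TEXT_HEIGHT_FULL_RES = 24          # assumed vertical pixels of "January 13th" at full size

poster = Image.open(POSTER_PATH)
for scale in (1.0, 0.75, 0.5, 0.375, 0.25):
    w, h = poster.size
    scaled = poster.resize((max(1, int(w * scale)), max(1, int(h * scale))))
    text_px = TEXT_HEIGHT_FULL_RES * scale
    # For each scale, the model would be queried ten times on `scaled` with
    # "When will this movie be released?" and the accuracy recorded.
    print(f"scale={scale}: ground-truth text height ~{text_px:.1f} px")
```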
2306.16803
45
Related work has explored the credit assignment problem through the lens of transporting rewards or value estimates towards previous states to bridge long-term dependencies [4, 5, 32, 35–41]. This approach is compatible with existing and well-established policy gradient estimators but determining how to redistribute rewards has relied on heuristic contribution analyses, such as via the access of memory states [4], linear decompositions of rewards [32, 35–39] or learned sequence models [5, 40, 41]. Leveraging our unbiased contribution analysis framework to reach more optimal reward transport is a promising direction for future research. While we have demonstrated that contribution coefficients with respect to states as employed by HCA suffer from spurious contributions, any reward feature encoding that is fully predictive of the reward can in principle suffer from a similar problem in the case where each environment state has a unique reward value. In practice, this issue might occur in environments with continuous rewards. A potential remedy in this situation is to assume that the underlying reward distribution is stochastic, smoothing the contribution coefficients as now multiple states could have led to the same reward. This lowers the variance of the gradient estimator as we elaborate in App. G.
2306.16803#45
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
45
# 5.4 Transferred Instruction-following Capability According to the dataset statistics (Table 1) and visualization (Figure 10), our collected instruction-following data is not as diverse and substantial as LLaVA's. This can be attributed to the relatively limited information given to GPT-4 compared to five different human-written captions used in LLaVA. The content of text-rich images is also less diverse than natural images. While using more complex in-context examples can definitely stimulate generating more complicated instruction-following examples, it can also multiply the cost. In Figure 5, we demonstrate the transferred instruction-following capability of LLaVAR, potentially from both the LLaVA data and the Vicuna backbone. While the extra data we add mainly focuses on understanding the visible texts within images, LLaVAR manages to build its reasoning, writing, and elaboration skills on top of its text recognition capability in an end-to-end manner. This allows users to interact with various online content based on simple screenshots. # 6 Conclusion
2306.17107#45
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
46
Finally, we note that our contribution coefficients are closely connected to causality theory [42] where the contribution coefficients correspond to performing Do-interventions on the causal graph to estimate their effect on future rewards (c.f. App. I). Within causality theory, counterfactual reasoning goes a step further by inferring the external, uncontrollable environment influences and considering the consequences of counterfactual actions given that all external influences remain the same [3, 20, 42–44]. Extending COCOA towards this more advanced counterfactual setting by building upon recent work [3, 43] is an exciting direction for future research (c.f. App. I). Concluding remarks. By overcoming the failure mode of spurious contributions in HCA, we have presented here a comprehensive theory on how to leverage model information for credit assignment, compatible with discrete action spaces. COCOA-reward and COCOA-feature are promising first algorithms in this framework, opening the way towards sample-efficient reinforcement learning by model-based credit assignment. # 6 Acknowledgements
2306.16803#46
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
46
# 6 Conclusion In this work, we enhance the capability of visual instruction-tuned models to read text in images. Using text-rich images from the LAION dataset, we collect 422K noisy instruction-following examples using OCR results only, and 16K high-quality instruction-following examples based on text-only GPT-4. These two sets of data are used to augment the pretraining stage and the finetuning stage of LLaVA, respectively. Our model, LLaVAR, demonstrates superior performance in understanding text within images and following human instructions, on both prior benchmarks and real-world online content. Moreover, our analysis shows that the same augmented data is more effective at a higher input resolution. Also, augmenting pretraining with noisy instruction-following examples boosts model performance even without prompting GPT-4. For future work, we encourage exploration of (i) better image selection criteria or domain reweighting strategy [46] and (ii) more data-efficient and cost-efficient ways to enhance multimodal instruction-following datasets. # References
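As a rough illustration of how OCR output alone can be turned into the noisy instruction-following examples mentioned above, the sketch below pairs a recognized transcript with a simple reading instruction. The instruction templates, the `ocr_words` example, the confidence threshold, and the conversation schema are assumptions made for this sketch; they are not the exact templates or format used to build the 422K examples.

```python
import json
import random

# Hypothetical OCR output for one text-rich image: (word, confidence) pairs.
ocr_words = [("GRAND", 0.98), ("OPENING", 0.95), ("JUNE", 0.91), ("24", 0.88)]

# Assumed instruction templates that ask the model to read the text in the image.
templates = [
    "Identify the text visible in this image.",
    "What words are written in the image?",
]

def make_noisy_example(image_id, words, min_conf=0.8):
    """Build one single-turn instruction-following example from OCR results only."""
    transcript = " ".join(word for word, conf in words if conf >= min_conf)
    return {
        "id": image_id,
        "conversations": [
            {"from": "human", "value": random.choice(templates)},
            {"from": "gpt", "value": transcript},  # noisy target: the raw OCR transcript
        ],
    }

print(json.dumps(make_noisy_example("laion_example_0001", ocr_words), indent=2))
```

Because the target answer is just the OCR transcript, such examples are cheap to produce at scale but noisy, which is why they are used to augment pretraining rather than finetuning.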
2306.17107#46
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
47
# 6 Acknowledgements We thank Angelika Steger, Yassir Akram, Ida Momennejad, Blake Richards, Matt Botvinick and Joel Veness for discussions and feedback. Simon Schug is supported by the Swiss National Science Foundation (PZ00P3_186027). Seijin Kobayashi is supported by the Swiss National Science Foundation (CRSII5_173721). Simon Schug would like to kindly thank the TPU Research Cloud (TRC) program for providing access to Cloud TPUs from Google. # References [1] Anna Harutyunyan, Will Dabney, Thomas Mesnard, Mohammad Gheshlaghi Azar, Bilal Piot, Nicolas Heess, Hado P. van Hasselt, Gregory Wayne, Satinder Singh, Doina Precup, and Remi Munos. Hindsight Credit Assignment. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 12488–12497. Curran Associates, Inc., 2019. [2] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning, second edition: An Introduction. MIT Press, November 2018.
2306.16803#47
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
47
# References [1] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022. [2] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models, 2022.
2306.17107#47
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
48
[2] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning, second edition: An Introduction. MIT Press, November 2018. [3] Thomas Mesnard, Théophane Weber, Fabio Viola, Shantanu Thakoor, Alaa Saade, Anna Harutyunyan, Will Dabney, Tom Stepleton, Nicolas Heess, Arthur Guez, Marcus Hutter, Lars Buesing, and Rémi Munos. Counterfactual Credit Assignment in Model-Free Reinforcement Learning. In Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021. [4] Chia-Chun Hung, Timothy Lillicrap, Josh Abramson, Yan Wu, Mehdi Mirza, Federico Carnevale, Arun Ahuja, and Greg Wayne. Optimizing agent behavior over long time scales by transporting value. Nature Communications, 10(1):5223, November 2019. Number: 1 Publisher: Nature Publishing Group.
2306.16803#48
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
48
[3] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023. [4] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023. [5] Chunyuan Li. Large multimodal models: Notes on CVPR 2023 tutorial. ArXiv, abs/2306.14895, 2023. [6] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020.
2306.17107#48
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
49
[5] Jose A. Arjona-Medina, Michael Gillhofer, Michael Widrich, Thomas Unterthiner, Johannes Brandstetter, and Sepp Hochreiter. RUDDER: Return Decomposition for Delayed Rewards. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 13566–13577. Curran Associates, Inc., 2019. [6] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, May 1992. [7] Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning. Journal of Machine Learning Research, 5(Nov): 1471–1530, 2004. [8] Théophane Weber, Nicolas Heess, Lars Buesing, and David Silver. Credit Assignment Techniques in Stochastic Computation Graphs. In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, pages 2650–2660. PMLR, April 2019. ISSN: 2640-3498.
2306.16803#49
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
49
[7] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021. [8] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts, 2021. [9] Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollár. Microsoft coco: Common objects in context, 2015. [10] Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui Chen, Chunyuan Li, Cheng-lin Liu, Lianwen Jin, and Xiang Bai. On the hidden mystery of OCR in large multimodal models, 2023.
2306.17107#49
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
50
[9] Jonathan Baxter and Peter L. Bartlett. Infinite-Horizon Policy-Gradient Estimation. Journal of Artificial Intelligence Research, 15:319–350, November 2001. [10] Philip Thomas. Bias in Natural Actor-Critic Algorithms. In Proceedings of the 31st International Conference on Machine Learning, pages 441–448. PMLR, January 2014. ISSN: 1938-7228. [11] Sham Kakade. Optimizing Average Reward Using Discounted Rewards. In David Helmbold and Bob Williamson, editors, Computational Learning Theory, Lecture Notes in Computer Science, pages 605–615, Berlin, Heidelberg, 2001. Springer. [12] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-Dimensional Continuous Control Using Generalized Advantage Estimation. In International Conference on Learning Representations. arXiv, 2016. arXiv:1506.02438 [cs]. [13] Peter Marbach and John Tsitsiklis. Approximate Gradient Methods in Policy-Space Optimization of Markov Reward Processes. Discrete Event Dynamic Systems, page 38, 2003.
2306.16803#50
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
50
[11] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. Llama-adapter v2: Parameter-efficient visual instruction model, 2023. [12] OpenAI. Gpt-4 technical report, 2023. [13] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402, 2022.
2306.17107#50
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
51
[14] OpenAI, Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique P. d. O. Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, and Susan Zhang. Dota 2 with Large Scale Deep Reinforcement Learning, December 2019. arXiv:1912.06680 [cs, stat]. [15] OpenAI, Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, Jonas Schneider, Nikolas Tezak, Jerry Tworek, Peter Welinder, Lilian Weng, Qiming Yuan, Wojciech Zaremba, and Lei Zhang. Solving Rubik’s Cube with a Robot Hand, October 2019. arXiv:1910.07113 [cs, stat].
2306.16803#51
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
51
[14] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, and Daniel Khashabi. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks, 2022.
2306.17107#51
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
52
[16] Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, and David Silver. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350–354, November 2019. Number: 7782 Publisher: Nature Publishing Group.
2306.16803#52
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
52
[15] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions, 2022. [16] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. [17] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
2306.17107#52
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
53
[17] Richard S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160–163, July 1991. [18] Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion. In 32nd Conference on Neural Information Processing Systems. arXiv, 2018. arXiv:1807.01675 [cs, stat]. [19] Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. Model-Ensemble Trust-Region Policy Optimization. 2018. [20] Nicolas Heess, Gregory Wayne, David Silver, Timothy Lillicrap, Tom Erez, and Yuval Tassa. Learning Continuous Control Policies by Stochastic Value Gradients. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. [21] Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to Control: Learning Behaviors by Latent Imagination. In International Conference on Learning Representations, March 2020. arXiv:1912.01603 [cs].
2306.16803#53
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
53
[18] Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. Baize: An open-source chat model with parameter-efficient tuning on self-chat data, 2023. [19] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023. [20] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. G-eval: Nlg evaluation using gpt-4 with better human alignment, 2023. [21] Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. Large language models are not fair evaluators, 2023.
2306.17107#53
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
54
[22] Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering Atari with Discrete World Models. In International Conference on Learning Representations, 2021. arXiv:2010.02193 [cs, stat]. [23] Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering Diverse Domains through World Models, January 2023. arXiv:2301.04104 [cs, stat]. [24] Mikael Henaff, William F. Whitney, and Yann LeCun. Model-Based Planning with Discrete and Continuous Actions, April 2018. arXiv:1705.07177 [cs]. [25] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of The 33rd International Conference on Machine Learning, pages 1928–1937. PMLR, June 2016. ISSN: 1938-7228.
2306.16803#54
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
54
[22] Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback, 2023. [23] Hang Zhang, Xin Li, and Lidong Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023. [24] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models, 2023. [25] Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jia-Bin Huang, Jinglin Liu, Yixiang Ren, Zhou Zhao, and Shinji Watanabe. Audiogpt: Understanding and generating speech, music, sound, and talking head. ArXiv, abs/2304.12995, 2023.
2306.17107#54
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
55
[26] Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy Gradient Methods for Reinforcement Learning with Function Approximation. In Advances in Neural Information Processing Systems 12, 1999. [27] Chris Nota and Philip S Thomas. Is the Policy Gradient a Gradient? In Proc. of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020), page 9, 2020. [28] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-Dimensional Continuous Control Using Generalized Advantage Estimation, October 2018. arXiv:1506.02438 [cs]. [29] Vyacheslav Alipov, Riley Simmons-Edler, Nikita Putintsev, Pavel Kalinin, and Dmitry Vetrov. Towards Practical Credit Assignment for Deep Reinforcement Learning, February 2022. arXiv:2106.04499 [cs]. [30] Peter Dayan. Improving Generalization for Temporal Difference Learning: The Successor Representation. Neural Computation, 5(4):613–624, July 1993. Conference Name: Neural Computation.
2306.16803#55
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
55
[26] Dong Zhang, Shimin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. Speechgpt: Empowering large language models with intrinsic cross-modal conversational abilities, 2023. [27] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models, 2023. [28] Renrui Zhang, Jiaming Han, Chris Liu, Peng Gao, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, and Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention, 2023. [29] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qian Qi, Ji Zhang, and Fei Huang. mplug-owl: Modularization empowers large language models with multimodality, 2023.
2306.17107#55
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
56
[31] Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep Successor Reinforcement Learning, June 2016. arXiv:1606.02396 [cs, stat]. [32] David Raposo, Sam Ritter, Adam Santoro, Greg Wayne, Theophane Weber, Matt Botvinick, Hado van Hasselt, and Francis Song. Synthetic Returns for Long-Term Credit Assignment. arXiv:2102.12425 [cs], February 2021. [33] John W. Roberts and Russ Tedrake. Signal-to-Noise Ratio Analysis of Policy Gradient Algorithms. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1361–1368. Curran Associates, Inc., 2009. [34] Ching-An Cheng, Xinyan Yan, and Byron Boots. Trajectory-wise Control Variates for Variance Reduction in Policy Gradient Methods. In Proceedings of the Conference on Robot Learning, pages 1379–1394. PMLR, May 2020. ISSN: 2640-3498.
2306.16803#56
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
56
[30] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning, 2023. [31] Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, and Anirban Chakraborty. Ocr-vqa: Visual question answering by reading text in images. In 2019 international conference on document analysis and recognition (ICDAR), pages 947–952. IEEE, 2019. [32] Geewook Kim, Hodong Lee, Daehee Kim, Haeji Jung, Sanghee Park, Yoonsik Kim, Sangdoo Yun, Taeho Kil, Bado Lee, and Seunghyun Park. Cream: Visually-situated natural language understanding with contrastive reading model and frozen large language models, 2023.
2306.17107#56
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
57
[35] Zhizhou Ren, Ruihan Guo, Yuan Zhou, and Jian Peng. Learning Long-Term Reward Redistribution via Randomized Return Decomposition. January 2022. [36] Yonathan Efroni, Nadav Merlis, and Shie Mannor. Reinforcement Learning with Trajectory Feedback. In The Thirty-Fifth AAAI Conference on Artificial Intelligence. arXiv, March 2021. [37] Minah Seo, Luiz Felipe Vecchietti, Sangkeum Lee, and Dongsoo Har. Rewards Prediction-Based Credit Assignment for Reinforcement Learning With Sparse Binary Rewards. IEEE ACCESS, 7:118776–118791, 2019. Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC. [38] Markel Sanz Ausin, Hamoon Azizsoltani, Song Ju, Yeo Jin Kim, and Min Chi. InferNet for Delayed Reinforcement Tasks: Addressing the Temporal Credit Assignment Problem. In 2021 IEEE International Conference on Big Data (Big Data), pages 1337–1348, December 2021.
2306.16803#57
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
57
[33] Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, and Furu Wei. Dit: Self-supervised pre-training for document image transformer. Proceedings of the 30th ACM International Conference on Multimedia, Oct 2022. doi: 10.1145/3503161.3547911. URL http://dx.doi.org/10.1145/3503161.3547911. [34] Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis. Evaluation of deep convolutional nets for document image classification and retrieval, 2015. [35] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models, 2023.
2306.17107#57
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
58
[39] Hamoon Azizsoltani, Yeo Jin Kim, Markel Sanz Ausin, Tiffany Barnes, and Min Chi. Unobserved Is Not Equal to Non-existent: Using Gaussian Processes to Infer Immediate Rewards Across Contexts. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, pages 1974–1980, 2019. [40] Vihang P. Patil, Markus Hofmarcher, Marius-Constantin Dinu, Matthias Dorfer, Patrick M. Blies, Johannes Brandstetter, Jose A. Arjona-Medina, and Sepp Hochreiter. Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution. Proceedings of Machine Learning Research, 162, 2022. arXiv: 2009.14108. [41] Johan Ferret, Raphaël Marinier, Matthieu Geist, and Olivier Pietquin. Self-Attentional Credit Assignment for Transfer in Reinforcement Learning. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pages 2655–2661, July 2020. arXiv:1907.08027 [cs]. [42] Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Elements of causal inference: foundations and learning algorithms. Adaptive Computation and Machine Learning. November 2018.
2306.16803#58
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
58
[36] Anas Awadalla, Irena Gao, Joshua Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo, March 2023. URL https://doi.org/10.5281/zenodo.7733589. [37] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.
2306.17107#58
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
59
[43] Lars Buesing, Theophane Weber, Yori Zwols, Sebastien Racaniere, Arthur Guez, Jean-Baptiste Lespiau, and Nicolas Heess. Woulda, Coulda, Shoulda: Counterfactually-Guided Policy Search. In International Conference on Learning Representations, 2019. arXiv: 1811.06272. [44] Ioana Bica, Ahmed M. Alaa, James Jordon, and Mihaela van der Schaar. Estimating Counterfactual Treatment Outcomes over Time Through Adversarially Balanced Representations. In International Conference on Learning Representations. arXiv, February 2020. arXiv:2002.04083 [cs, stat]. [45] Pushi Zhang, Li Zhao, Guoqing Liu, Jiang Bian, Minlie Huang, Tao Qin, and Tie-Yan Liu. Independence-aware Advantage Estimation. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, pages 3349–3355, Montreal, Canada, August 2021. International Joint Conferences on Artificial Intelligence Organization. [46] Kenny Young. Variance Reduced Advantage Estimation with $\delta$ Hindsight Credit Assignment. arXiv:1911.08362 [cs], September 2020.
2306.16803#59
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
59
[38] Yushi Hu, Hang Hua, Zhengyuan Yang, Weijia Shi, Noah A. Smith, and Jiebo Luo. Promptcap: Prompt-guided task-aware image captioning, 2022. [39] Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering, 2022. [40] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models, 2023. [41] Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. ArXiv, abs/2304.09842, 2023. [42] Michael Tschannen, Basil Mustafa, and Neil Houlsby. Clippo: Image-and-language understanding from pixels only, 2022.
2306.17107#59
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
60
[47] Michel Ma and Pierre-Luc Bacon. Counterfactual Policy Evaluation and the Conditional Monte Carlo Method. In Offline Reinforcement Learning Workshop, NeurIPS, 2020. [48] Paul Bratley, Bennett L. Fox, and Linus E. Schrage. A Guide to Simulation. Springer, New York, NY, 1987. [49] J. M. Hammersley. Conditional Monte Carlo. Journal of the ACM, 3(2):73–76, April 1956. [50] Dilip Arumugam, Peter Henderson, and Pierre-Luc Bacon. An Information-Theoretic Perspective on Credit Assignment in Reinforcement Learning. arXiv:2103.06224 [cs, math], March 2021. [51] Kenny Young. Hindsight Network Credit Assignment: Efficient Credit Assignment in Networks of Discrete Stochastic Units. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8919–8926, June 2022. [52] Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, and Sergey Levine. Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic. 2017.
2306.16803#60
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
60
[42] Michael Tschannen, Basil Mustafa, and Neil Houlsby. Clippo: Image-and-language understanding from pixels only, 2022. [43] Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marcal Rusinol, Minesh Mathew, C.V. Jawahar, Ernest Valveny, and Dimosthenis Karatzas. Icdar 2019 competition on scene text visual question answering. 2019 International Conference on Document Analysis and Recognition (ICDAR), Sep 2019. doi: 10.1109/icdar.2019.00251. URL http://dx.doi.org/10.1109/ICDAR.2019.00251. [44] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2019. doi: 10.1109/cvpr.2019.00851. URL http://dx.doi.org/10.1109/CVPR.2019.00851.
2306.17107#60
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
61
[53] Philip S. Thomas and Emma Brunskill. Policy Gradient Methods for Reinforcement Learning with Function Approximation and Action-Dependent Baselines, June 2017. arXiv:1706.06643 [cs]. [54] Hao Liu*, Yihao Feng*, Yi Mao, Dengyong Zhou, Jian Peng, and Qiang Liu. Action-dependent Control Variates for Policy Optimization via Stein Identity. February 2022. [55] Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M. Bayen, Sham Kakade, Igor Mordatch, and Pieter Abbeel. Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines. February 2022. [56] George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard Turner, Zoubin Ghahramani, and Sergey Levine. The Mirage of Action-Dependent Baselines in Reinforcement Learning. In Proceedings of the 35th International Conference on Machine Learning, pages 5015–5024. PMLR, July 2018. ISSN: 2640-3498.
2306.16803#61
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
61
[45] Minesh Mathew, Dimosthenis Karatzas, and C. V. Jawahar. Docvqa: A dataset for vqa on document images, 2020. [46] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, and Adams Wei Yu. Doremi: Optimizing data mixtures speeds up language model pretraining, 2023. [47] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Fei-Fei Li. Visual genome: Connecting language and vision using crowdsourced dense image annotations, 2016. [48] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009. doi: 10.1109/CVPR.2009.5206848.
2306.17107#61
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
62
[57] Chris Nota, Philip Thomas, and Bruno C. Da Silva. Posterior Value Functions: Hindsight Baselines for Policy Gradient Methods. In Proceedings of the 38th International Conference on Machine Learning, pages 8238–8247. PMLR, July 2021. ISSN: 2640-3498. [58] Arthur Guez, Fabio Viola, Theophane Weber, Lars Buesing, Steven Kapturowski, Doina Precup, David Silver, and Nicolas Heess. Value-driven Hindsight Modelling. Advances in Neural Information Processing Systems, 33, 2020. [59] David Venuto, Elaine Lau, Doina Precup, and Ofir Nachum. Policy Gradients Incorporating the Future. January 2022. [60] Jiawei Huang and Nan Jiang. From Importance Sampling to Doubly Robust Policy Gradient. In Proceedings of the 37th International Conference on Machine Learning, pages 4434–4443. PMLR, November 2020. ISSN: 2640-3498. [61] Andrew Y. Ng, Daishi Harada, and Stuart J. Russell. Policy Invariance Under Reward Transformations: Theory and Application to Reward Shaping. In Proceedings of the Sixteenth International Conference on Machine Learning, ICML ’99, pages 278–287, San Francisco, CA, USA, June 1999. Morgan Kaufmann Publishers Inc.
2306.16803#62
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
62
[49] Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Ferrari, Sami Abu-El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan Popov, Andreas Veit, Serge Belongie, Victor Gomes, Abhinav Gupta, Chen Sun, Gal Chechik, David Cai, Zheyun Feng, Dhyanesh Narayanan, and Kevin Murphy. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2017. [50] Brian Kenji Iwana, Syed Tahseen Raza Rizvi, Sheraz Ahmed, Andreas Dengel, and Seiichi Uchida. Judging a book by its cover, 2016. # Appendix # A CLIP-based categorization Based on the observation of selected clusters, we divide the images used into 8 categories. For each category, we use one or multiple words as labels. • Quote & Meme: “quote”, “internet meme”. # Instructions
2306.17107#62
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
63
[62] Jürgen Schmidhuber. Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, September 2010. [63] Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying Count-Based Exploration and Intrinsic Motivation. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. [64] Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. December 2018. [65] Ofir Marom and Benjamin Rosman. Belief Reward Shaping in Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), April 2018. [66] Farzan Memarian, Wonjoon Goo, Rudolf Lioutikov, Scott Niekum, and Ufuk Topcu. Self-Supervised Online Reward Shaping in Sparse-Reward Environments, July 2021. arXiv:2103.04529 [cs].
2306.16803#63
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
63
• Quote & Meme: “quote”, “internet meme”. # Instructions Identify any text visible in the image provided. List all the text you can see in the given image. Enumerate the words or sentences visible in the picture. Describe any readable text present in the image. Report any discernible text you see in the image. Share any legible words or sentences visible in the picture. Provide a list of texts observed in the provided image. Note down any readable words or phrases shown in the photo. Report on any text that can be clearly read in the image. Mention any discernable and legible text present in the given picture. Table 7: Ten instructions asking for OCR results.
2306.17107#63
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
64
[67] Halit Bener Suay, Tim Brys, Matthew E. Taylor, and Sonia Chernova. Learning from Demonstration for Shaping through Inverse Reinforcement Learning. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, AAMAS ’16, pages 429–437, Richland, SC, May 2016. International Foundation for Autonomous Agents and Multiagent Systems. [68] Shaping Rewards for Reinforcement Learning with Imperfect Demonstrations using Generative Models, November 2020. arXiv:2011.01298 [cs]. [69] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient Estimation Using Stochastic Computation Graphs. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. [70] Michel Ma, Pierluca D’Oro, Yoshua Bengio, and Pierre-Luc Bacon. Long-Term Credit Assignment via Model-based Temporal Shortcuts. In Deep RL Workshop NeurIPS, October 2021.
2306.16803#64
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
64
Table 7: Ten instructions asking for OCR results. • Poster: “movie poster”, “podcast poster”, “TV show poster”, “event poster”, “poster”. • Book Cover: “book cover”, “magazine cover”. • Game Cover: “game cover”. • Ad & Product Packaging: “ad”, “advertisement”, “food packaging”, “product packaging”. • Infographic: “chart”, “bar chart”, “pie chart”, “scatter plot”. • Educational Material: “ad”, “advertisement”, “food packaging”, “product packaging”. • Logo: “logo”. For each word, we use the following templates to achieve embedding-space ensembling [7]:
2306.17107#64
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
65
[71] Nan Rosemary Ke, Anirudh Goyal, Olexa Bilaniuk, Jonathan Binas, Michael C Mozer, Chris Pal, and Yoshua Bengio. Sparse Attentive Backtracking: Temporal Credit Assignment Through Reminding. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7640–7651. Curran Associates, Inc., 2018. [72] Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal Value Function Approximators. In Proceedings of the 32nd International Conference on Machine Learning, pages 1312–1320. PMLR, June 2015. ISSN: 1938-7228. [73] Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. Hindsight Experience Replay. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
2306.16803#65
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
65
For each word, we use the following templates to achieve embedding-space ensembling [7]: • “a photo of a {}.” • “a blurry photo of a {}.” • “a black and white photo of a {}.” • “a low contrast photo of a {}.” • “a high contrast photo of a {}.” • “a bad photo of a {}.” • “a good photo of a {}.” • “a photo of a small {}.” • “a photo of a big {}.” For each image, we calculate the similarity between the image and all words mentioned above using CLIP-ViT-L/14. If the highest similarity is less than 0.15, we classify the image into Other; otherwise, we classify it into the “super class” (e.g., Poster) of the word (e.g., “movie poster”) with the highest similarity. # B System Message adapted from [3].
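The categorization rule in the chunk above (prompt-template ensembling with CLIP-ViT-L/14, a 0.15 similarity threshold, and an Other fallback) can be illustrated with a short sketch. This is a hedged, minimal reconstruction, not the LLaVAR authors' released code: it assumes the openai/CLIP Python package, abbreviates the label-to-category mapping to a few entries, and uses illustrative function names.

```python
# Minimal sketch of the CLIP-based categorization described above (see assumptions in the lead-in).
import torch
import clip  # openai/CLIP package
from PIL import Image

TEMPLATES = [
    "a photo of a {}.", "a blurry photo of a {}.", "a black and white photo of a {}.",
    "a low contrast photo of a {}.", "a high contrast photo of a {}.",
    "a bad photo of a {}.", "a good photo of a {}.",
    "a photo of a small {}.", "a photo of a big {}.",
]
# Abbreviated label -> "super class" mapping for illustration (the paper uses the full label lists).
LABEL_TO_CATEGORY = {
    "quote": "Quote & Meme", "internet meme": "Quote & Meme",
    "movie poster": "Poster", "book cover": "Book Cover", "logo": "Logo",
}

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

@torch.no_grad()
def label_embeddings() -> torch.Tensor:
    # Embedding-space ensembling: average the normalized text embeddings of all templates per label.
    per_label = []
    for word in LABEL_TO_CATEGORY:
        tokens = clip.tokenize([t.format(word) for t in TEMPLATES]).to(device)
        emb = model.encode_text(tokens).float()
        emb = emb / emb.norm(dim=-1, keepdim=True)
        per_label.append(emb.mean(dim=0))
    stacked = torch.stack(per_label)
    return stacked / stacked.norm(dim=-1, keepdim=True)

@torch.no_grad()
def categorize(image_path: str, text_embs: torch.Tensor, threshold: float = 0.15) -> str:
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    img_emb = model.encode_image(image).float()
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    sims = (img_emb @ text_embs.T).squeeze(0)   # cosine similarity to every label word
    best = int(sims.argmax())
    if float(sims[best]) < threshold:           # below 0.15 -> "Other"
        return "Other"
    return list(LABEL_TO_CATEGORY.values())[best]

# Usage (illustrative): categorize("example.jpg", label_embeddings())
```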
2306.17107#65
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
66
[74] Paulo Rauber, Avinash Ummadisingu, Filipe Mutz, and Jürgen Schmidhuber. Hindsight policy gradients. December 2018. [75] Anirudh Goyal, Philemon Brakel, William Fedus, Soumye Singhal, Timothy Lillicrap, Sergey Levine, Hugo Larochelle, and Yoshua Bengio. Recall Traces: Backtracking Models for Efficient Reinforcement Learning. In International Conference on Learning Representations. arXiv, January 2019. arXiv:1804.00379 [cs, stat]. [76] Juergen Schmidhuber. Reinforcement Learning Upside Down: Don’t Predict Rewards – Just Map Them to Actions, June 2020. arXiv:1912.02875 [cs]. [77] Rupesh Kumar Srivastava, Pranav Shyam, Filipe Mutz, Wojciech Jaśkowski, and Jürgen Schmidhuber. Training Agents using Upside-Down Reinforcement Learning, September 2021. arXiv:1912.02877 [cs].
2306.16803#66
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
66
# B System Message adapted from [3]. You are an AI visual assistant, and you are seeing a single image. What you see is provided with two OCR results and one image caption describing the information within the same image you are looking at. Image captions might include hallucinations, while OCR results are more accurate. Answer all questions with definite answers as you are seeing the image. Design a conversation between you and a person asking about this photo. The answers should be in a tone that a visual AI assistant is seeing the image and answering the question. Ask diverse questions and give corresponding answers.
2306.17107#66
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
67
[78] Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision Transformer: Reinforcement Learning via Sequence Modeling. In Advances in Neural Information Processing Systems, volume 34, pages 15084–15097. Curran Associates, Inc., 2021. [79] Michael Janner, Qiyang Li, and Sergey Levine. Offline Reinforcement Learning as One Big Sequence Modeling Problem. In Advances in Neural Information Processing Systems, volume 34, pages 1273–1286. Curran Associates, Inc., 2021. [80] Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, August 1988. [81] Hado van Hasselt, Sephora Madjiheurem, Matteo Hessel, David Silver, André Barreto, and Diana Borsa. Expected Eligibility Traces. In Association for the Advancement of Artificial Intelligence. arXiv, February 2021. arXiv:2007.01839 [cs, stat].
2306.16803#67
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
67
Include questions asking about the visual content of the image (e.g., the man, the sunset, the ocean.) and the texts contained in the image. Only include questions that have definite answers: (1) one can see the content in the image that the question asks about and can answer confidently; (2) one can determine confidently from the image that it is not in the image. Do not ask any questions that cannot be answered confidently; (3) DO NOT mention OCR or image caption in your questions and answers; (4) DO NOT ask about information from captions while it looks unrelated to or contradicts OCR results. Also include complex questions that are relevant to the content in the image, for example, asking about background knowledge of the texts in the image, asking to discuss about the design of the image, etc. Again, do not ask about uncertain details. Provide detailed answers when answering complex questions. For example, give detailed examples or reasoning steps to make the content more convincing and well-organized. You can include multiple paragraphs if necessary. # Few-shot Examples
2306.17107#67
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
68
[82] Igor Babuschkin, Kate Baumli, Alison Bell, Surya Bhupatiraju, Jake Bruce, Peter Buchlovsky, David Budden, Trevor Cai, Aidan Clark, Ivo Danihelka, Antoine Dedieu, Claudio Fantacci, Jonathan Godwin, Chris Jones, Ross Hemsley, Tom Hennigan, Matteo Hessel, Shaobo Hou, Steven Kapturowski, Thomas Keck, Iurii Kemaev, Michael King, Markus Kunesch, Lena Martens, Hamza Merzic, Vladimir Mikulik, Tamara Norman, George Papamakarios, John Quan, Roman Ring, Francisco Ruiz, Alvaro Sanchez, Rosalia Schneider, Eren Sezener, Stephen Spencer, Srivatsan Srinivasan, Wojciech Stokowiec, Luyu Wang, Guangyao Zhou, and Fabio Viola. The DeepMind JAX Ecosystem, 2020. [83] Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep Variational Information Bottleneck. 2017. [84] Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck method, April 2000. arXiv:physics/0004057.
2306.16803#68
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
68
# Few-shot Examples Human: WHAT F YOUR BLESSINGS COME THROUGH RAINDROPS PDf Induded LAURAS- TOR Y DEVOHONAL BASED ON THE GRAAAMY VARD WIN ISONG mAT HAS INSEIRED MILLION WHATIFYOUR BLESSINGS COMETHROUGHRAINDROPS PDF Included LAURASTORY A DEVOTIONAL BASEDON THE GRAMMY AWARD WINNI PISONGTHATHASINSPIREDMILLIONSE a girl is standing in a field with a rainbow GPT: Question: What is the name of the devotional mentioned in the image? Answer: The devotional is called “What If Your Blessings Come Through Raindrops” by Laura Story. Question: What is special about the song mentioned in the image? Answer: The song is special because it’s a Grammy Award-winning song that has inspired millions of people."
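A minimal sketch of how the above system message and few-shot examples might be assembled into a text-only GPT-4 data-generation call. The paper only states that GPT-4 is prompted with OCR results and captions; the client version (pre-1.0 openai package), model name string, message layout, and temperature below are all assumptions for illustration.

```python
import openai  # assumed pre-1.0 openai client with ChatCompletion

SYSTEM_MESSAGE = "You are an AI visual assistant, and you are seeing a single image. ..."  # truncated

# Each few-shot example pairs (OCR results + caption) with a generated Q&A conversation.
FEW_SHOT = [
    ("<noisy OCR result 1> <noisy OCR result 2> <image caption>",
     "Question: ... Answer: ..."),
]

def generate_conversation(ocr1: str, ocr2: str, caption: str) -> str:
    messages = [{"role": "system", "content": SYSTEM_MESSAGE}]
    for human_turn, gpt_turn in FEW_SHOT:
        messages.append({"role": "user", "content": human_turn})
        messages.append({"role": "assistant", "content": gpt_turn})
    # The new image is described to the text-only model via its two OCR results and caption.
    messages.append({"role": "user", "content": f"{ocr1} {ocr2} {caption}"})
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages, temperature=1.0)
    return response["choices"][0]["message"]["content"]
```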
2306.17107#68
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
69
[85] Matthew Hausknecht and Peter Stone. Deep Recurrent Q-Learning for Partially Observable MDPs. In Association for the Advancement of Artificial Intelligence. arXiv, 2015. arXiv:1507.06527 [cs]. [86] Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning Latent Dynamics for Planning from Pixels. In Proceedings of the 36th International Conference on Machine Learning, pages 2555–2565. PMLR, May 2019. ISSN: 2640-3498. [87] Karol Gregor, Danilo Jimenez Rezende, Frederic Besse, Yan Wu, Hamza Merzic, and Aaron van den Oord. Shaping Belief States with Generative Environment Models for RL. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019). arXiv, June 2019. arXiv:1906.09237 [cs, stat]. [88] Karol Gregor, George Papamakarios, Frederic Besse, Lars Buesing, and Theophane Weber. Temporal Difference Variational Auto-Encoder. 2019.
2306.16803#69
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
69
Human: One of the hardest things in life to accept is a called third strike Robert Frost te One of the hardest things in life to accept is a called third strike Robert Frost quotefancy a close up of a baseball glove GPT: Question: Why is the third strike hard to accept? Explain the quote to me. Answer: The quote “One of the hardest things in life to accept is a called third strike” attributed to Robert Frost is a metaphor for life situations. In baseball, a called third strike is when the umpire determines that a pitch was a strike, but the batter did not swing. This can be frustrating for the batter because they missed an opportunity to hit the ball due to either a lack of confidence or misjudgment. In life, this metaphor refers to difficult moments where we might miss opportunities or face setbacks because of our own inaction or hesitation. The quote suggests that it is hard to accept these missed opportunities or challenges, especially when they result from our own choices or lack of action. # C Details of evaluation dataset • ST-VQA [43] contains 31,791 questions that require understanding the scene text, based on images from COCO [9], Visual Genome [47], ImageNet [48], etc.
2306.17107#69
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
70
[89] Matthijs T J Spaan. Partially Observable Markov Decision Processes. Reinforcement Learning, page 27. [90] K. J. Astrom. Optimal Control of Markov decision processes with incomplete state estimation. J. Math. Anal. Applic., 10:174–205, 1965. [91] Edward C. Tolman. Cognitive maps in rats and men. Psychological Review, 55:189–208, 1948. Place: US Publisher: American Psychological Association. [92] Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, and Kevin Murphy. Fixing a Broken ELBO. In Proceedings of the 35th International Conference on Machine Learning. arXiv, February 2018. arXiv:1711.00464 [cs, stat]. [93] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. July 2022. [94] Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. arXiv, May 2014. arXiv:1312.6114 [cs, stat].
2306.16803#70
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
70
• TextVQA [44] contains 45,336 questions that need reading and reasoning about the text in images to answer, based on images from OpenImages [49]. • OCR-VQA [31] contains more than 1 million questions asking about information from book cover images [50]. • DocVQA [45] contains 50,000 questions based on document images.
2306.17107#70
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
71
[95] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. [96] Lukas Biewald. Experiment Tracking with Weights and Biases, 2020. [97] Plotly Technologies Inc. Collaborative data science, 2015. Place: Montreal, QC Publisher: Plotly Technologies Inc. # Supplementary Materials # Table of Contents A Related work
2306.16803#71
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.17107
71
Figure 8: All 14 clusters we selected as text-rich images. Each row corresponds to one cluster, where we show ten randomly sampled examples before de-duplication. Figure 9: Visualization of collected instructions. Figure 10: Visualization of collected instructions. Figure 11: An example for the Read dimension of GPT-4-based instruction-following evaluation.
2306.17107#71
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding
Instruction tuning unlocks the superior capability of Large Language Models (LLM) to interact with humans. Furthermore, recent instruction-following datasets include images as visual inputs, collecting responses for image-based instructions. However, visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first use publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Moreover, we prompt text-only GPT-4 with recognized texts and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multi-modal instruction-following data, our model, LLaVAR, substantially improves the LLaVA model's capability on text-based VQA datasets (up to 20% accuracy improvement) while achieving an accuracy of 91.42% on ScienceQA. The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction (e.g., reasoning, writing, and elaboration) skills with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available at https://llavar.github.io/.
http://arxiv.org/pdf/2306.17107
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
cs.CV, cs.CL
Preprint. Work in progress
null
cs.CV
20230629
20230629
[ { "id": "2306.02858" }, { "id": "2210.08402" }, { "id": "2305.03726" } ]
2306.16803
72
B Undiscounted infinite-horizon MDPs; C Theorems, proofs and additional information for Section 3; C.1 Contribution coefficients, hindsight distribution and graphical models; C.2 Proof Theorem 1; C.3 Different policy gradient estimators leveraging contribution coefficients; C.4 Proof of Proposition 2; C.5 Proof Theorem 3; C.6 Proof of Theorem 4; D Learning the contribution coefficients; D.1 Proof Proposition 5; D.2 Learning contribution coefficients via contrastive classification; D.3 Successor representations
2306.16803#72
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
73
Successor representations; E Experimental details and additional results; E.1 Algorithms; E.2 Dynamic programming setup; E.3 Bias, variance and SNR metrics; E.4 Linear key-to-door environment setup; E.5 Reward switching setup; E.6 Tree environment setup; E.7 Task interleaving environment setup; E.8 Training details
2306.16803#73
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
74
F Additional results; F.1 HCA vs HCA+; F.2 Learned credit assignment features allow for quick adaptation to a change of the reward function; F.3 Using returns instead of rewards as hindsight information; F.4 Investigation into the required accuracy of the hindsight models; G Contribution analysis in continuous spaces and POMDPs; G.1 Spurious contributions in continuous state spaces; G.2 Deterministic continuous reward functions can lead to excessive variance; G.3 Smoothing can alleviate excess variance by trading variance for bias; G.4 Continuous action spaces; G.5 Partially Observable MDPs
2306.16803#74
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
75
G.4 Continuous action spaces; G.5 Partially Observable MDPs; H Learning contribution coefficients from non-rewarding observations; H.1 Latent learning; H.2 Optimal rewarding outcome encodings for credit assignment; H.3 Counterfactual reasoning on rewarding states; H.4 Learning credit assignment representations with an information bottleneck
2306.16803#75
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
78
I.2 Extending COCOA to counterfactual interventions; J Contribution analysis with temporal discounting; K Bootstrapping with COCOA; L HCA-return is a biased estimator in many relevant environments; L.1 HCA-return; L.2 HCA-return can be biased; M Additional details; M.1 Author contributions; M.2 Compute resources; M.3 Software and libraries # A Related work
2306.16803#78
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
79
# A Related work Our work builds upon Hindsight Credit Assignment (HCA) [1], which has sparked a number of follow-up studies. We generalize the theory of HCA towards estimating contributions upon rewarding outcomes instead of rewarding states and show through a detailed variance analysis that HCA suffers from spurious contributions leading to high variance, while using rewards or rewarding objects as rewarding outcome encodings leads to low-variance gradient estimators. Follow-up work on HCA discusses its potential high variance in constructed toy settings, and reduces the variance of the HCA advantage estimates by combining them with Monte Carlo estimates [45] or using temporal difference errors instead of rewards [46]. Alipov et al. [29] leverage the latter approach to scale up HCA towards more complex environments. However, all of the above approaches still suffer from spurious contributions, a significant source of variance in the HCA gradient estimator. In addition, recent studies have theoretically reinterpreted the original HCA formulation from different angles: Ma and Pierre-Luc [47] create the link to Conditional Monte Carlo methods [48, 49] and Arumugam et al. [50] provide an information-theoretic perspective on credit assignment. Moreover, Young [51] applies HCA to the problem of estimating gradients in neural networks with stochastic, discrete units.
2306.16803#79
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
80
Long-term credit assignment in RL is hard due to the high variance of the sum of future rewards for long trajectories. A common technique to reduce the resulting variance in the policy gradients is to subtract a baseline from the sum of future rewards [6–9]. To further reduce the variance, a line of work introduced state-action-dependent baselines [52–55]. However, Tucker et al. [56] argue that these methods offer only a small benefit over the conventional state-dependent baselines, as the current action often only accounts for a minor fraction of the total variance. More recent work proposes improved baselines by incorporating hindsight information about the future trajectory into the baseline, accounting for a larger portion of the variance [1, 3, 34, 57–60]. Cheng et al. [34] include Q-value estimates of future state-action pairs into the baseline, and Huang and Jiang [60] go one step further by also including cheap estimates of the future Q-value gradients. Mesnard et al. [3] learn a summary metric of the uncontrollable, external environment influences in the future trajectory, and provide this hindsight information as an extra input to the value
2306.16803#80
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
81
Mesnard et al. [3] learn a summary metric of the uncontrollable, external environment influences in the future trajectory, and provide this hindsight information as an extra input to the value baseline. Nota et al. [57] consider partially observable MDPs, and leverage the future trajectory to infer the current underlying Markov state more accurately, thereby providing a better value baseline. Harutyunyan et al. [1] propose return-HCA, a different variant of HCA that uses a return-conditioned hindsight distribution to construct a baseline, instead of using state-based hindsight distributions for estimating contributions. Finally, Guez et al. [58] and Venuto et al. [59] learn a summary representation of the full future trajectory and provide it as input to the value function, while imposing an information bottleneck to prevent the value function from relying too heavily on this hindsight information.
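The sketch below illustrates the general shape of a hindsight-conditioned value baseline: a value network that receives, in addition to the current state, a learned summary of the future trajectory. This is a hypothetical PyTorch module; the class name `HindsightBaseline`, the architecture, and the dimensions are assumptions for illustration, not the design of any of the cited methods.

```python
import torch
import torch.nn as nn

class HindsightBaseline(nn.Module):
    """Toy value baseline taking a hindsight summary of the future trajectory
    as an extra input, in the spirit of the hindsight-baseline literature."""

    def __init__(self, state_dim, hindsight_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + hindsight_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, hindsight_summary):
        # hindsight_summary could be, e.g., a learned encoding of future
        # uncontrollable events; in practice it must be regularized (e.g. via
        # an information bottleneck) so the baseline remains a valid control
        # variate and does not leak the agent's own future actions.
        return self.net(torch.cat([state, hindsight_summary], dim=-1)).squeeze(-1)
```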
2306.16803#81
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
82
Environments with sparse rewards and long delays between actions and their corresponding rewards place a high premium on long-term credit assignment. A popular strategy to circumvent the sparse and delayed reward setting is to introduce reward shaping [61–68]. These approaches add auxiliary rewards to the sparse reward function, aiming to guide policy learning with dense rewards. A recent line of work introduces reward-shaping strategies specifically designed for long-term credit assignment, where rewards are decomposed and redistributed over previous state-action pairs that were instrumental in achieving those rewards [4, 5, 32, 35–41]. To determine how to redistribute rewards, these approaches rely on heuristic contribution analyses, such as accesses of memory states [4], linear decompositions of rewards [32, 35–39], or learned sequence models [5, 40, 41]. Leveraging our unbiased contribution analysis framework to achieve better reward transport is a promising direction for future research.
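A toy sketch of the reward-redistribution idea: a delayed terminal reward is spread over earlier steps in proportion to per-step contribution scores, which the cited methods obtain, for example, from linear decompositions or learned sequence models. The scoring interface and the normalization below are purely illustrative assumptions.

```python
import numpy as np

def redistribute_reward(terminal_reward, contribution_scores):
    """Spread a delayed terminal reward over earlier time steps in proportion
    to heuristic per-step contribution scores (illustrative sketch only)."""
    scores = np.asarray(contribution_scores, dtype=float)
    weights = scores / (scores.sum() + 1e-8)
    # Dense per-step rewards that sum (approximately) to the original reward,
    # so the optimal policy of the shaped problem stays close to the original.
    return terminal_reward * weights
```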
2306.16803#82
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
83
When we have access to a (learned) differentiable world model of the environment, we can achieve precise credit assignment by leveraging path-wise derivatives, i.e., backpropagating value gradients through the world model [20–24, 69]. For stochastic world models, we need access to the noise variables to compute the path-wise derivatives. The Dreamer algorithms [21–23] approach this by computing the value gradients on simulated trajectories, where the noise is known. The Stochastic Value Gradient (SVG) method [20] instead infers the noise variables on real observed trajectories. To enable backpropagating gradients over long time spans, Ma et al. [70] equip the learned recurrent world models of SVG with an attention mechanism, allowing them to leverage Sparse Attentive Backtracking [71] to transmit gradients through skip connections. Buesing et al. [43] leverage the insights from SVG in partially observable MDPs, using the inferred noise variables to estimate the effect of counterfactual policies on the expected return. Importantly, the path-wise derivatives used by the above model-based credit assignment methods are not compatible with discrete action spaces, as sensitivities w.r.t. discrete actions are undefined. In contrast, COCOA can leverage model-based information for credit assignment while remaining compatible with discrete actions.
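The following sketch shows the path-wise gradient computation in its simplest form: roll a policy through a differentiable world model and backpropagate the accumulated reward into the policy parameters. It assumes `policy` is a differentiable torch module and that `dynamics(s, a)` and `reward_fn(s, a)` are deterministic, differentiable functions over continuous actions; these names and the simplified rollout are illustrative assumptions, not the SVG or Dreamer algorithms themselves.

```python
import torch

def pathwise_policy_grad(policy, dynamics, reward_fn, s0, horizon=5):
    """Minimal path-wise (SVG/Dreamer-style) policy-gradient sketch:
    roll the policy through a differentiable world model and backprop the
    summed reward into the policy parameters."""
    s = s0
    total = 0.0
    for _ in range(horizon):
        a = policy(s)                  # differentiable in the policy parameters
        total = total + reward_fn(s, a)
        s = dynamics(s, a)             # gradients flow through the world model
    # Returns d(total reward)/d(policy parameters); undefined for discrete actions,
    # which is exactly the limitation noted in the text above.
    return torch.autograd.grad(total, list(policy.parameters()))
```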
2306.16803#83
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
84
Incorporating hindsight information has a wide variety of applications in RL. Goal-conditioned policies [72] use a goal state as an additional input to the policy network or value function, thereby generalizing it to arbitrary goals. Hindsight Experience Replay [73] leverages hindsight reasoning to learn almost as much from undesired outcomes as from desired ones: in hindsight, we can consider every final, possibly undesired, state as a ‘goal’ state and update the value functions or policy network [74] accordingly. Goyal et al. [75] train an inverse environment model to simulate alternative past trajectories leading to the same rewarding state, hence leveraging hindsight reasoning to create a variety of highly rewarding trajectories. A recent line of work frames RL as a supervised sequence prediction problem, learning a policy conditioned on goal states or future returns [76–79]. These models are trained on past trajectories or offline data, where we have access in hindsight to the achieved states and returns, which are treated as targets for the learned policy.
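As a concrete example of hindsight relabeling, the sketch below implements a HER-style ‘future’ strategy: states actually reached later in an episode are reused as goals, and rewards are recomputed accordingly. The tuple layout, the `reward_fn` interface, and the parameter `k` are assumptions for illustration rather than the exact procedure of the cited work.

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Hindsight relabeling sketch: augment each transition with goals taken
    from states achieved later in the same episode.

    episode:   list of (state, action, next_state, goal) tuples
    reward_fn: callable (next_state, goal) -> float, provided by the task
    """
    relabeled = []
    for t, (s, a, s_next, goal) in enumerate(episode):
        # Keep the original transition with its original goal and reward.
        relabeled.append((s, a, s_next, goal, reward_fn(s_next, goal)))
        # 'future' strategy: sample achieved states from later in the episode
        # and treat them as if they had been the goal all along.
        future = episode[t:]
        for _ in range(min(k, len(future))):
            new_goal = random.choice(future)[2]   # an achieved next_state
            relabeled.append((s, a, s_next, new_goal, reward_fn(s_next, new_goal)))
    return relabeled
```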
2306.16803#84
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]