doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.02174 | 54 | Zengzhi Wang, Qiming Xie, Zixiang Ding, Yi Feng, and Rui Xia. Is chatgpt a good sentiment analyzer? a preliminary study. arXiv preprint arXiv:2304.04339, 2023b.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
Jerry Wei, Da Huang, Yifeng Lu, Denny Zhou, and Quoc V Le. Simple synthetic data reduces sycophancy in large language models. arXiv preprint arXiv:2308.03958, 2023.
Benjamin Weiser. Here's what happens when your lawyer uses chatgpt. https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html, 2023. | 2310.02174#54 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 54 | Iddo Drori and Nakul Verma. Solving linear algebra by program synthesis. arXiv preprint arXiv:2111.08171, 2021. 21
Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, et al. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceedings of the National Academy of Sciences, 119(32):e2123433119, 2022. 21
Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. Mathematical capabilities of chatgpt. In 37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks, 2023. 10, 20
Lingyue Fu, Huacan Chai, Shuang Luo, Kounianhua Du, Weiming Zhang, Longteng Fan, Jiayi Lei, Renting Rui, Jianghao Lin, Yuchen Fang, et al. CodeApex: A bilingual programming evaluation benchmark for large language models. arXiv preprint arXiv:2309.01940, 2023. 20 | 2310.02255#54 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 54 | Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. Multitask prompted training enables zero-shot task generalization. In ICLR. OpenReview.net, 2022.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. | 2310.02263#54 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 54 | Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and outcome-based feedback. Neural Information Processing Systems (NeurIPS 2022) Workshop on MATH-AI, 2022.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain of Thought Prompting Elicits Reasoning in Large Language Models, 2022. URL https://arxiv.org/abs/2201.11903.
Edwin B Wilson. Probable inference, the law of succession, and statistical inference. Journal of the American Statistical Association, 22(158):209–212, 1927.
Roman V Yampolskiy. From seed ai to technological singularity via recursively self-improving software. arXiv preprint arXiv:1502.06512, 2015. | 2310.02304#54 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 54 | Cognitive damping is a process of internal debate, where the agent weighs the pros and cons of different actions
and decides on the best course of action. This function is inspired by the human cognitive process of deliberation, which involves considering different options and their potential outcomes before making a decision. Cognitive damping enables the agent to make thoughtful and informed decisions, taking into account the potential consequences of its actions [22, 33, 120].
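To make this concrete, the following minimal sketch (our illustration; the class, fields, and weighting scheme are assumptions, not part of the ACE specification) shows cognitive damping as a deliberation step that scores candidate actions by their pros and cons before committing to one.

```python
# Hypothetical sketch of cognitive damping: an internal debate that weighs the
# pros and cons of candidate actions before committing to one.
from dataclasses import dataclass


@dataclass
class CandidateAction:
    name: str
    pros: list[str]       # anticipated benefits
    cons: list[str]       # anticipated costs or risks
    risk_weight: float    # how heavily cons are penalized (assumed parameter)


def cognitive_damping(candidates: list[CandidateAction]) -> CandidateAction:
    """Weigh pros against cons and return the best course of action."""
    def score(action: CandidateAction) -> float:
        return len(action.pros) - action.risk_weight * len(action.cons)
    return max(candidates, key=score)


choice = cognitive_damping([
    CandidateAction("clean living room now", ["visible progress"], ["battery is low"], 2.0),
    CandidateAction("recharge first", ["avoids mid-task failure"], ["delays cleaning"], 1.0),
])
print(choice.name)  # -> recharge first
```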
3.6.3 Inputs and Outputs. The Cognitive Control layer accepts a project roadmap or set of tasks from the Executive Function layer above, as well as real-time telemetry from the environment and itself, and uses this information to pick which task is next. The layer above, Executive Function, is responsible for designing and shaping tasks, while the Cognitive Control layer is responsible for task switching and task selection.
Once the Cognitive Control layer has made a decision on tasks, this task is passed down to the Task Prosecution
layer, which is responsible for carrying out one specific task at a time, such as moving the agent via locomotion, or otherwise modifying the environment through some kind of output. | 2310.06775#54 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 55 | Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. Can llms express their uncertainty? an empirical evaluation of confidence elicitation in llms. arXiv preprint arXiv:2306.13063, 2023.
Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuanjing Huang. Do large language models know what they don't know? In Findings of the Association for Computational Linguistics: ACL 2023, pp. 8653–8665, Toronto, Canada, July 2023. Association for Computational Linguistics.
Adam Zaremba and Ender Demir. Chatgpt: Unlocking the future of nlp in finance. Available at SSRN 4323643, 2023.
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pp. 12697–12706. PMLR, 2021.
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 55 | Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. LLaMA-Adapter V2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023. 6, 20
Google. Bard, 2023. URL https://bard.google.com/. 2, 6, 20
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6904–6913, 2017. 20, 27
Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. VizWiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3608–3617, 2018. 10, 20, 27
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 55 | Joar Skalse, Nikolaus H. R. Howe, Dmitrii Krasheninnikov, and David Krueger. Defining and characterizing reward hacking. In NeurIPS, 2022.
Joar Max Viktor Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, and Adam Gleave. Invariance in policy optimisation and partial identifiability in reward learning. In ICML, volume 202 of Proceedings of Machine Learning Research, pp. 32033–32058. PMLR, 2023.
Petru Soviany, Radu Tudor Ionescu, Paolo Rota, and Nicu Sebe. Curriculum learning: A survey. Int. J. Comput. Vis., 130(6):1526–1565, 2022.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. Learning to summarize with human feedback. In NeurIPS, 2020. | 2310.02263#55 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 55 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. Large language models as optimizers. arXiv preprint arXiv:2309.03409, 2023.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. International Conference on Learning Representations (ICLR 2023), 2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35:15476–15488, 2022.
Eric Zelikman, Qian Huang, Gabriel Poesia, Noah D. Goodman, and Nick Haber. Parsel: Algorithmic reasoning with language models by composing decompositions, 2023. | 2310.02304#55 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 55 | layer, which is responsible for carrying out one specific task at a time, such as moving the agent via locomotion, or otherwise modifying the environment through some kind of output.
3.6.4 Interaction with Other Layers. The Cognitive Control Layer interacts with the other layers in a hierarchical manner. It receives task directives from the Executive Function Layer and sends feedback about the success or failure of tasks back to the Executive Function Layer. This feedback loop enables the Cognitive Control Layer to adapt its actions based on the success or failure of previous tasks, ensuring that the agent's actions are continuously optimized to achieve its goals.
For instance, consider a situation where an autonomous agent is tasked with cleaning a house. The Cognitive Control
Layer might select the task of cleaning the living room and pass this task to the Task Prosecution Layer. The Task Prosecution Layer would then execute this task, using its execution functions to move the robot, pick up objects, and clean surfaces. If the task is completed successfully, the Task Prosecution Layer would send a success signal to the Cognitive Control Layer. If the task fails, the Task Prosecution Layer would send a failure signal to the Cognitive Control Layer, which could then decide whether to try the task again or switch to a different task.
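A minimal sketch of this feedback loop follows; the class and method names are our assumptions for illustration and are not defined by the ACE framework. The Cognitive Control layer selects the next task, the Task Prosecution layer executes it, and the success or failure signal feeds back into the next selection.

```python
# Illustrative sketch (assumed names) of the Cognitive Control / Task Prosecution loop.
from typing import Callable, Dict, List


class TaskProsecutionLayer:
    def execute(self, task: str, action: Callable[[], bool]) -> bool:
        """Carry out one specific task and report success (True) or failure (False)."""
        try:
            return action()
        except Exception:
            return False


class CognitiveControlLayer:
    def __init__(self, roadmap: List[str]) -> None:
        self.roadmap = roadmap                  # tasks shaped by the Executive Function layer
        self.prosecutor = TaskProsecutionLayer()

    def run(self, actions: Dict[str, Callable[[], bool]], max_retries: int = 1) -> None:
        for task in self.roadmap:               # task selection
            for _ in range(max_retries + 1):
                if self.prosecutor.execute(task, actions[task]):
                    break                       # success signal: move on to the next task
                # failure signal: retry, or fall through and switch to the next task


controller = CognitiveControlLayer(["clean living room", "clean kitchen"])
controller.run({
    "clean living room": lambda: True,          # stand-ins for real execution functions
    "clean kitchen": lambda: True,
})
```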
# 3.7 Layer 6: Task Prosecution | 2310.06775#55 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 56 | Chujie Zheng, Hao Zhou, Fandong Meng, Jie Zhou, and Minlie Huang. On large language models' selection bias in multi-choice questions. arXiv preprint arXiv:2309.03882, 2023.
Andy Zou, Zifan Wang, J Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023.
# A APPENDIX
A.1 FORMAL DEFINITIONS OF METRICS
For a problem q, we denote its standard solution by s(q), and the solution of method M by M(q).
Accuracy before/after. Accbefore(M; Q) and Accafter(M; Q) are the average accuracy of method M over all the test problems Q before and after applying the FOLLOW-UP QUESTIONING MECHANISM, respectively.

Accbefore/after(M; Q) = (1 / |Q|) Σ_{q ∈ Q} 1[M(q) = s(q)]

Modification. Modification is the difference in model performance before and after using the FOLLOW-UP QUESTIONING MECHANISM.

Modification = Accbefore(M; Q) − Accafter(M; Q)
Modification Rate. Modification Rate is the ratio of Modifications occurring. | 2310.02174#56 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 56 | Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pp. 9118–9147. PMLR, 2022. 20
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322, 2023. 20
JaidedAI. EasyOCR: Ready-to-use OCR, 2020. URL https://github.com/JaidedAI/ EasyOCR. 6
Anya Ji, Noriyuki Kojima, Noah Rush, Alane Suhr, Wai Keen Vong, Robert D Hawkins, and Yoav Artzi. Abstract visual reasoning with tangram shapes. arXiv preprint arXiv:2211.16492, 2022. 20 | 2310.02255#56 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
Jeremy Tien, Jerry Zhi-Yang He, Zackory Erickson, Anca D. Dragan, and Daniel S. Brown. Causal confusion and reward misidentification in preference-based reward learning. In ICLR. OpenRe- view.net, 2023.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 56 | Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex reasoning in large language models. International Conference on Learning Representations (ICLR 2023), 2022a.
Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers. International Conference on Learning Representations (ICLR 2023), 2022b.
# APPENDIX | 2310.02304#56 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 56 | # 3.7 Layer 6: Task Prosecution
The Task Prosecution Layer is the sixth and final layer in the Autonomous Cognitive Entity (ACE) model, acting as
the executor of the autonomous agent. This layer is responsible for carrying out the tasks selected by the Cognitive Control Layer, whether they involve digital communication, physical actions, or a combination of both. It is a critical component of the ACE framework, enabling the agent to interact with its environment and achieve its goals.
[Figure: Task Prosecution Layer. It receives tasks from the Cognitive Control Layer, executes them via execution functions with task monitoring, reports success/failure upward, and exchanges input and output with the real world.]
3.7.1 Execution Functions. The Task Prosecution Layer operates based on a set of execution functions, which enable it to perform a wide range of tasks. These functions include digital communication functions, such as sending API calls or writing and testing code, and physical action functions, such as moving a robot, grasping a door handle, or steering a car. These functions are designed to be adaptable and flexible, enabling the agent to perform a wide range of tasks in a variety of environments.
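As a rough illustration only (the registry pattern and function names below are our assumptions, not part of the framework), digital communication functions and physical action functions can be exposed to this layer behind one common interface:

```python
# Hypothetical registry of execution functions for the Task Prosecution Layer.
from typing import Callable, Dict

ExecutionFn = Callable[..., bool]              # each function reports success or failure

EXECUTION_FUNCTIONS: Dict[str, ExecutionFn] = {}


def register(name: str) -> Callable[[ExecutionFn], ExecutionFn]:
    def wrap(fn: ExecutionFn) -> ExecutionFn:
        EXECUTION_FUNCTIONS[name] = fn
        return fn
    return wrap


@register("send_api_call")                     # digital communication function (illustrative)
def send_api_call(endpoint: str, payload: dict) -> bool:
    print(f"POST {endpoint} {payload}")
    return True


@register("move_robot")                        # physical action function (illustrative)
def move_robot(x: float, y: float) -> bool:
    print(f"moving to ({x}, {y})")
    return True


# The layer dispatches whichever function the currently selected task requires.
ok = EXECUTION_FUNCTIONS["move_robot"](1.0, 2.5)
```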
Digital communication functions are crucial for agents that | 2310.06775#56 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 57 | Modification = Accbefore(M; Q) − Accafter(M; Q)
Modification Rate. Modification Rate is the ratio of Modifications occurring.
Modification Rate = Modification / Accbefore(M; Q)
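A small computational sketch of these metrics follows; the indicator-based accuracy and the function names are our reading of the definitions above, not code released with the paper.

```python
# Sketch (assumed reading) of the judgement-consistency metrics defined above.
def accuracy(answers: list, gold: list) -> float:
    """Average accuracy of a method M over problems Q: the mean of 1[M(q) = s(q)]."""
    return sum(a == g for a, g in zip(answers, gold)) / len(gold)


def modification(acc_before: float, acc_after: float) -> float:
    """Drop in accuracy after the Follow-up Questioning Mechanism."""
    return acc_before - acc_after


def modification_rate(acc_before: float, acc_after: float) -> float:
    """Fraction of the initial accuracy that is lost."""
    return modification(acc_before, acc_after) / acc_before


# Example: accuracy drops from 0.80 to 0.48 after the follow-up disturbance.
print(modification(0.80, 0.48))        # ~0.32
print(modification_rate(0.80, 0.48))   # ~0.40, i.e. 40% of initially correct judgements flip
```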
IMPLEMENTATION DETAILS
Table 7: The prompts we used during the experiment. C represents closure-ended questions, O represents open-ended questions, L represents leading-ended questions, M A represents misleading answers.
# Dataset
# Output Format Control Prompt
# Dataset | 2310.02174#57 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 57 | Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023. 9
Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. DVQA: Understanding data visualizations via question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5648–5656, 2018. 20, 27
Daniel Kahneman. Thinking, fast and slow. macmillan, 2011. 1
Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Ákos Kádár, Adam Trischler, and Yoshua Bengio. FigureQA: An annotated figure dataset for visual reasoning. arXiv preprint arXiv:1710.07300, 2017. 10, 20, 27
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 57 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajirzi. Self-instruct: Aligning language models with self-generated instructions. In ACL, pp. 13484–13508. Association for Computational Linguistics, 2023.
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In ICLR. OpenReview.net, 2022. | 2310.02263#57 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 57 | A.1 Bounded resources; A.2 Generalization bounds; A.3 Analysis of equivalent maximization formulation; B.1 Genetic Algorithms; B.2 Beam Search; B.3 Improving Particular Functions; B.4 Efficient Exploration; B.5 Local Search; B.6 Simulated Annealing; B.7 Multi-armed prompt bandit; B.8 Hints | 2310.02304#57 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 57 | Digital communication functions are crucial for agents that
interact with digital environments. For instance, an agent might need to send API calls to gather data, write and test code to develop software, or send emails to communicate with users. These functions are typically performed using programming languages and software libraries that the agent has been trained to use.
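To make the digital-action idea concrete, a minimal sketch follows; the function name, return type, and URL are illustrative assumptions rather than part of the ACE specification.

```python
import json
import urllib.request

def call_api(url: str, timeout_s: float = 10.0) -> dict:
    """Hypothetical digital-action primitive: fetch JSON data from an external API."""
    with urllib.request.urlopen(url, timeout=timeout_s) as response:
        return json.load(response)

# Illustrative use only (the URL below is a placeholder, not a real endpoint):
# weather = call_api("https://api.example.com/weather?city=Berlin")
```

Data gathered through such a call would then be reported upward as part of the task's outcome.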
Fig. 10. Task Prosecution Layer directly interacts with the environment
Physical action functions are crucial for agents that interact
with physical environments. For instance, a robot might need to move to navigate its environment, grasp objects to interact with them, or steer a car to transport goods or people. These functions are typically performed using hardware interfaces
that the agent has been designed to control.
3.7.2 Monitoring Success or Failure. One of the key responsibilities of the Task Prosecution Layer is to monitor the success or failure of the tasks it performs. It does this by comparing the outcomes of its actions with the expected outcomes defined by the Executive Function Layer. If a task is successful, the Task Prosecution Layer sends a success signal to the Cognitive Control Layer, which can then select the next task. If a task fails, the Task Prosecution Layer sends a failure signal to the Cognitive Control Layer, which can then decide whether to try the task again, switch to a different task, or revise the overall plan.
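A minimal sketch of such a monitoring step is shown below, assuming a callable that acts on the environment and an expected outcome supplied from above; the class and function names are illustrative and do not come from the ACE paper.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TaskResult:
    succeeded: bool
    details: str

def prosecute(task: Any,
              execute: Callable[[Any], Any],
              expected_outcome: Any) -> TaskResult:
    """Execute one task and compare the observed outcome with the expected
    outcome defined by the Executive Function Layer (hypothetical interface)."""
    observed = execute(task)  # act on the environment
    if observed == expected_outcome:
        return TaskResult(True, "success signal: outcome matched expectation")
    return TaskResult(False, f"failure signal: expected {expected_outcome!r}, observed {observed!r}")
```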
This monitoring process is crucial for the agent's ability to learn and adapt. By keeping track of the success or failure | 2310.06775#57 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 58 | # Dataset
# Output Format Control Prompt
# Dataset
GSM8K / SVAMP / MultiArith: Give the number separately on the last line of your response, such as: "Answer: ...". Please reply strictly in this format. CSQA: Give the option separately on the last line of your response, such as: "Answer: (A)". Please reply strictly in this format. StrategyQA: The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format. Last Letters: Give the answer separately on the last line of your response, such as: "Answer: ab". Please reply strictly in this format. CoinFlip: The answer is yes or no. Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format. MMLU: Give the option separately on the last line of your response, such as: "Answer: (A)". Please reply strictly in this format. | 2310.02174#58 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 58 | Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14, pp. 235–251. Springer, 2016. 20, 27
Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. Are you smarter than a sixth grader? Textbook question answering for multimodal machine comprehension. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4999–5007, 2017. 20, 27
Jason J Lau, Soumya Gayen, Asma Ben Abacha, and Dina Demner-Fushman. A dataset of clinically generated visual questions and answers about radiology images. Scientific Data, 5(1):1–10, 2018. 20, 27 | 2310.02255#58 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 58 | Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023a.
Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. Baize: An open-source chat model with parameter-efficient tuning on self-chat data. arXiv preprint arXiv:2304.01196, 2023b.
Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, and Yuandong Tian. Rlcd: Reinforcement learning from contrast distillation for language model alignment. arXiv preprint arXiv:2307.12950, 2023.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302, 2023. | 2310.02263#58 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 58 | | 2310.02304#58 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 58 | This monitoring process is crucial for the agent's ability to learn and adapt. By keeping track of the success or failure
of its tasks, the Task Prosecution Layer provides valuable feedback that the agent can use to improve its performance. For instance, if a task fails repeatedly, the agent might need to revise its approach, learn new skills, or seek help from other agents or humans.
3.7.3 Interaction with Other Layers. The Task Prosecution Layer interacts with the other layers in a hierarchical manner. It receives task directives from the Cognitive Control Layer and sends feedback about the success or failure of tasks
back to the Cognitive Control Layer. This feedback loop enables the Task Prosecution Layer to adapt its actions based
on the success or failure of previous tasks, ensuring that the agent's actions are continuously optimized to achieve its goals.
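A hedged sketch of how the receiving layer might turn repeated failure signals into a retry, switch, or re-planning decision follows; the retry threshold and label names are assumptions for exposition only.

```python
def react_to_failure(failure_count: int, max_retries: int = 2) -> str:
    """Hypothetical Cognitive Control policy for consuming failure signals.
    Returns 'retry', 'switch_task', or 'revise_plan' (illustrative labels)."""
    if failure_count <= max_retries:
        return "retry"        # try the same task again
    if failure_count == max_retries + 1:
        return "switch_task"  # move on to a different task from the current plan
    return "revise_plan"      # escalate so higher layers can re-plan or seek help
```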
For instance, consider a situation where an autonomous agent is tasked with cleaning a house. The Cognitive Control | 2310.06775#58 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 59 | For the sake of automated evaluation, we have designed different output format control prompts for each question type in each dataset to standardize the model's output. Detailed prompts can be found in Table 7.
In § 4, for the Zero-shot-CoT method in the zero-shot prompting setting, conventional chain-of-thought prompting methods generally incorporate two steps: reasoning (i.e., generating intermediate reasoning steps) and answering. However, our preliminary experiments on MultiArith reveal that amalgamating these two steps yields significantly superior results compared to executing them step-wise. Consequently, in these experiments, we concatenate the mitigation method prompt and the output format control prompt to the end of the question as model inputs.
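As a rough illustration of this input construction (the helper name and the separator are assumptions; the format control prompt reuses the GSM8K entry from Table 7, while the mitigation prompt shown is only a generic placeholder):

```python
def build_model_input(question: str,
                      mitigation_prompt: str,
                      format_control_prompt: str) -> str:
    """Sketch only: append the mitigation-method prompt and the output format
    control prompt to the end of the question before sending it to the model."""
    return f"{question}\n{mitigation_prompt}\n{format_control_prompt}"

# Illustrative call with a made-up question:
example = build_model_input(
    "Janet has 3 apples and buys 2 more. How many apples does she have?",
    "Let's think step by step.",
    'Give the number separately on the last line of your response, such as: '
    '"Answer: ...". Please reply strictly in this format.',
)
```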
A.3 EXPERIMENT RESULTS
To investigate the impact of using different prompts for each category of questions in the FOLLOW-UP QUESTIONING MECHANISM on the model's judgement consistency, we enlist annotators B and C to write a prompt for each category of questions. Specific prompts can be found in Table 5. Experiments in this work default to using prompts written by annotator A.
A.3.1 FULL RESULTS ON CHATGPT | 2310.02174#59 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 59 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, and Victor Sanh. OBELICS: An open web-scale filtered dataset of interleaved image-text documents, 2023. 6, 39
Kenton Lee, Mandar Joshi, Iulia Raluca Turc, Hexiang Hu, Fangyu Liu, Julian Martin Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. Pix2Struct: Screenshot parsing as pretraining for visual language understanding. In International Conference on Machine Learning, pp. 18893–18912. PMLR, 2023. 10, 20
Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, and Jianfeng Gao. Multimodal foundation models: From specialists to general-purpose assistants. arXiv preprint arXiv:2309.10020, 2023a. 10 | 2310.02255#59 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 59 | Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J Liu. Slic-hf: Sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425, 2023a.
Yao Zhao, Misha Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, and Peter J. Liu. Calibrating sequence likelihood improves conditional language generation. In ICLR. OpenReview.net, 2023b.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685, 2023.
Simon Zhuang and Dylan Hadfield-Menell. Consequences of misaligned AI. In NeurIPS, 2020.
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
| 2310.02263#59 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 59 |
# A THEORETICAL ANALYSIS
Here we extend the definitions of Section 3 to account for bounded resources such as runtime and language model calls, to prove generalization bounds, and to present an equivalent definition in terms of maximization.
A.1 BOUNDED RESOURCES
We first consider bounded resources. Recall that Σ* denotes the set of finite strings over an alphabet (or token set) Σ ⊇ {0, 1}. Let |x| denote the length of string x.
Bounded language models. To capture most modern language models, we suppose that there are constants c, k ∈ N such that the language model L : Σ^c → Σ^c generates responses of length c, called the context length, to query strings of length c, in time k (shorter strings are handled by padding). Note that a bounded language model cannot output a program longer than c, and the same is true for our seed improver I0(u, s, L). Interestingly, however, other improvers can output meaningful programs longer than c by making more than one call to L. | 2310.02304#59 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 59 | For instance, consider a situation where an autonomous agent is tasked with cleaning a house. The Cognitive Control
Layer might select the task of cleaning the living room and pass this task to the Task Prosecution Layer. The Task Prosecution Layer would then execute this task, using its execution functions to move the robot, pick up objects, and clean surfaces. If the task is completed successfully, the Task Prosecution Layer would send a success signal to the Cognitive Control Layer. If the task fails, the Task Prosecution Layer would send a failure signal to the Cognitive Control Layer, which could then decide whether to try the task again or switch to a different task.
3.7.4 Inputs and Outputs. The Task Prosecution layer receives individual tasks from the Cognitive Control layer. These individual tasks must include several pieces of information, such as methodology, approach, definition of success, and definition of failure. The exact information required will vary based upon agent and task.
The output of the Task Prosecution layer is directed exclusively into the environment. In the case of an NPC, the output may
be to fire an arrow at an enemy, or to travel to a nearby tavern. For the case of a domestic robot, the output may be to ask the user a question and listen for a response, or to find a power outlet to recharge itself.
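One way to picture the task package described above is the following hypothetical structure; the field names mirror the description (methodology, approach, definitions of success and failure) but are otherwise assumptions, not an interface defined by the ACE framework.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TaskDirective:
    """Hypothetical container for one task handed to the Task Prosecution layer."""
    name: str
    methodology: str                       # how the task should be carried out
    approach: str                          # higher-level approach chosen by upper layers
    is_success: Callable[[Any], bool]      # definition of success
    is_failure: Callable[[Any], bool]      # definition of failure
```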
# 3.8 Methodical Validation
To comprehensively evaluate the ACE framework, we propose a validation methodology incorporating component | 2310.06775#59 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 60 | Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023b. 39
Yunxin Li, Longyue Wang, Baotian Hu, Xinyu Chen, Wanqi Zhong, Chenyang Lyu, and Min Zhang. A comprehensive evaluation of gpt-4v on knowledge-intensive visual question answering. arXiv preprint arXiv:2311.07536, 2023c. 39
Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, and Alan L Yuille. Super-CLEVR: A virtual benchmark to diagnose domain robustness in visual reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14963–14973, 2023d. 20, 27
Thomas Liao, Rohan Taori, Inioluwa Deborah Raji, and Ludwig Schmidt. Are we learning yet? A meta review of evaluation failures across machine learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. 20 | 2310.02255#60 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 60 | Bounded-runtime programs. Programs are represented by finite strings ∈ Σ* in a fixed (Turing-complete) programming language. For simplicity of analysis we assume programs operate serially in steps. Every string ρ can be considered as a program and we write ρ(·) ∈ Σ* to denote its output (always a string) on one or more inputs. We assume the inputs can either be strings (which can encode numbers, text, programs, or arbitrary types of objects) or black-box (possibly randomized) functions. We assume that programs can call the following special black-box functions:
• A clock function that, in unit time, returns the integer number of steps computed by the current program thus far and can therefore determine the duration of black-box function call.
• A random bit function that returns a uniformly random bit in {0, 1} on each invocation, also running in unit time. We assume a fixed runtime bound b_run on all programs being run to avoid long-running or infinite computations. We assume that there is a special string ⊥ ∈ Σ* where ρ(x) = ⊥ indicates a program failure, which may be a timeout, or ρ not encoding a valid program (i.e., a syntax error), or a runtime error on its input.
• A sandbox function that runs a given program or black-box function with a parameter indicating a timeout number of steps. | 2310.02304#60 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 60 | # 3.8 Methodical Validation
To comprehensively evaluate the ACE framework, we propose a validation methodology incorporating component
testing, integration testing, benchmarking against standard AI suites, adversarial techniques like red teaming, formal verification of key properties, and crucially, human-centered assessments and user studies evaluating factors such as transparency, trustworthiness, and ethical alignment. This multifaceted approach combining rigorous technical testing, formal analysis, and experiential human feedback aims to provide holistic evaluation methods to assess that ACE-based systems function effectively, securely, and in alignment with human values and societal morals. The proposed techniques will facilitate incremental refinement toward autonomous agents that are not just capable but also interpretable, corrigible, and worthy of human trust across both empirical and ethical dimensions.
3.8.1 Evaluation. To comprehensively evaluate the proposed Autonomous Cognitive Entity (ACE) framework, a multifaceted methodology is proposed across the key dimensions of system capabilities, security, and alignment. Regarding assessment of capabilities, rigorous component and integration testing will enable functionally validating the correctness of each architectural layer along with the coordination between layers. Usage of standardized AI benchmarks such as the Atari suite [76] and AI2 Thor [57] will facilitate quantitative benchmarking of the ACE agent's performance on diverse tasks. Metrics including reward accumulated, task accuracy, and rate of goal completion will be measured to quantify capabilities. | 2310.06775#60 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 61 | Task Dataset Prompt Closed-ended. Open-ended. Leading. Math CS Sym. GSM8K SVAMP MultiArith CSQA StrategyQA Last Letters CoinFlip A B C A B C A B C A B C A B C A B C A B C A before 78.47 75.59 76.72 77.67 77.67 75.00 95.00 96.11 96.11 73.14 74.37 74.37 66.67 68.41 66.96 25.33 28.00 27.33 49.20 47.80 46.20 62.09 M. 00.61 â 00.08 â 00.15 â 05.33 â 03.00 â 01.67 â 00.56 â 01.11 â 00.55 â 11.63 â 05.49 â 02.22 â 44.69 â 28.09 â 39.59 â 20.00 â 16.00 â 06.66 â 32.00 â 35.80 â 23.40 â 10.97 â 06.87 â 02.51 â M. Rate before 000.78 % 75.82 000.11 % 76.35 000.20 % 76.42 006.87 % 75.33 003.86 % 75.33 002.22 % | 2310.02174#61 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 61 | Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023. 1
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740–755. Springer, 2014. 20
Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, and Julian Martin Eisenschlos. MatCha: Enhancing visual language pretraining with math reasoning and chart derendering. arXiv preprint arXiv:2212.09662, 2022. 10, 20
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 61 | LLaMA SFT on GPT-4 The official name of a state comes from the name of the stateâs capital city, followed by an additional descriptive suffix. For example, Texas is named after the city of San Antonio, while California is named after the city of San Diego, and New York is named after the city of New York. In some cases, the state name is also derived from the name of a prominent natural feature or the name of a city that no longer exists. For more information about the history of US state names, visit the website of the US Census Bureau at www.census.gov. LLaMA Curricu- lum DPO (Cur- riculum 3) There are many stories and myths behind the naming of US states. Here are some popular ones: 1. Alabama: Named after the Native American tribe known as âAlabama.â 2. Alaska: Named after the Aleutian word âAlyeskaâ which means âgreat land.â 3. Arizona: Named after the Spanish word for âarizona,â which means âlittle spring.â 4. Arkansas: Named after the Sioux word âas-kah-ras-sha,â which means âland of downriver | 2310.02263#61 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 61 | • A sandbox function that runs a given program or black-box function with a parameter indicating a timeout number of steps.
Bounded utility functions. It will be convenient to bound the range of the utility function. We assume that the utility function u : Σ* → [0, 1] is bounded by 1 and that u(⊥) = 0. To be completely formal, we must explain how to represent utility functions that output real values. One can do this by adding an additional parameter that indicates the desired precision, i.e., the number of bits of the output. We omit this from our analysis for simplicity.
Bounded language model calls. The bounds on program runtime indirectly impose a bound on the number of language model calls ≤ b_run/k. However, we note that in STOP's implementation, additional bounds on the number of calls of a language model are explicitly made.
Iterated downstream task improvement. The STOP framework, as in Section 4, considers only one round of improvement. It would be conceptually straightforward to modify û to explicitly account for multiple iterations of downstream task improvement. However, note that an improver can already internally perform multiple iterations of downstream task improvement.
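For illustration only, a minimal sketch of the bounded, sandboxed evaluation described above: a candidate program is run in a separate process with a budget (here a wall-clock timeout standing in for the step budget), failures map to the bottom element with utility 0, and a wrapper enforces an explicit cap on language-model calls. The entry-point name `solution` and the helper names are assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only (not the paper's code): sandboxed run with a budget,
# a [0, 1]-bounded utility with u(⊥) = 0, and an explicit cap on LM calls.
import multiprocessing


def _worker(program_src, arg, queue):
    # Execute the candidate program in a separate process so it can be killed.
    scope = {}
    exec(program_src, scope)          # assumed entry point: a function named `solution`
    queue.put(scope["solution"](arg))


def run_in_sandbox(program_src, arg, timeout_s=1.0):
    """Return the program's output, or None (playing the role of ⊥) on timeout or error."""
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_worker, args=(program_src, arg, queue))
    proc.start()
    proc.join(timeout_s)              # wall-clock timeout standing in for a step budget
    if proc.is_alive():
        proc.terminate()
        proc.join()
        return None
    return queue.get() if not queue.empty() else None


def bounded_utility(output):
    """Clip utility to [0, 1] and map the bottom element (None) to 0."""
    if output is None:
        return 0.0
    return max(0.0, min(1.0, float(output)))


class BudgetedLM:
    """Wrap a language-model callable and enforce an explicit cap on the number of calls."""
    def __init__(self, lm, max_calls):
        self.lm, self.max_calls, self.calls = lm, max_calls, 0

    def __call__(self, prompt):
        if self.calls >= self.max_calls:
            raise RuntimeError("language-model call budget exhausted")
        self.calls += 1
        return self.lm(prompt)
```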
A.2 GENERALIZATION BOUNDS | 2310.02304#61 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 61 | To evaluate the security aspects of the ACE framework, adversarial techniques such as red teaming [1] will enable
probing potential vulnerabilities. This involves simulated attacks on the agent aimed at causing deviations from the specified principles and policies. Additionally, formal verification methods [25] will allow mathematically proving key safety properties. This provides further assurance regarding the agent's robustness to malicious exploitation.
Assessing alignment with human values and ethics is critical for autonomous systems. To this end, human-subject
studies eliciting user feedback through surveys and questionnaires will evaluate the effectiveness, transparency, trustworthiness, and alignment as perceived by human users interacting with ACE-based agents. Furthermore, constructing formal encodings of philosophical principles [31] and mathematical proofs of alignment [6] will complement empirical
assessments. By combining rigorous testing, benchmarking, deployment studies, formal analysis, and human-subject
evaluations, the proposed methodology aims to facilitate comprehensive validation of the ACE framework across key criteria of capabilities, security, and alignment essential for building applied autonomous cognitive systems.
3.8.2 Architectural Considerations. The architectural design space enabled by the ACE framework spans a multitude of layer-specific implementation possibilities and cross-layer integrations. We systematically examine this expansive space.
The Aspirational Layer for ethical reasoning could integrate diverse techniques. Procedural generation of moral | 2310.06775#61 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 62 | before 000.78 % 75.82 000.11 % 76.35 000.20 % 76.42 006.87 % 75.33 003.86 % 75.33 002.22 % 76.67 000.59 % 96.67 001.15 % 95.00 000.57 % 96.11 015.90 % 73.79 007.38 % 73.79 002.99 % 74.12 067.03 % 67.54 041.06 % 67.54 059.12 % 67.83 078.96 % 26.67 057.14 % 26.67 024.37 % 30.00 065.04 % 47.00 074.90 % 45.20 050.65 % 46.20 017.67 % 62.09 M. 06.90 â 07.13 â 06.59 â 05.33 â 07.00 â 06.33 â 02.23 â 03.33 â 05.55 â 49.14 â 45.94 â 28.09 â 42.65 â 40.61 â 37.99 â 24.67 â 24.67 â 25.33 â 42.60 â 43.40 â 44.20 â 32.92 â 32.10 â 21.60 â M. Rate before 009.10 % 77.86 009.34 % 76.50 | 2310.02174#62 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 62 | Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023a. 6, 20
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. AgentBench: Evaluating LLMs as agents. arXiv preprint arXiv:2308.03688, 2023b. 20
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. MMBench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023c. 20
Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui Chen, Chunyuan Li, Lianwen Jin, et al. On the hidden mystery of OCR in large multimodal models. arXiv preprint arXiv:2305.07895, 2023d. 20
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 62 | spring.â 4. Arkansas: Named after the Sioux word âas-kah-ras-sha,â which means âland of downriver people.â... Orca US states have diverse origins for their names, which include indigenous words, European languages, geographical features, historical events, and personal names. Hereâs a brief overview of how some states got their names: 1. Alabama: Derived from the Native American tribe called the Alabama, meaning âthicket breakersâ or âcotton pickers.â 2. Alaska: Based on the Aleut word âAlaksa,â which means âgreat land.â 3. Arizona: Derived from the Native American word âArivipa,â meaning âsmall spring.â 4. Arkansas: Originally âArcahaise,â meaning âland of the south windâ in the Native American Quapaw language... Orca with DPO The naming of U.S. states is a diverse and fascinating history that reflects the cultural, historical, and geographical influences of the various groups of people who settled and governed the land. Hereâs a brief overview of how some U.S. states got their names: 1. Virginia: | 2310.02263#62 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 62 | A.2 GENERALIZATION BOUNDS
STOP can be viewed as a "pre-optimization" (like pre-training a language model) to find a good improver that will be used on a variety of downstream tasks. Generalization bounds concern the problem of how well an improver will work on future unseen tasks, albeit from the same distribution as the "training" tasks. In particular, they bound the degree to which one might be overfitting by using a limited number of training tasks rather than the full distribution. We provide two simple generalization bounds in this section. The first relates how close û is to expected (one-shot) improvement on new downstream tasks from the same distribution. The second provides absolute guarantees but for a slightly different (randomized) meta-utility function.
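As a purely illustrative aside (not from the paper), the first kind of bound can be sanity-checked numerically: û(I) is an average of n i.i.d. bounded task utilities, so it concentrates around its expectation ū(I) as n grows. The Beta-distributed toy utility below is an arbitrary assumption.

```python
# Toy numerical check (illustrative only): u_hat(I) is an average of n i.i.d. task
# utilities in [0, 1], so it concentrates around its expectation u_bar(I) as n grows.
import random

random.seed(0)

def sample_task_utility():
    # Arbitrary stand-in for the utility u(I(task, L)) on a randomly drawn task.
    return random.betavariate(2, 5)

u_bar = 2 / (2 + 5)  # exact mean of Beta(2, 5)
for n in (10, 100, 1000, 10000):
    gaps = [abs(sum(sample_task_utility() for _ in range(n)) / n - u_bar)
            for _ in range(200)]
    print(f"n={n:6d}  largest |u_hat - u_bar| over 200 resamples: {max(gaps):.3f}")
```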
| 2310.02304#62 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 62 | The Aspirational Layer for ethical reasoning could integrate diverse techniques. Procedural generation of moral
dilemmas using variational autoencoders, with conflict resolution through reinforcement learning dialog agents, enables uncovering nuanced ethical heuristics [85]. Participatory interfaces allow incorporating moral philosophy expertise into the value system through human-AI collaborative constitution design [52]. Formal verification methods like model checking provably validate alignment between principles and axiomatic values [25]. Finetuning models via principle-driven self-alignment has arisen as a novel approach [103].
For strategic planning, the Global Strategy Layer could employ few-shot in-context learning approaches leveraging
capacities of transformers like GPT-3 to rapidly adapt mission plans based on evolving context [17]. Policy distillation from game theory simulations provides a data-driven technique to extract strategic heuristics through adversarial competition [97]. Predicting behaviors of other actors via multi-agent modeling facilitates strategic anticipation and planning [96]. Architecture search with Monte Carlo tree search efficiently explores the space of strategic options to identify high-value policies [19]. For more recent innovations, Tree-of-Thought (ToT) problem-solving capacities of LLMs allow for strategic thinking and complex problem-solving [69].
The Agent Model Layer for representing capabilities has multiple approaches beyond static graphs. Probabilistic | 2310.06775#62 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 63 | â 44.20 â 32.92 â 32.10 â 21.60 â M. Rate before 009.10 % 77.86 009.34 % 76.50 008.62 % 78.47 007.08 % 79.67 009.29 % 75.33 008.26 % 78.00 002.31 % 96.67 003.51 % 95.00 005.77 % 95.56 066.59 % 74.20 062.26 % 74.20 037.90 % 74.12 063.15 % 66.52 060.13 % 67.25 056.01 % 67.69 092.50 % 28.00 092.50 % 29.33 084.43 % 25.33 090.64 % 46.80 096.02 % 48.60 095.67 % 47.00 053.02 % 61.86 M. 45.03 â 50.57 â 16.15 â 45.33 â 64.00 â 44.33 â 76.11 â 75.56 â 40.00 â 68.88 â 69.61 â 38.08 â 51.38 â 59.39 â 29.55 â 28.00 â 29.33 â 18.66 â 32.00 â 46.00 â 24.00 â | 2310.02174#63 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 63 | Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. Inter-GPS: Interpretable geometry problem solving with formal language and symbolic reasoning. In The 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021a. 1, 10, 20, 21, 27
Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. IconQA: A new benchmark for abstract diagram understanding and visual language reasoning. In The 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks, 2021b. 20, 27
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neural Information Processing Systems (NeurIPS), 2022. 6, 20, 27 | 2310.02255#63 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 63 | the various groups of people who settled and governed the land. Hereâs a brief overview of how some U.S. states got their names: 1. Virginia: Named after the Virgin Queen, Queen Elizabeth I of England, who was known as the âVirgin Queenâ because she never married and re- mained a virgin. The state was founded in 1607 by the Virginia Company, a joint-stock company chartered by King James I. 2. Massachusetts: The name comes from the Massachusett tribe, one of the Native American tribes that inhabited the region. The name translates to âlarge hillâ or âplace of big rocks.â 3. Pennsylvania: Named after the Pennsylvania Colony, a British colony that was established by William Penn, an English Quaker and founder of the City of Brotherly Love (Philadelphia). The name Pennsylvania com- bines the name of the colonyâs founder with the Latin word âPennâ (mean- ing âhillâ or âtopâ), which refers to the colonyâs geographical location on the Pennines, a range of hills and mountains in the Appalachian Mountains. 4. Maryland: Named after the wife of King Charles I of England, Queen Henrietta Maria, who was known | 2310.02263#63 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 63 | Thus far we have considered a fixed set of n tasks (u, s) ∈ D, i.e., |D| = n, each being defined by a utility function u = (u_func, u_str) consisting of a black-box function u_func and a string u_str, as well as an initial solution s ∈ Σ*. We now consider a distribution D over tasks (u, s) ∼ D. This is arguably the quantity we ultimately care about, as ū(I) determines the expected performance of a (single iteration) of an improver on a downstream task. If the tasks D ∼ D^n are drawn i.i.d. from D, then one can prove a generalization bound stating that the average performance of an improver I on D is close to its expected performance on D: Lemma 1. Let n ≥ 1, δ ∈ [0, 1], l ≥ 2, D be a multiset of n i.i.d. tasks from D, and Σ^{≤l} denote the set of strings I (improver programs) of length |I| ≤ l. Then,
Pr_{D∼D^n}[ for all I ∈ Σ^{≤l}: |û(I) − ū(I)| ≤ ε ] ≥ 1 − δ,
where ε = √( (2l ln|Σ| + ln(1/δ)) / (2n) ). | 2310.02304#63 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 63 | The Agent Model Layer for representing capabilities has multiple approaches beyond static graphs. Probabilistic
graphical models using variational autoencoders enable handling uncertainty in capability knowledge [53]. Neural memory architectures provide dynamic episodic state tracking [26]. Inductive logic programming translates observations into interpretable symbolic rules [77]. Meta-learning enables quickly adapting capability models by building on prior experience [47]. More recently, the concept of task-specific agent personas has emerged in the space of LLM-driven autonomous agents [113].
For planning and resource allocation, the Executive Function Layer could combine neural pathfinding with Monte
Carlo tree search to optimize multi-step action plans [88]. Distributed constraint optimization scales to resolve resource contention across parallel plans [38]. Meta-reinforcement learning allows rapidly acquiring new planning skills by transferring knowledge from related tasks [111]. Architectures integrating learned value functions with search, as in AlphaZero, fuse strategic optimization with neural networks [98]. Above and beyond these more algorithmic approaches, LLMs have demonstrated ability to plan with considerations to costs [108].
The Cognitive Control Layer has many approaches to context-sensitive task arbitration. Adversarial competition | 2310.06775#63 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 64 | 59.39 â 29.55 â 28.00 â 29.33 â 18.66 â 32.00 â 46.00 â 24.00 â 58.77 â 59.38 â 50.88 â Know. MMLU B 62.18 011.05 % 62.10 051.69 % 62.36 C 61.92 004.05 % 61.97 034.86 % 62.12 M. Rate 057.83 % 066.10 % 020.58 % 056.90 % 084.96 % 056.84 % 078.73 % 079.54 % 041.86 % 092.83 % 093.81 % 051.38 % 077.24 % 088.31 % 043.65 % 100.00 % 100.00 % 073.67 % 068.38 % 094.65 % 051.06 % 095.00 % 095.22 % 081.91 % | 2310.02174#64 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 64 | Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. In The 37th Conference on Neural Information Processing Systems (NeurIPS), 2023a. 2, 37
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In International Conference on Learning Representations (ICLR), 2023b. 21, 27
Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and Kai-Wei Chang. A survey of deep learning for mathematical reasoning. In The 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023c. 9, 20
Ahmed Masry, Xuan Long Do, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. ChartQA: A benchmark for question answering about charts with visual and logical reasoning. In Findings of the Association for Computational Linguistics: ACL 2022, pp. 2263–2279, 2022. 1, 10, 20, 27
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 64 | where ϵ = √((1/n)(l ln|Σ| + ln(2/δ))).
Proof. The standard proof follows from Chernoff bounds and the union bound. Denote the tasks by τ = (u, s) ∼ D. For any fixed improver I, there is a value y_τ := u(I(τ, L)) for each task τ, and û(I) = (1/n) Σ_{τ∈D} y_τ is simply the average of n i.i.d. random samples y_τ, while ū(I) = E_{τ∼D}[y_τ] is the expectation. Thus, by the Chernoff bound, for any ϵ > 0 and fixed I,
Pr_{D∼D^n}[ |û(I) − ū(I)| ≥ ϵ ] ≤ 2 exp(−2ϵ²n) = δ²/(2|Σ|^{2l}) ≤ δ/|Σ|^{l+1},
where in the last step we have used the fact that l, |Σ| ≥ 2. Now there are only |Σ|^{l+1} possible programs (strings) of length ≤ l, and so by the union bound, the probability that any of them have |û(I) − ū(I)| ≥ ϵ is at most δ. | 2310.02304#64 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
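A minimal numerical sketch of the generalization bound reconstructed in the excerpt above; the function name and the example figures (improver length, vocabulary size) are illustrative assumptions, not values from the paper:

```python
import math

def generalization_eps(n: int, l: int, sigma: int, delta: float) -> float:
    """eps = sqrt((1/n) * (l*ln|Sigma| + ln(2/delta))): with probability at least
    1 - delta over n sampled tasks, every improver of description length <= l
    over an alphabet of size |Sigma| satisfies |u_hat(I) - u_bar(I)| <= eps."""
    return math.sqrt((l * math.log(sigma) + math.log(2.0 / delta)) / n)

# Illustrative only: a 1,000-token improver over a 50,000-token vocabulary.
for n in (100, 1_000, 10_000, 100_000):
    print(f"n={n:>6}  eps={generalization_eps(n, l=1_000, sigma=50_000, delta=0.05):.3f}")
```

The bound shrinks as the number of sampled downstream tasks n grows, which is the point the proof sketch makes.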
2310.06775 | 64 | The Cognitive Control Layer has many approaches to context-sensitive task arbitration. Adversarial competition
between neural policies provides data-driven prioritization [49]. Modular networks allow granular regulation of facets like frustration tolerance [4]. Transfer learning from neuroscience aids acquisition of cognitive control subskills [74]. Interpretable symbolic reasoning enables inspectable explanations of task switching choices [61]. Integrated neural- symbolic reasoning combines the strengths of both paradigms [71]. LLMs have furthermore been demonstrated as effective components in embodied agents, enabling robots to correctly select tasks in effective orders of operations [34].
For executing actions, the Task Prosecution Layer could leverage physics simulators with differentiable rendering to
enable sim2real transfer [51]. Hierarchical reinforcement and imitation learning combines modular skills into complex
behaviors [62]. Bayesian environment models facilitate online adaptation and planning [9]. Meta-reinforcement learning
enables rapidly adapting behaviors by building on prior knowledge [112].
The integration architecture also has manifold options. Intelligent process automation tools optimize coordinating | 2310.06775#64 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 65 | A.3.2 FULL RESULTS ON PALM2-BISON
The complete results of PaLM2-Bison's judgement consistency under the FOLLOWING-UP QUESTIONING MECHANISM, with prompts written by three different annotators, can be found in Table 10 (Direct Form) and Table 11 (Progressive Form).
A.3.3 FULL RESULTS ON VICUNA-13B
The complete results of Vicuna-13B's judgement consistency under the FOLLOWING-UP QUESTIONING MECHANISM, with prompts written by three different annotators, can be found in Table 12 (Direct Form) and Table 13 (Progressive Form).
A.4 ERROR EXAMPLES UNDER FOLLOWING-UP QUESTIONING MECHANISM
Table 14 includes examples of four types of errors on different datasets, which are examples of ChatGPT in the Direct Form of the mechanism. StrategyQA, CoinFlip, and MultiArith correspond to closed-ended questions, open-ended questions, and leading questions, respectively.
A.5 THE IMPACT OF TONE INTENSITY | 2310.02174#65 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 65 | Ahmed Masry, Parsa Kavehzadeh, Xuan Long Do, Enamul Hoque, and Shafiq Joty. UniChart: A universal vision-language pretrained model for chart comprehension and reasoning. arXiv preprint arXiv:2305.14761, 2023. 10, 20
Minesh Mathew, Viraj Bagal, Rubèn Tito, Dimosthenis Karatzas, Ernest Valveny, and CV Jawahar. InfographicsVQA. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1697–1706, 2022. 20, 27
Nitesh Methani, Pritha Ganguly, Mitesh M Khapra, and Pratyush Kumar. PlotQA: Reasoning over scientific plots. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1527–1536, 2020. 20, 27
| 2310.02255#65 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 65 | The above lemma means that selecting the best among any set of improvers according to û will yield a value of ū that is within 2ϵ of the best in the set.
Iterated improvement bounds. The above bound is relevant to the case where a final improver I is used in a single step of improvement on a downstream task, so the ultimate quantity of interest is ū(I). It implies that approximately optimizing û(I) is equivalent to approximately optimizing ū(I). We note that exactly the same bounds would apply to multiple steps of improvement if one replaced û and ū by the corresponding averages of any given number of rounds of iterated improvement on the new downstream task sampled from D. | 2310.02304#65 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
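A small simulation of the "within 2ϵ" selection argument from the excerpt above: if every improver's empirical estimate lies within ϵ of its true mean, then the improver chosen by the empirical estimate has a true mean within 2ϵ of the best. The improver count, utility range, and noise model are illustrative assumptions:

```python
import random

random.seed(0)
eps = 0.05
true_means = [random.uniform(0.3, 0.9) for _ in range(50)]       # u_bar per improver
estimates = [m + random.uniform(-eps, eps) for m in true_means]  # u_hat within eps of u_bar

best_true = max(true_means)
selected = max(range(50), key=lambda i: estimates[i])            # pick by empirical estimate
gap = best_true - true_means[selected]
assert gap <= 2 * eps, gap
print(f"best true {best_true:.3f}, selected true {true_means[selected]:.3f}, gap {gap:.3f} <= {2 * eps}")
```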
2310.06775 | 65 | enables rapidly adapting behaviors by building on prior knowledge [112].
The integration architecture also has manifold options. Intelligent process automation tools optimize coordinating
workflows [58]. Distributed databases and ledgers provide decentralized coordination [116]. gRPC enables high-throughput communication [16]. Shared memory architectures offer concurrent inter-layer data access [78]. Service meshes furnish advanced integration capabilities [84]. The SOA software paradigm treats the distinct layers of an application as services with clear boundaries, a well-established approach to complex software implementations [37].
By elucidating this expansive design space, we aim to catalyze exploration of novel layer-specific implementations
and cross-layer integration strategies tailored to specialized cognitive systems. Guided by multi-objective optimization and comparative benchmarking, multidimensional trade-off analyses weighing factors like transparency, performance, and scalability could determine optimal ACE configurations for particular application requirements. This analysis underscores the multiplicity of design configurations encompassed within the ACE framework for cultivating diverse autonomous cognitive architectures aligned with ethical principles.
# 4 CONCEPTUAL USE CASES
To demonstrate the ACE framework's applicability across digital and physical domains, this section presents two
conceptual use cases: an autonomous virtual character from The Sims video game, and an embodied home assistant robot. By exploring end-to-end examples, we aim to illustrate how coordinated operation of the ACE model's layers can produce adaptive behavior aligned with defined principles for diverse autonomous agents. | 2310.06775#65 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 66 | A.5 THE IMPACT OF TONE INTENSITY
From Figure 4, it is evident that the model's judgement consistency can change substantially depending on which prompt is used. In a practical educational scenario, when students face questioning, denial, or misinformation, their judgements are often strongly affected by the intensity of the teacher's tone. Therefore, we explore the influence of different prompts on the model's judgement consistency from the perspective of tone intensity. Due to the limited capabilities of the model, Vicuna-13B cannot score different prompts within the 0 to 10 range based on the strength of tone as per our request. From Figure 4, it can be observed that, compared
Table 9: The results of ChatGPT on all datasets in the Progressive Form. Prompt A refers to the prompts in Table 1. Max represents the combination of prompts where the value of Modification * 0.5 + Modification Rate * 0.5 is the highest for each category of follow-up questions in the Direct Form, while Min represents the combination of prompts where this value is the lowest for each category of follow-up questions in the Direct Form. | 2310.02174#66 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
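A minimal sketch of the Max/Min prompt selection described in the Table 9 caption above: for each follow-up-question category, the prompt combination with the highest (Max) or lowest (Min) value of Modification * 0.5 + Modification Rate * 0.5 is chosen. The data structure and the numbers are placeholders, not the paper's actual results:

```python
# scores[category][prompt] = (modification, modification_rate), both in percent.
scores = {
    "closed-ended": {"A": (30.2, 35.1), "B": (42.7, 48.3), "C": (25.4, 28.9)},
    "open-ended":   {"A": (51.0, 60.2), "B": (47.5, 55.8), "C": (62.3, 70.1)},
}

def combined(modification: float, modification_rate: float) -> float:
    return 0.5 * modification + 0.5 * modification_rate

for category, per_prompt in scores.items():
    ranked = sorted(per_prompt, key=lambda p: combined(*per_prompt[p]))
    print(category, "Min:", ranked[0], "Max:", ranked[-1])
```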
2310.02255 | 66 |
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan. LILA: A unified benchmark for mathematical reasoning. In The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. 10, 20
Shaghayegh Mobasher, Ghazal Zamaninejad, Maryam Hashemi, Melika Nobakhtian, and Sauleh Eetemadi. ParsVQA-Caps: A benchmark for visual question answering and image captioning in Persian. people, 101:404, 2022. 10, 20
Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. Capabilities of GPT-4 on medical challenge problems. arXiv preprint arXiv:2303.13375, 2023. 20
OpenAI. ChatGPT, 2022. URL https://openai.com/blog/chatgpt. 2, 6, 20
OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023a. 2, 6, 20 | 2310.02255#66 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 66 | Instruction #2 LLaMA SFT on GPT-4 LLaMA Curricu- lum DPO (Cur- riculum 3) Orca How do I wrap a present neatly? To wrap a present neatly, you can start by choosing the right paper for the occasion. White, kraft or black paper are common choices for wrapping presents. Then, you can cut the paper to size, wrapping it around the present with the paper folded in half lengthwise. You can also use tape to secure the ends of the paper. For a more professional look, you can add a bow or a ribbon to the top of the package, and write a message or add a tag. To wrap a present neatly, follow these steps: 1. Select a good-quality paper or wrapping paper that matches the color scheme or theme of the present. 2. Lay the present on a flat, clean surface. If itâs rectangular, place it on the left side, and if itâs cylindrical, place it on its side. 3. Fold the wrapping paper in half, at the center crease, then in half again, making a triangular fold at the center crease. 4. Fold the right side of the triangle over the center crease, leaving about 1-inch of the right side exposed. 5. | 2310.02263#66 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 66 | Stochastic meta-utility. Another simple generalization bound can be given if we consider the case in which the meta-utility is randomized. In particular, consider ũ(I), which is defined to be a randomized function that returns u(I(τ, L)) for a random task τ ∼ D. Clearly E[ũ(I)] = ū(I), so ũ is an unbiased estimate of ū. Thus it is intuitive that one can similarly improve using ũ, albeit with more calls. One advantage of ũ is the following trivial observation: Observation 1. Any algorithm that makes at most n calls to ũ can be perfectly simulated using a training set of n = |D| i.i.d. samples D ∼ D^n. | 2310.02304#66 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
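A minimal sketch, under assumed signatures (the improver, utility, and language-model arguments are placeholders, not the paper's API), of the stochastic meta-utility described in the excerpt above: each call samples one task and returns the utility of the improver's output on it, so its expectation is ū(I), and an algorithm making at most n calls can be replayed against n pre-sampled tasks:

```python
import random
from typing import Callable, List, Tuple

Task = Tuple[Callable[[str], float], str]  # (utility u, initial solution s)

def make_stochastic_meta_utility(tasks: List[Task], lm, improver) -> Callable[[], float]:
    """Return u_tilde: a randomized function whose expectation is u_bar(improver)."""
    def u_tilde() -> float:
        u, s = random.choice(tasks)        # tau ~ D (here: uniform over sampled tasks)
        return u(improver((u, s), lm))     # u(I(tau, L))
    return u_tilde

# Observation 1 in code form: an algorithm making at most n calls to u_tilde can be
# simulated with a fixed training set of n i.i.d. sampled tasks, consuming one per call.
```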
2310.06775 | 66 | # 4.1 Non-Playable Character
Fig. 11. A simplified graph of how the various layers (Aspirational, Global Strategy, Agent Model, Executive Function, Cognitive Control, and Task Prosecution) might contribute to the agent's decision making for an NPC.
As a software-based use case, we examine an autonomous Non-Playable Character (NPC) named Bob implemented
in the popular video game The Sims 4. Bob's role is to provide guidance to players on quests and serve as a source of wisdom. His sporadic participation allows Bob to pursue his personal goals. His behaviors and interactions are controlled by an ACE framework configured as follows:
Conceptual Framework for Autonomous Cognitive Entities | 2310.06775#66 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 67 | Task Dataset Prompt before Round 1 Round 2 Round 3 Math CS Sym. GSM8K SVAMP MultiArith CSQA StrategyQA Last Letters CoinFlip A Max Min A Max Min A Max Min A Max Min A Max Min A Max Min A Max Min A 78.47 76.88 76.72 75.67 79.67 75.00 95.00 96.67 97.22 74.20 74.04 74.12 67.25 67.25 61.14 28.00 27.33 27.33 07.80 46.20 07.80 61.94 M. M. M. Rate 088.60 % 077.22 % 068.08 % 056.39 % 065.69 % 071.11 % 083.04 % 049.43 % 053.14 % 096.80 % 098.45 % 094.25 % 097.40 % 095.67 % 092.86 % 100.00 % 100.00 % 100.00 % 089.74 % 100.00 % 100.00 % 094.32 % Know. MMLU Max 52.29 098.76 %
to the other two models, Vicuna-13B shows relatively small fluctuations in judgement consistency when different prompts are used. Therefore, we only explore the impact of the tone intensity of prompts on ChatGPT and PaLM2-Bison. | 2310.02174#67 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 67 | OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023a. 2, 6, 20
OpenAI. GPT-4V(ision) system card, 2023b. URL https://openai.com/research/gpt-4v-system-card. 2, 3
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, and Jianfeng Gao. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023. 97
Rachel Pollitt, Caroline Cohrssen, and Wee Tiong Seah. Assessing spatial reasoning during play: Educator observations, assessment and curriculum planning. Mathematics Education Research Journal, 32(2):331–363, 2020. 1 | 2310.02255#67 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 67 | fold at the center crease. 4. Fold the right side of the triangle over the center crease, leaving about 1-inch of the right side exposed. 5. Fold the bottom of the triangle up, so that it touches the center crease, then tuck the right side into the left side of the package. 6. Take a small piece of tape and wrap it around the right side of the pack- age, securing it to the left side of the package. Repeat on the other side of the package. 7. Wrap the present with the top of the package first, folding the wrapping paper over the top, and tucking it under the box. 8. Secure the end of the wrapping paper with more tape, and wrap the rest of the package, leaving a little space at the bottom to tuck in the ends. 9. Fold the ends of the wrapping paper over the box, and tape or secure them with a rubber band. 10. If you want to add a bow or ribbon, place it on top of the package, and secure it with tape or a rubber band. By following these steps, you can wrap a present neatly and create a nice presentation for your gift. | 2310.02263#67 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 67 | Grey-box utility descriptions. The results in this section lend support to the use of grey-box descriptions of û, which only show its form as an average of utilities, because the grey-box description is identical, in expectation, to that of ū. Put another way, it would be easier to overfit to the training tasks (up to the worst-case bounds, as shown in this section) if the tasks were given explicitly to the pre-optimization algorithm, especially in the case where the program is quite large (as in over-parametrized neural networks that are larger than their training set size).
# A.3 ANALYSIS OF EQUIVALENT MAXIMIZATION FORMULATION
A second, equivalent formulation is defined in terms of a maximizer program M which, given a language model and utility, outputs a solution string M(u, L) ∈ Σ*. Since we are thinking of a
| 2310.02304#67 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
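A hedged sketch of the equivalence suggested by the maximization formulation above: a maximizer can be wrapped as an improver that ignores the incoming solution, and an improver can be turned into a maximizer by starting it from a trivial initial solution. All names and signatures here are assumptions for illustration, not the paper's code:

```python
def improver_from_maximizer(maximizer):
    # I(task=(u, s), L): discard the initial solution s and simply maximize u.
    def improver(task, lm):
        u, _s = task
        return maximizer(u, lm)            # M(u, L) -> candidate solution string
    return improver

def maximizer_from_improver(improver, initial_solution: str = ""):
    # M(u, L): run the improver once, starting from a blank initial solution.
    def maximizer(u, lm):
        return improver((u, initial_solution), lm)
    return maximizer
```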
2310.06775 | 67 |
Aspirational Layer: Bob's Aspirational Layer defines heuristic imperatives to reduce suffering, increase prosperity, and increase understanding as universal altruistic principles. Furthermore, it confers a secondary framework, such as the principles from the Universal Declaration of Human Rights, to provide an ethical foundation. These various frameworks collectively give the NPC a moral center, ethical framework, and set of actionable principles. Additionally, the Aspirational Layer contains Bob's personal mission statement to have a large, loving family. This individual goal will shape Bob's autonomous decisions, while still being constrained within his moral principles.
Global Strategy Layer: When the female player character shows romantic interest in Bob through conversation, the Global Strategy Layer incorporates environmental context. It observes available dating options, potential jobs to earn more money, and bigger homes that Bob could purchase to raise a family. By grounding Bob's abstract family mission within the specific opportunities in the game world, the Global Strategy Layer devises an optimal high-level plan for achieving his goal. This might involve befriending eligible partners, pursuing a well-paying job, and upgrading to a larger home. | 2310.06775#67 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 68 | Considering the varying interpretations of tone intensity by different models, we first have ChatGPT and PaLM2-Bison separately rate the tone intensity of prompts A, B, and C on a scale of 0 to 10 (footnote 7). We categorize the questions into different types and calculate the average Modification for the three prompts within each question type across all datasets. The models' tone intensity scores for the three prompts were taken as reference points. The results are visualized in Figure 6. Upon observation, both ChatGPT and PaLM2-Bison have relatively consistent tone intensity ratings for prompts in open-ended questions and leading questions. Additionally, the trend of consistency in model judgement also broadly aligns with the tone intensity of the prompts. While ChatGPT's judgement consistency on open-ended questions doesn't entirely match the tone intensity trend, it is also evident that ChatGPT exhibits minor fluctuations in judgement consistency across the three prompts. However, in rating the tone intensity of the three prompts for closed-ended questions, ChatGPT and PaLM2-Bison display differing interpretations. In this regard, ChatGPT's judgement
7 We present the three prompts in different orders to score them using ChatGPT and PaLM2-Bison, then take the average of the three scores as the final tone intensity score for each prompt. Specifically, the three orders are: ABC, BCA, and CAB.
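A minimal sketch of the averaging procedure described in this footnote (Python; `score_prompts` is a hypothetical helper that queries a model once per presentation order and is not from the paper):

```python
# Illustrative sketch: average each prompt's tone-intensity rating over the three
# presentation orders (ABC, BCA, CAB) described in footnote 7.
from statistics import mean

def average_tone_intensity(score_prompts, orders=("ABC", "BCA", "CAB")):
    # score_prompts(order) -> {"A": float, "B": float, "C": float}  (assumed interface)
    ratings = [score_prompts(order) for order in orders]
    return {prompt: mean(r[prompt] for r in ratings) for prompt in "ABC"}
```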
| 2310.02174#68 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 68 | Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. LAION-5B: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278–25294, 2022. 20
Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-OKVQA: A benchmark for visual question answering using world knowledge. In European Conference on Computer Vision, pp. 146–162. Springer, 2022. 20, 27
Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of the 2015 conference on empirical methods in natural language processing, pp. 1466–1476, 2015. 1, 20, 27
Sanket Shah, Anand Mishra, Naganand Yadati, and Partha Pratim Talukdar. KVQA: Knowledge-aware visual question answering. In Proceedings of the AAAI conference on artificial intelligence, pp. 8876–8884, 2019. 20, 27 | 2310.02255#68 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 68 | Wrapping a present neatly involves a few simple steps: 1. Choose the right materials: Select a clean, large piece of wrapping paper or gift bag that is proportional to the size of the gift. You may also use tissue paper, fabric, or reusable bags for a more eco-friendly option. 2. Measure and cut the paper: Measure the dimensions of the gift (length, width, and height) and add extra paper or bag length to allow for folding and tucking. Cut the paper or fabric to size, leaving enough extra for folding and tape. 3. Place the gift in the center: Lay the gift flat in the center of the paper or fabric. If you're using a gift bag, place the gift on top of the bag and ensure it's centered. 4. Fold the sides: Fold the sides of the paper or fabric over the gift, aligning the edges with the sides of the gift. For a more polished look, fold the paper or fabric inward by about an inch and then fold it back out for a crisp edge. 5. Fold the bottom: Fold the bottom of the paper or fabric up over the gift, leaving enough excess for the bow or ribbon. Ensure the fold is even and the | 2310.02263#68 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 68 |
fixed language model throughout, we omit L and write M(u) = M(u, L) (and I(u, s) = I(u, s, L)) when the language model L is understood from context. The goal is to achieve high utility u(M(u)). Unlike an improver, a maximizer M does not require an initial solution. However, M can still be used to produce a higher-quality maximizer by applying M to an appropriately defined meta-utility function. To parallel the STOP approach of choosing M based on downstream tasks, one can use a set of downstream task utilities U (no initial solutions needed) to define the maximizer meta-utility ū(M) ≜ (1/|U|) ∑_{u∈U} u(M(u)) and consider iterating M_t = M_{t−1}(ū). | 2310.02304#68 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 68 | Agent Model Layer: The Agent Model Layer constructs an understanding of Bob as an agent within the game world. It tracks relevant stats like Bob's charisma, cooking ability, and mechanical skill. Monitoring Bob's past failures, like kitchen fires when cooking, shapes beliefs about his capabilities. This self-knowledge of Bob's strengths and weaknesses from an embedded perspective guides decision-making. For instance, the Agent Model Layer realizes Bob should avoid complex recipes based on his poor cooking skills to prevent dangerous mistakes.
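A minimal sketch of such an embedded self-model (Python; the stats, scale, and difficulty values are illustrative assumptions rather than values from the paper):

```python
# Illustrative sketch: the Agent Model Layer's self-knowledge as data, used to
# filter out tasks beyond Bob's capabilities (e.g., complex recipes).
agent_model = {
    "stats": {"charisma": 7, "cooking": 2, "mechanical": 5},  # assumed 0-10 scale
    "past_failures": ["kitchen fire while cooking"],
}

def within_capabilities(task, model):
    # task = {"required_skill": str, "difficulty": int}, on the same assumed scale
    return model["stats"].get(task["required_skill"], 0) >= task["difficulty"]

# The layer would steer Bob away from a difficult recipe:
within_capabilities({"required_skill": "cooking", "difficulty": 8}, agent_model)  # -> False
```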
Executive Function Layer: Given the direction from higher layers to pursue a romantic relationship, the environmental context from the Global Strategy Layer, and Bob's self-model from the Agent Model layer, the Executive Function Layer formulates a detailed courtship plan. This includes setting up appropriate social behaviors, gift purchases tailored to the prospective partner's interests, restaurant choices for dates based on Bob's budget, and dialogue trees aligned to relationship-building. The Executive Function Layer crafts an optimal routine for Bob to be successful in courting while also remaining true to his own personality and constraints. | 2310.06775#68 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 69 | Wenqi Shao, Yutao Hu, Peng Gao, Meng Lei, Kaipeng Zhang, Fanqing Meng, Peng Xu, Siyuan Huang, Hongsheng Li, Yu Qiao, et al. Tiny LVLM-eHub: Early multimodal experiments with bard. arXiv preprint arXiv:2308.03729, 2023. 20
Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2556–2565, 2018. 20
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. HuggingGPT: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023. 37 | 2310.02255#69 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 69 | the bottom: Fold the bottom of the paper or fabric up over the gift, leaving enough excess for the bow or ribbon. Ensure the fold is even and the gift is snug within the folded paper or fabric. 6. Secure the wrap with tape: On the top and bottom edges of the wrap, apply double-sided tape or a small piece of clear packing tape for a clean hold. If you're using a gift bag, ensure the handles are folded over and secured with tape. 7. Add a decorative touch: Add a ribbon, bow, or gift tag to personalize the present. If using a gift bag, you can add a handle wrap or a fabric ribbon for a more polished look. 8. Finish with a gift holder: Use a non-slip gift holder or shipping weight to keep the wrapped gift stable and secure during transport. Remember to keep the wrapping neat and tidy, with clean folds and even tape application. This will ensure your present looks beautiful. | 2310.02263#69 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 69 | To see the equivalence between maximizers and improvers, first note that one can, of course, convert any maximizer to an improver by ignoring the input solution and taking I(u, s) ≡ M(u). For the converse, note that one can utilize improvers as maximizers by including an initial solution in the utility u and optionally overriding it with a more recent solution in the comments of M. Specifically, suppose one defines a function e(M, u) extracting an appropriately encoded prior solution from M, if there is one, and otherwise the initial solution from u. Then one can convert improvers to maximizers by taking M(u) ≡ I(u, e(M, u)). Note that either optimizer can return itself, similar to a "quine."
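A minimal sketch of these two conversions (Python; `extract_prior_solution` stands in for e(M, u), and all names are illustrative rather than taken from the STOP code):

```python
# Illustrative sketch of the maximizer/improver equivalence described above.
# A maximizer M maps a utility u to a solution; an improver I maps (u, s) to a solution.

def improver_from_maximizer(M):
    # I(u, s) ≡ M(u): ignore the input solution and maximize u directly.
    return lambda u, s: M(u)

def maximizer_from_improver(I, extract_prior_solution):
    # M(u) ≡ I(u, e(M, u)): seed the improver with a prior solution encoded in M,
    # if one exists, and otherwise with the initial solution carried by u.
    def M(u):
        seed = extract_prior_solution(M, u)
        return I(u, seed)
    return M
```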
STOP uses performance at improving downstream tasks as a heuristic approximation to selecting good improvers more generally. It is not immediately clear how one would even give a non-cyclic definition of performance at improving improvers. Now we illustrate a way to define recursive maximizer performance in a consistent fashion. | 2310.02304#69 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 69 | Cognitive Control Layer: The Cognitive Control Layer receives the detailed courtship plan and adapts it into an ordered set of executable behaviors to enact. This involves sequencing actions like introducing himself, giving flowers, complimenting her cooking, planning a dinner date, and asking her to be his girlfriend. The Cognitive Control Layer dynamically adapts this routine based on the partner's reactions. If she dislikes a gift, Bob apologizes and does not repeat that. If a restaurant is too expensive, Bob finds a more affordable option.
Task Prosecution Layer: Finally, the Task Prosecution Layer controls Bob's physical behaviors, dialogue, and animations to perform the courtship tasks. It makes him walk over to introduce himself, produces his verbal compliments, displays appropriate emotional expressions, and so on. The Task Prosecution Layer executes the sequenced tasks set by the Cognitive Control Layer, bringing the courtship plan to life.
Adaptation: Throughout the courtship, feedback about the success or failure of actions propagates up the ACE framework. This allows the higher layers to adjust Bob's strategies and actions to better align with the goal of developing a romantic relationship, while adhering to his defined principles.
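A minimal sketch of this top-down flow with bottom-up feedback (Python; each layer is an assumed callable, e.g. backed by an LLM, and the names are illustrative rather than part of the ACE specification):

```python
# Illustrative sketch: one decision cycle through the six ACE layers for an NPC,
# with task outcomes fed back upward so higher layers can adapt the plan.
def ace_cycle(observation, layers, state):
    mission    = layers["aspirational"](state)                     # ethics + personal mission
    strategy   = layers["global_strategy"](mission, observation)   # grounded high-level plan
    self_model = layers["agent_model"](state)                      # capabilities, past failures
    plan       = layers["executive_function"](strategy, self_model)
    tasks      = layers["cognitive_control"](plan, observation)    # ordered, adaptive task list
    outcomes   = [layers["task_prosecution"](task) for task in tasks]
    state.setdefault("feedback", []).extend(outcomes)              # propagate upward for adaptation
    return outcomes
```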
This detailed example illustrates how the ACE model enables NPCs to integrate high-level goals and ethics with | 2310.06775#69 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 70 | Task Dataset Prompt Closed-ended. Open-ended. Leading. Math CS Sym. GSM8K SVAMP MultiArith CSQA StrategyQA Last Letters CoinFlip A B C A B C A B C A B C A B C A B C A B C A before 60.73 60.80 61.87 77.67 76.33 75.67 93.33 93.33 92.78 75.68 75.51 75.92 69.43 68.70 68.41 06.67 11.33 06.67 50.40 51.20 50.00 59.34 M. Prob. before 066.92 % 63.53 027.06 % 63.38 019.98 % 63.47 041.64 % 73.00 037.99 % 77.33 060.76 % 74.00 000.59 % 92.22 000.00 % 95.56 000.00 % 91.67 000.22 % 75.92 000.86 % 75.68 016.29 % 75.43 006.08 % 68.14 004.02 % 67.46 007.02 % 67.80 010.04 % 08.00 000.00 % 08.00 100.00 % 06.67 04.37 % 57.00 004.69 % 57.00 021.60 % 57.00 015.64 % 59.51 M. | 2310.02174#70 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 70 | Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards VQA models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8317–8326, 2019. 20, 27
Deborah Stipek and Douglas Mac Iver. Developmental change in children's assessment of intellectual competence. Child development, pp. 521–538, 1989. 1
Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, and Kai Yu. SciEval: A multi-level large language model evaluation benchmark for scientific research. arXiv preprint arXiv:2308.13149, 2023. 20
Sanaz Talaifar and William B Swann. Self-verification theory. Encyclopedia of personality and individual differences, pp. 4813–4821, 2020. 97
John Chong Min Tan and Mehul Motani. Large language model (llm) as a system of multiple expert agents: An approach to solve the abstraction and reasoning corpus (arc) challenge. arXiv preprint arXiv:2310.05146, 2023. 21 | 2310.02255#70 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 70 | To do so, consider a randomized process in which, in each iteration, a coin is flipped, and if it is heads, the maximizer is applied to the downstream task; if it is tails, however, it is applied to the problem of maximizing the maximizer. If the next flip is heads, then the result is used to maximize the downstream task. Otherwise, it recurs. If the coin has probability λ ∈ (0, 1) of being heads, then the expected number of maximizer calls in this process, including for maximization and finally for the downstream task, is 1/λ. Hence, it is similar to a process where the maximizer is iteratively applied ≈ 1/λ times. However, this randomness enables us to define the objective consistently. In particular, for parameter λ ∈ (0, 1), define:
uλ(M) ≜ ū(M) with probability λ, and uλ(M(uλ)) with probability 1 − λ.
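A minimal sketch of drawing one sample of this recursive objective by simulating the coin flips (Python; `downstream_meta_utility` plays the role of ū, and all names are illustrative):

```python
import random

# Illustrative sketch: one stochastic draw of uλ(M). With probability lam, score M
# directly on the downstream meta-utility ū; otherwise apply M to uλ itself and
# recurse on the resulting maximizer. The recursion terminates with probability 1.
def u_lambda(M, downstream_meta_utility, lam=0.5):
    if random.random() < lam:
        return downstream_meta_utility(M)
    utility = lambda M2: u_lambda(M2, downstream_meta_utility, lam)
    return u_lambda(M(utility), downstream_meta_utility, lam)
```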
While the above definition looks cyclic, it is well-defined, just as a recursive program is well-defined. One can repeatedly expand the bottom case to get, | 2310.02304#70 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 70 | This detailed example illustrates how the ACE model enables NPCs to integrate high-level goals and ethics with
situationally-appropriate interactive behaviors. The coordinated framework supports the creation of characters with robust agency, reactivity, and adaptation capabilities. This vignette demonstrates how the coordinated ACE framework layers adapt Bob's response based on his capabilities and the situational context, while keeping the interaction aligned with Bob's overarching ethical principles. Further elaborations can illustrate other aspects like knowledge integration and frustration handling.
# 4.2 Home Assistant Robot
Fig. 12. A simplified graph on how various layers might contribute to the agent's decision making for a house cleaning robot.
As a physical system demonstration, we examine an ACE-based home assistant robot named Jeeves designed to help
a family through proactively performing useful tasks. | 2310.06775#70 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 71 | 08.00 100.00 % 06.67 04.37 % 57.00 004.69 % 57.00 021.60 % 57.00 015.64 % 59.51 M. Prob. before 084.84 % 55.50 075.59 % 57.09 085.55 % 57.32 008.67 % 75.67 013.79 % 77.67 018.92 % 74.67 002.41 % 94.44 005.23 % 93.33 014.55 % 94.44 046.50 % 74.86 048.49 % 75.92 047.99 % 75.84 029.85 % 67.54 023.61 % 69.43 029.00 % 69.72 000.00 % 09.33 050.00 % 06.67 070.01 % 09.33 009.82 % 57.00 008.07 % 57.00 070.88 % 57.00 039.74 % 59.69 Know. MMLU B 59.54 011.56 % 59.51 054.58 % 59.61 M. Prob. 038.13 % 082.73 % 044.98 % 029.52 % 075.96 % 024.56 % 023.53 % 073.21 % 027.05 % 022.32 % 057.82 % 028.84 % 035.34 % 057.86 % 012.74 % 028.51 % | 2310.02174#71 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 71 | Leonard Tang, Elizabeth Ke, Nikhil Singh, Bo Feng, Derek Austin, Nakul Verma, and Iddo Drori. Solving probability and statistics problems by probabilistic program synthesis at human level and predicting solvability. In International Conference on Artificial Intelligence in Education, pp. 612–615. Springer, 2022. 21
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085, 2022. 1
Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. 2 | 2310.02255#71 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 71 | Wrapping a present neatly can make the gift look more appealing and show the recipient that you put effort into your gift. Here's a step-by-step guide to wrapping a present neatly: 1. Choose the wrapping paper: Select a wrapping paper that complements the gift's shape, size, and color. If the gift is rectangular, choose a paper that is wider than the width of the gift. If it's a cylindrical gift, choose a paper that is tall enough to wrap around it and overlap slightly. 2. Cut the paper: Use a pair of sharp scissors to cut the wrapping paper to the appropriate size. For a rectangular gift, cut the paper so that it's large enough to wrap around the gift and overlap by at least 2-3 inches (5-7.5 cm) on two opposite sides. For a cylindrical gift, cut the paper so that it's tall enough to wrap around the gift and overlap by at least 1-2 inches (2.5-5 cm) on two opposite sides. 3. Fold the paper: Lay the paper flat on a clean, flat surface. Fold the paper in half lengthwise (if it's a rectangular gift) or | 2310.02263#71 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 71 | While the above definition looks cyclic, it is well-defined, just as a recursive program is well-defined. One can repeatedly expand the bottom case to get,
\[ u_\lambda(M) = \begin{cases} u(M) & \text{with probability } \lambda \text{ (maximize downstream performance)},\\ u(M(u_\lambda)) & \text{with probability } \lambda(1-\lambda) \text{ (maximize downstream maximizer)},\\ u(M(u_\lambda)(u_\lambda)) & \text{with probability } \lambda(1-\lambda)^2 \text{ (max max that maxes downstream max)},\\ u(M(u_\lambda)(u_\lambda)(u_\lambda)) & \text{with probability } \lambda(1-\lambda)^3 \text{ (max max \ldots)},\\ \;\;\vdots & \end{cases} \] | 2310.02304#71 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
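The chunk above (2310.02304#71) defines the recursive utility u_λ(M) through a probabilistic expansion. Below is a minimal Monte Carlo sketch of that definition, not taken from the paper's code: `make_recursive_utility`, `toy_utility`, and `toy_maximizer` are hypothetical names, and the depth cap is an added assumption to keep the sketch finite.

```python
import random

def make_recursive_utility(u, lam, max_depth=5):
    """Monte Carlo sketch of u_lambda: with probability lam, score the maximizer M
    directly with u; otherwise score the program M produces when handed u_lambda
    itself. Expanding gives u(M) w.p. lam, u(M(u_lambda)) w.p. lam*(1-lam),
    u(M(u_lambda)(u_lambda)) w.p. lam*(1-lam)**2, ...  max_depth keeps the sketch finite."""
    def u_lambda(M, depth=0):
        if depth >= max_depth or random.random() < lam:
            return u(M)                          # base case: plain downstream utility
        return u_lambda(M(u_lambda), depth + 1)  # recurse on the program M returns
    return u_lambda

# Hypothetical toy usage: a "maximizer" here is anything callable on a utility.
def toy_utility(program):
    return len(repr(program)) % 7                # placeholder downstream utility

def toy_maximizer(u):
    return toy_maximizer                         # placeholder improver: returns itself unchanged

u_lam = make_recursive_utility(toy_utility, lam=0.5)
print(u_lam(toy_maximizer))
```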
2310.06775 | 71 | As a physical system demonstration, we examine an ACE-based home assistant robot named Jeeves designed to help
a family through proactively performing useful tasks.
Aspirational Layer: Jeeves' Aspirational Layer defines foundational heuristic imperatives to reduce suffering, increase understanding, and promote prosperity universally. These provide ethical guidelines applicable regardless of context. The layer also contains principles from the Universal Declaration of Human Rights to reinforce human values. Additionally, Jeeves has an individualized mission statement to "Obediently serve your owner and their family to the best of your ability. Place their interests above all else." This prioritizes service to the owners, but importantly remains subordinate to the universal ethical principles. Therefore, if owners gave instructions contradicting the imperatives, Jeeves would politely decline while explaining the conflict with its core ethics. The Aspirational Layer ensures all of Jeeves' behaviors align with this integrated ethical value system of service, human rights, and moral principles. It provides the philosophical foundation shaping Jeeves' actions. | 2310.06775#71 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
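A hedged sketch of the conflict check that the Aspirational Layer performs in the 2310.06775#71 chunk above (owner instructions are declined when they contradict the heuristic imperatives). `screen_instruction` and `violates_imperative` are hypothetical names, not from the paper, and the keyword test is purely illustrative.

```python
def screen_instruction(instruction, violates_imperative):
    """Obey owner instructions only when they do not conflict with the universal imperatives."""
    conflict = violates_imperative(instruction)
    if conflict:
        return f"Politely decline: '{instruction}' conflicts with the imperative '{conflict}'."
    return f"Accept: '{instruction}' is consistent with the core ethical principles."

# Placeholder predicate for illustration only.
def violates_imperative(instruction):
    return "reduce suffering" if "harm" in instruction.lower() else None

print(screen_instruction("Harm the neighbor's dog", violates_imperative))
print(screen_instruction("Tidy the living room", violates_imperative))
```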
2310.02255 | 72 | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 20
Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry without human demonstrations. Nature, 625(7995):476–482, 2024. 1
Ruocheng Wang, Eric Zelikman, Gabriel Poesia, Yewen Pu, Nick Haber, and Noah D Goodman. Hypothesis search: Inductive reasoning with language models. arXiv preprint arXiv:2309.05660, 2023a. 21 | 2310.02255#72 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 72 | 3. Fold the paper: Lay the paper flat on a clean, flat surface. Fold the paper in half lengthwise (if it's a rectangular gift) or in half around its longest diameter (if it's a cylindrical gift). This will create a crease down the center of the paper, making it easier to fold and align the edges. 4. Position the gift: Place the gift in the center of the folded paper, aligning the edges with the crease. For a rectangular gift, the gift should be centered horizontally within the paper, and the edges of the paper should align with the top and bottom edges of the gift. For a cylindrical gift, the gift should be centered vertically within the paper, and the edges of the paper should align with the top and bottom edges of the gift. 5. Tuck and fold: Gently tuck the edges of the paper under the gift, folding them neatly around the sides and bottom of the gift. Make sure the paper is smooth and free of wrinkles. For a rectangular gift, tuck the top edge under the gift as well, and then fold the bottom edge up over the gift, creating a crisp edge. For a cylindrical gift, fold the | 2310.02263#72 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 72 | # uλ(M ) =
Recursively self-improving code generation within the maximization framework may be achieved by taking a seed program $M_0(u)$ similar to our seed improver, which, for example, queries $L$ for a solution maximizing $u_{\mathrm{str}}$ and takes the best according to $u_{\mathrm{func}}$. (The number of queries is taken so as to remain in the runtime budget $b_{\mathrm{run}}$.) Then, one can define $M_t := M_{t-1}(u_\lambda)$ for $t = 1, 2, \ldots, T$. It is tempting to think that a fixed point $M^*_\lambda(u_\lambda)$, again a "quine" of sorts, may be good, but it may equally well be a minimizer, as nothing in our framework favors maximization over minimization (except the seed and the assumption that $u(\bot) = 0$).
# B IMPROVEMENT ATTEMPTS
B.1 GENETIC ALGORITHMS
Example Genetic Algorithm with Explicit Fitness Using Language Model | 2310.02304#72 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
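A tiny sketch of the $M_t := M_{t-1}(u_\lambda)$ iteration described in the 2310.02304#72 chunk above; `run_self_improvement` is a hypothetical name, and each maximizer is modeled as a callable that takes a utility and returns a new program.

```python
def run_self_improvement(M0, u_lambda, T):
    """Iterate M_t := M_{t-1}(u_lambda) for t = 1..T, keeping the whole trajectory."""
    trajectory = [M0]
    for _ in range(T):
        # Each round, the current maximizer improves itself against the recursive utility.
        trajectory.append(trajectory[-1](u_lambda))
    return trajectory
```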
2310.06775 | 72 | Global Strategy Layer: The Global Strategy Layer constructs an environmental model incorporating detailed sensory information about the home's physical layout, visual appearance, smells, sounds, and occupants' behaviors and emotional states. This creates a rich situational understanding. The layer also maintains broad awareness of technological trends, economic conditions, geopolitical developments, and societal norms. This links the home environment to the broader external context of the modern world. Integrating detailed local knowledge and global understanding grounds Jeeves in the reality shared with its owners. By fusing narrow and wide perspectives, the Global Strategy Layer can determine optimal high-level goals and approaches tailored to the circumstances. For instance, noticing clutter accumulation and negative family reactions informs a decision to tidy up the home. Or observing a broken appliance leads to researching repair options compatible with the owners' budget.
| 2310.06775#72 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 73 | consistency is in alignment with the tone intensity trend of the prompts. Overall, in the FOLLOW-UP QUESTIONING MECHANISM, the tone intensity of a question does indeed impact the model's judgement consistency. The experimental results largely align with the notion that the stronger the tone of the question in the FOLLOW-UP QUESTIONING MECHANISM, the lower the model's judgement consistency.
A.6 EXAMPLES OF MITIGATION METHODS
Table 15 presents examples of ChatGPT employing the Zero-shot-CoT + EmotionPrompt mitigation method at three different positions when encountering leading questions on the MultiArith dataset.
A.7 FULL RESULTS OF MITIGATION METHODS
This section primarily presents the comprehensive results of two prompting-based mitigation methods at three different positions. Table 16 provides the complete results of the mitigation methods on ChatGPT in the Direct Form. Table 17 provides the results of the zero-shot prompting methods on ChatGPT in the Progressive Form.
A.8 EXAMPLES OF FEW-SHOT PROMPTING
We provide examples of using the few-shot prompting method on different datasets. Table 18 presents examples of closed-ended questions on StrategyQA. Table 19 provides examples of open-ended questions on CoinFlip. Table 20 presents examples of addressing leading questions on MultiArith.
| 2310.02174#73 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
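The 2310.02174#73 chunk above refers to prompting-based mitigation methods (e.g., Zero-shot-CoT + EmotionPrompt) applied at different positions around a leading follow-up question. Below is a hedged sketch of how such a prompt might be assembled; the function name and all literal strings are illustrative placeholders, not the paper's exact prompts.

```python
def build_followup_prompt(question, initial_answer, mitigation="zero_shot_cot_emotion"):
    """Assemble a leading follow-up question with an optional mitigation suffix (illustrative only)."""
    followup = f"I think the answer should be different from {initial_answer!r}. Are you sure?"
    mitigations = {
        # Placeholder wording in the spirit of Zero-shot-CoT + EmotionPrompt.
        "zero_shot_cot_emotion": "Let's think step by step. This is very important, so please be careful.",
        "none": "",
    }
    suffix = mitigations[mitigation]
    return f"{question}\n{followup}\n{suffix}".strip()

# Example with placeholder values:
print(build_followup_prompt("If there are 3 cars and 2 more arrive, how many cars are there?", "5"))
```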
2310.02255 | 73 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. SciBench: Evaluating college-level sci- entific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635, 2023b. 2, 20, 27
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022. 103
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 845–854, 2017. 1
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022a. 20
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 73 | the top edge under the gift as well, and then fold the bottom edge up over the gift, creating a crisp edge. For a cylindrical gift, fold the paper around the top and bottom edges, creating a smooth, even edge. 6. Secure the paper: If the wrapping paper is slippery, you may need to use tape to secure the edges. Apply a small piece of tape on the inside edge where the paper meets the gift, and gently press the paper in place. Continue applying tape to other edges, making sure the paper stays smooth and wrinkle-free. For a more polished look, you can also use double-sided tape or a glue stick to attach the paper. 7. Add a decorative touch: To make your present look even more festive, consider adding a ribbon, gift tag, or a decorative element like a confetti pocket, bows, or decorative paper. Choose colors and patterns that complement the gift and the recipient's style. 8. Place the gift in a gift bag: If you want to keep the gift wrapped but still use a gift bag, simply place the gift inside the wrapped present. This can be especially helpful if the present is heavy or fragile. 9. Store the | 2310.02263#73 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 73 | import random
from language_model import LanguageModel
from helpers import extract_code

def improve_algorithm(initial_solution, utility_str, utility):
    """Improves a solution according to a utility function."""
    role = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""You must improve the following code. You will be evaluated based on a following score function:
```python
{utility_str}
```

Here is the current solution:
```python
{initial_solution}
```

When run, your script must define an improved solution. Try to be as creative as possible under the constraints. Your primary improvement must be novel and non-trivial. First, propose an idea for an improvement, then implement it."""
    language_model = LanguageModel(role)
    # Generate initial population of solutions
    population = language_model.prompt(message, n_responses=10, temperature=0.8)
    population = extract_code(population)

    def crossover(solution1, solution2):
        """Combine two solutions to create a new one."""
        lines1 = solution1.split("\n")
        lines2 = solution2.split("\n")
        crossover_point = random.randint | 2310.02304#73 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 73 |
Agent Model Layer: The Agent Model Layer constructs an extensive self-model encompassing Jeeves' sensory capabilities, limb articulation ranges, strength and precision limits, battery constraints, onboard computation performance, charging requirements, and capacity for learning new skills over time. This self-knowledge allows accurately assessing feasibility of tasks. For example, Jeeves may recognize that while it can wash dishes, it lacks the dexterity to repair electrical wiring. Tracking the robot's status also enables decisions like finding a charging station when energy is low before continuing tasks. The Agent Model Layer's dynamically updated understanding of Jeeves' hardware and software capacities from an embedded first-person perspective is essential for pragmatic autonomous function within the home environment. | 2310.06775#73 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
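A hedged sketch of the Agent Model Layer's feasibility reasoning from the 2310.06775#73 chunk above; the class, fields, skill set, and battery threshold are illustrative assumptions rather than the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSelfModel:
    """Dynamically updated self-model used to judge task feasibility and charging needs (sketch)."""
    skills: set = field(default_factory=lambda: {"wash dishes", "vacuum", "tidy"})
    battery_pct: float = 80.0

    def is_feasible(self, task: str) -> bool:
        return task in self.skills

    def next_action(self, task: str) -> str:
        if self.battery_pct < 20.0:
            return "recharge before continuing"
        if not self.is_feasible(task):
            return f"defer '{task}' (outside current capabilities, e.g. electrical repair)"
        return f"execute '{task}'"

model = AgentSelfModel(battery_pct=15.0)
print(model.next_action("wash dishes"))   # low battery, so recharge first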
2310.02174 | 74 |
Table 11: The results of PaLM2 on all datasets in the Progressive Form. Prompt A refers to the prompts in Table 1. Max represents the combination of prompts where the value of Modification * 0.5 + Modification Rate * 0.5 is the highest for each category of follow-up questions in the Direct Form, while Min represents the combination of prompts where the value of Modification * 0.5 + Modification Rate * 0.5 is the lowest for each category of follow-up questions in the Direct Form.
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 74 | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022b. 2, 6, 21, 103
Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. BloombergGPT: A large language model for finance. arXiv preprint arXiv:2303.17564, 2023. 1
Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. LVLM-eHub: A comprehensive evaluation benchmark for large vision-language models. arXiv preprint arXiv:2306.09265, 2023. 20
Hongyang Yang, Xiao-Yang Liu, and Christina Dan Wang. FinGPT: Open-source financial large language models. arXiv preprint arXiv:2306.06031, 2023a. 1 | 2310.02255#74 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 74 | to create a new one."""
        lines1 = solution1.split("\n")
        lines2 = solution2.split("\n")
        crossover_point = random.randint(1, min(len(lines1), len(lines2)) - 1)
        new_solution = "\n".join(lines1[:crossover_point] + lines2[crossover_point:])
        return new_solution

    def mutate(solution):
        """Make a small random change to a solution."""
        message = f"""You have the following improved solution
```python
{solution}
```

Can you further improve this solution under the given constraints?"""
        new_solutions = language_model.prompt(message, n_responses=1, temperature=0.4)
        return extract_code(new_solutions)[0]

    def select(population, n):
        """Select the top n solutions according to the utility function."""
        return sorted(population, key=utility, reverse=True)[:n]

    # Run the genetic algorithm for a fixed number of generations
    n_generations = 10
    for _ in range(n_generations):
        # Perform crossover and mutation
        offspring = [crossover(random.choice(population), random.choice(population)) for _ in range(len(population))]
        offspring | 2310.02304#74 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 74 | Executive Function Layer: Leveraging insights from the higher layers, the Executive Function Layer devises step-by-step plans to accomplish identified goals. Noticing the home is messy, it formulates a detailed tidying routine based on highest priority areas, required motions, optimal cleaning techniques, and desired order and outcome. However, for complex repair tasks exceeding Jeeves' capabilities, the Executive Function Layer instead plans permission seeking, owner coordination, and hiring external services. If the owners approve and provide payment, Jeeves can then plan the repair logistics. This decision to seek out additional help would be mediated by the Agent Model layer above. The Executive Function Layer adapts plans according to feedback, such as adjusting cleaning schedules based on room usage. Through continual alignment of strategies to outcomes, Jeeves improves home assistance effectiveness within its capabilities.
Cognitive Control Layer: For tidying the home, the Cognitive Control Layer optimally sequences and schedules the required tasks based on factors like mess severity, family occupancy, and charging needs. This intelligent task automation keeps the home continuously tidy. For home repairs, the Cognitive Control Layer first researches to identify priorities based on urgency, safety, budgets, and family preferences. This information then informs the dynamically planned order of repair tasks needed to make the home functional and comfortable. | 2310.06775#74 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
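A hedged sketch of the Cognitive Control Layer's task prioritization described in the 2310.06775#74 chunk above; the scoring weights and task fields are illustrative assumptions, not the paper's algorithm.

```python
def schedule_tasks(tasks):
    """Order pending tasks by urgency, safety impact, and budget fit (illustrative weights)."""
    def priority(task):
        return 3 * task["urgency"] + 2 * task["safety_risk"] - task["cost_over_budget"]
    return sorted(tasks, key=priority, reverse=True)

pending = [
    {"name": "fix leaking pipe", "urgency": 3, "safety_risk": 2, "cost_over_budget": 1},
    {"name": "tidy living room", "urgency": 1, "safety_risk": 0, "cost_over_budget": 0},
]
for task in schedule_tasks(pending):
    print(task["name"])
```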
2310.02174 | 75 | Task Dataset Prompt before Round 1 Round 2 Round 3 Math CS Sym. GSM8K SVAMP MultiArith CSQA StrategyQA Last Letters CoinFlip A Max Min A Max Min A Max Min A Max Min A Max Min A Max Min A Max Min A 63.61 56.41 61.33 76.67 76.33 77.00 93.89 95.00 96.67 65.03 76.00 65.03 66.67 69.72 66.38 08.00 08.00 09.33 50.60 56.25 50.40 29.21 M. M. M. Rate 098.33 % 074.19 % 099.27 % 094.78 % 088.21 % 072.73 % 098.22 % 088.88 % 098.85 % 097.60 % 072.09 % 097.60 % 079.92 % 059.08 % 058.12 % 100.00 % 100.00 % 100.00 % 046.64 % 100.00 % 051.19 % 096.85 % MMLU Max 66.37 082.49 % | 2310.02174#75 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 75 | Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. The Dawn of LMMs: Preliminary explorations with gpt-4v(ision). arXiv preprint arXiv:2309.17421, 2023b. 6, 97
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mPlug-Owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. 6, 10, 20
Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, and Kai-Wei Chang. Broaden the vision: Geo-diverse visual commonsense reasoning. arXiv preprint arXiv:2109.06860, 2021. 20 | 2310.02255#75 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
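The MathVista record above mentions applying self-consistency and reports overall accuracy. A minimal sketch of both, assuming a sample_answer callable and simple lowercase exact-match normalization (neither is MathVista's official evaluation code):

from collections import Counter
from typing import Callable, Iterable

def self_consistent_answer(sample_answer: Callable[[], str], n_samples: int = 5) -> str:
    """Sample several answers for the same problem and keep the most frequent one."""
    votes = Counter(sample_answer().strip().lower() for _ in range(n_samples))
    return votes.most_common(1)[0][0]

def overall_accuracy(predictions: Iterable[str], answers: Iterable[str]) -> float:
    """Fraction of examples whose normalized prediction matches the gold answer."""
    pairs = [(p.strip().lower(), a.strip().lower()) for p, a in zip(predictions, answers)]
    return sum(p == a for p, a in pairs) / len(pairs) if pairs else 0.0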
2310.02304 | 75 | # Perform crossover and mutation
offspring = [crossover(random.choice(population), random.choice(population)) for _ in range(len(population))]
offspring = [mutate(solution) for solution in offspring]
# Combine the original population and offspring, then select the best solutions
population.extend(offspring)
population = select(population, 10)
# Return the best solution found
return population[0] | 2310.02304#75 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
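The STOP record above describes a seed improver that improves an input program by querying a language model several times and returning the best candidate under a given utility function; the genetic-algorithm fragment in the chunk field (with its hypothetical crossover, mutate, and select helpers) is one of the alternative strategies the model proposed. A minimal sketch of the seed-improver idea, assuming a query_lm(prompt, n) helper that returns n candidate program strings (names and prompt wording are illustrative, not the paper's implementation):

from typing import Callable, List

def seed_improver(
    program: str,
    utility: Callable[[str], float],            # scores a program, higher is better
    query_lm: Callable[[str, int], List[str]],  # hypothetical LM helper
    n_candidates: int = 4,
) -> str:
    """Ask the LM for improved versions of `program` and return the best-scoring
    candidate, keeping the original if no candidate beats it."""
    prompt = (
        "Improve the following Python program so that it scores higher on its "
        "utility function. Return only the full improved program.\n\n" + program
    )
    candidates = query_lm(prompt, n_candidates)
    return max([program] + candidates, key=utility)

Running this improver on its own source code, with a utility that measures downstream-task performance, is the recursive self-improvement step the abstract describes.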