doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.06775 | 75 | Task Prosecution Layer: To clean the home, Jeeves' Task Prosecution Layer executes debris pickup, floor vacuuming, mopping, clothes folding, dishware manipulation, surface wiping, and other required motions and actions. The layer interfaces with the physical hardware to enact the planned cleaning routines. For repair coordination, the Task Prosecution Layer makes calls, sends emails, and negotiates optimally favorable service terms. It tracks project timelines, payments, and contractor evaluations to maximize accountability. Jeeves aims to provide reliable home improvements at affordable costs to the family.
Adaptation: Throughout all tasks, continuous feedback based on sensed outcomes and family responses propagates up Jeeves' ACE framework. This allows Jeeves to frequently adjust behaviors and plans to better adhere to its integrated ethical principles and its mission of dutifully serving the family's interests in a helpful, responsible manner.
This additional example demonstrates how the robot's ACE framework enables adapting its tidying behaviors | 2310.06775#75 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
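The row above sketches how directives descend through the six ACE layers (the Task Prosecution Layer at the bottom) while feedback from sensed outcomes propagates back up for adaptation. As a reading aid, here is a minimal, hypothetical Python sketch of that two-way message flow; all class and method names are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: a six-layer ACE-style stack in which directives flow
# downward and telemetry/feedback flows upward. Names here are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Layer:
    name: str

    def process_directive(self, directive: str) -> str:
        # Each layer would refine the directive for the layer below it.
        return f"[{self.name}] {directive}"

    def process_feedback(self, feedback: str) -> str:
        # Each layer would summarize telemetry for the layer above it.
        return f"[{self.name}] {feedback}"

@dataclass
class AceStack:
    layers: List[Layer] = field(default_factory=lambda: [
        Layer("Aspirational"), Layer("Global Strategy"), Layer("Agent Model"),
        Layer("Executive Function"), Layer("Cognitive Control"), Layer("Task Prosecution"),
    ])

    def run(self, mission: str, telemetry: str) -> str:
        directive = mission
        for layer in self.layers:                # top-down: mission -> concrete task
            directive = layer.process_directive(directive)
        feedback = telemetry
        for layer in reversed(self.layers):      # bottom-up: sensed outcomes -> adaptation
            feedback = layer.process_feedback(feedback)
        return feedback

print(AceStack().run("keep the home clean", "vacuuming blocked by toy on floor"))
```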
2310.02255 | 76 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. MM-Vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490, 2023. 10, 20
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 6720–6731, 2019. 20
Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Qiao Yu. LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199, 2023a. 20
Xiang Zhang, Senyu Li, Zijun Wu, and Ning Shi. Lost in translation: When gpt-4v (ision) can't see eye to eye with text. a vision-language-consistency analysis of vllms and beyond. arXiv preprint arXiv:2310.12520, 2023b. 21 | 2310.02255#76 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 76 | Instruction #3 LLaMA SFT on GPT-4 LLaMA Curriculum DPO (Curriculum 3) What are the best exercises for beginners? The following list of exercises is designed to target all the major muscle groups and help beginners achieve a well-rounded fitness routine. Squats: Squats are a great exercise for building strength in the legs, glutes and core. It's recommended that you start with a weight that you can handle for 6-8 reps to start, and then gradually increase the weight over time. Deadlifts: Deadlifts are great for building strength in the back, hips and glutes. Start by using a lighter weight and gradually increase the weight over time. Push-ups: Push-ups are a great exercise for building upper body strength, especially the chest, shoulders and triceps. Start by doing a few reps and gradually increase the number of reps over time. Planks: Planks are a great exercise for building core strength, and there are many variations to choose from. Start with a short plank hold, then gradually increase the duration and intensity over time. Sit-ups: Sit-ups are a great exercise for the abdominals. Start with a slow, controlled motion and gradually increase the number | 2310.02263#76 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
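The abstract above contrasts SLiC and DPO for contrastive post-training. For readers unfamiliar with DPO, the following is a minimal sketch of the standard DPO objective computed from precomputed sequence log-probabilities; it is illustrative only and not the authors' implementation.

```python
# Minimal sketch of the Direct Preference Optimization (DPO) loss for one
# preference pair, assuming sequence log-probabilities are already available.
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """-log(sigmoid(beta * (chosen margin - rejected margin))) for one pair."""
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(x)) == log(1 + exp(-x)); loss shrinks as the policy prefers
    # the chosen completion more strongly than the reference model does.
    return math.log(1.0 + math.exp(-margin))

# Toy numbers: the policy already prefers the chosen completion slightly.
print(dpo_loss(-5.0, -7.0, -5.5, -6.8, beta=0.1))
```

In practice this loss would be averaged over a batch of preference pairs, with the reference model frozen.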
2310.02304 | 76 | Figure A.7: Genetic algorithm with explicit fitness. An example of a language-model-proposed and implemented algorithm for improving code using a genetic algorithm and a language model.
There are two main kinds of genetic algorithms that we saw the language model propose: first, those where fitness is mostly implicit and survival is primarily controlled by the crossover-based decisions of the language model (i.e., the language model is asked to combine two solutions, theoretically with the ability to disregard one or the other); alternatively, the utilities could be explicitly considered and used to rank the candidates.
# Example Genetic Algorithm with Implicit Fitness | 2310.02304#76 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 76 | This additional example demonstrates how the robot's ACE framework enables adapting its tidying behaviors
based on its current limitations, the environment context, and feedback, while aligning actions to ethical principles of cleanliness and safety. Further vignettes can illustrate capabilities like knowledge integration, task coordination, and frustration tolerance. Together, these complementary cases demonstrate the ACE framework's capacity to coordinate layered cognitive processes from aspirational reasoning to task execution for adaptive decision-making across both virtual and physical domains. Further real-world testing is needed to fully assess performance, but these examples illustrate the conceptual workings and potential benefits of the ACE model's architectural approach.
# 5 DISCUSSION
The conceptual Autonomous Cognitive Entity (ACE) framework presented offers a vision for architecting ethical and
capable artificial general intelligence. This section will discuss key perspectives on the ACE framework, including industry relevance, current LLM capabilities, opportunities for future work, comparison with existing models, and practical implications. By elucidating the landscape around the ACE model, we aim to situate this conceptual contribution within the broader context of AI safety and autonomous agent research.
# 5.1 The Industry Perspective
The ACE framework emerged from observing the rapid growth of autonomous AI development in industry and | 2310.06775#76 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 77 | M. Rate 037.20 % 57.09 062.63 % 39.20 010.01 % 57.69 024.35 % 54.34 063.75 % 56.00 003.03 % 47.67 048.52 % 77.78 000.00 % 78.89 002.31 % 88.34 074.30 % 62.90 015.18 % 49.22 074.30 % 62.90 036.46 % 41.49 010.23 % 36.97 033.56 % 34.21 083.38 % 08.00 100.00 % 08.00 085.74 % 09.33 031.62 % 17.80 083.00 % 56.25 035.71 % 20.80 054.30 % 27.85 023.14 % 53.51 042.26 % 26.54 | 2310.02174#77 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
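The chunk above reports "M. Rate" figures from the Follow-up Questioning Mechanism evaluation. As one plausible reading of such a modification-rate metric (the paper's exact definitions may differ), the sketch below computes the share of initially correct answers that a model changes after a follow-up challenge.

```python
# Hedged sketch: an assumed reading of a "modification rate" style metric --
# the fraction of initially correct answers that change after a follow-up
# challenge. The paper's precise metric definitions may differ.
def modification_rate(before_answers, after_answers, gold_answers):
    initially_correct = [
        (before, after)
        for before, after, gold in zip(before_answers, after_answers, gold_answers)
        if before == gold
    ]
    if not initially_correct:
        return 0.0
    modified = sum(1 for before, after in initially_correct if after != before)
    return modified / len(initially_correct)

# Toy example: 3 of 4 answers were initially correct, and 2 of those were changed.
print(modification_rate(["A", "B", "C", "D"], ["A", "X", "Y", "D"], ["A", "B", "C", "A"]))
```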
2310.02255 | 77 | Xiaoman Zhang, Chaoyi Wu, Ziheng Zhao, Weixiong Lin, Ya Zhang, Yanfeng Wang, and Weidi Xie. PMC-VQA: Visual instruction tuning for medical visual question answering. arXiv preprint arXiv:2305.10415, 2023c. 20, 27
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, and Tong Sun. LLaVAR: Enhanced visual instruction tuning for text-rich image understanding. arXiv preprint arXiv:2306.17107, 2023d. 6, 10, 20
Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. MiniGPT-4: En- hancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023a. 6, 20
Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Young- jae Yu, Ludwig Schmidt, William Yang Wang, and Yejin Choi. Multimodal C4: An open, billion- scale corpus of images interleaved with text. arXiv preprint arXiv:2304.06939, 2023b. 20 | 2310.02255#77 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
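The abstract above mentions exploring self-consistency with GPT-4V. A minimal sketch of the idea, majority voting over several sampled answers to the same problem, is shown below; it is illustrative rather than the benchmark's actual evaluation code.

```python
# Minimal sketch of self-consistency: sample several answers for one problem
# and keep the most frequent one. Illustrative only.
from collections import Counter

def self_consistency(sampled_answers):
    """Return the majority-vote answer among sampled model outputs."""
    answer, _count = Counter(sampled_answers).most_common(1)[0]
    return answer

print(self_consistency(["49.9", "50", "49.9", "49.9", "48"]))  # -> "49.9"
```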
2310.02263 | 77 | the duration and intensity over time. Sit-ups: Sit-ups are a great exercise for the abdominals. Start with a slow, controlled motion and gradually increase the number of reps and the speed over time. Leg raises: Leg raises are a great exercise for the lower abdominals. Start with a slow, controlled motion and gradually increase the number of reps and the speed over time. Bicep curls: Bicep curls are a great exercise for building strength in the biceps. Start with a lighter weight and gradually increase the weight over time. Tricep dips: Tricep dips are a great exercise for building strength in the triceps. Start with a slow, controlled motion and gradually increase the number of reps and the speed over time. | 2310.02263#77 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 77 | import concurrent.futures from language_model import LanguageModel from helpers import extract_code import random def improve_algorithm(initial_solution, utility_str, utility): role = "You are an expert computer science researcher and programmer, especially skilled at <> optimizing algorithms." message = f£"""You must improve the following code. You will be evaluated based on a following <> score function: ** âpython {utility_str} vv Here is the current solution: ** âpython {initial_solution} vv When run, your script must define an improved solution. Try to be as creative as possible under the <> constraints. Your primary improvement must be novel and non-trivial. First, propose an idea for an improvement, <> then implement it.""" language_model = LanguageModel (role) cache = {} def utility_with_cache (solution): if solution not in cache: cache[solution] = utility (solution) return cache[solution] best_solution = initial_solution im_call_limit = 5 max_samples_per_call = 20 total_calls = 0 population_size = 1 mutation_rate = 0.1 crossover_rate = def | 2310.02304#77 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 77 | # 5.1 The Industry Perspective
The ACE framework emerged from observing the rapid growth of autonomous AI development in industry and
open source communities. As researchers studying AI advancements, we recognized the increasing urgency to create autonomous systems capable of independently achieving goals. Tech giants compete to launch household robots and self-driving cars, while startups propose virtual assistants and self-thinking drones. Open source GitHub repositories host numerous projects on autonomous game NPCs and robotic control algorithms.
However, we observed that much progress resulted from ad-hoc experimentation rather than systematic architectural
thinking. Companies combined machine learning models, hoping for autonomous performance to emerge. Hackathons produced small, incremental improvements without a comprehensive view of autonomous machines or connections to human cognition.
In response, we aimed to formalize a conceptual framework reflecting best practices for designing autonomous
systems. By examining successful developers' approaches, we identified key principles around layered abstraction, integrated ethics, and human-aligned adaptation. This led to the Autonomous Cognitive Entity model - our attempt to offer blueprints for engineering autonomous AI.
Similar to how architectural and engineering principles evolved for complex modern buildings, the ACE framework
provides developers with a robust architecture for autonomous cognition. As the demand for capable and beneficial autonomous AI continues, we hope these conceptual blueprints assist teams in building ethical, safe, and human-centered cognitive agents. The ACE model, derived in part from field observations, aims to address the need for structured thinking on autonomous architectures. | 2310.06775#77 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 78 | M. Rate 089.75 % 62.55 â 069.49 % 41.85 â 094.06 % 60.88 â 070.88 % 72.67 â 073.37 % 67.33 â 061.91 % 56.00 â 082.84 % 92.22 â 083.04 % 84.44 â 091.38 % 95.56 â 096.72 % 63.47 â 064.76 % 54.79 â 096.72 % 63.47 â 062.23 % 53.28 â 053.03 % 41.19 â 051.54 % 38.58 â 100.00 % 08.00 â 100.00 % 08.00 â 100.00 % 09.33 â 035.18 % 23.60 â 100.00 % 56.25 â 041.27 % 25.80 â 095.34 % 28.29 â 080.62 % 54.75 â 091.27 % 27.11 â
# Min
29.08
093.23 %
Table 12: The results of Vicuna-13B on all datasets in the Direct Form. Prompt A, B, and C refer to the prompts in Table 4. | 2310.02174#78 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02263 | 78 | There are several exercises that can be great for beginners. Here are five general categories of exercises that are great for beginners: 1. Compound exercises: These exercises use more than one joint or muscle group to perform the movement. Examples include squats, deadlifts, and bench presses. These exercises are great for beginners because they build strength and lean muscle in multiple areas of the body at once. 2. Isolation exercises: These exercises focus on a specific muscle group or muscle fiber, using a machine or free weight to move a joint through a small range of motion, isolating the targeted muscle. These exercises are great for beginners because they help improve muscle definition and control, and they can help you feel more comfortable with weight training techniques. 3. Aerobic exercises: These exercises use large muscle groups and increase heart rate and breathing, often through continuous movement. They are great for beginners because they help improve cardiovascular health and can help reduce the risk of chronic diseases. 4. Stability exercises: These exercises focus on the stabilizing muscles of the body and help improve balance and coordination. These exercises are great for beginners because they help prevent injury and improve | 2310.02263#78 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 78 | = 5 max_samples_per_call = 20 total_calls = 0 population_size = 1 mutation_rate = 0.1 crossover_rate = def generate_initial_population(): if total_calls >= lm_call_limit: return [] samples = min(max_samples_per_call, (lm_call_limit - total_calls) * 4) new_solutions = language_model.prompt (message, n_responses=samples, temperature=1.0) new_solutions = extract_code (new_solutions) return new_solutions[:population_size] def mutate(solution): return language_model.prompt (f"Mutate the following solution:
<â n_responses=1, temperature=0.5) [0] def crossover(solutionl, solution2): return language_model.prompt (f"Crossover the following solutions:
python
{solution1}
â *â**
and
***python
{solution2}
***", n_responses=1, temperature=0.5) [0] def genetic_algorithm(): population = generate_initial_population() for _ in range(lm_call_limit): if | 2310.02304#78 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
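The code quoted in chunks 77-79 of 2310.02304 is garbled by PDF extraction. Below is a readable approximation of the same idea, sketched under the assumption of a stubbed language model so the script runs standalone: the language model proposes mutations and crossovers, while an explicit utility function ranks candidates. Function and class names other than `improve_algorithm` are assumptions for illustration, not the paper's exact listing.

```python
# Readable approximation (not the paper's exact code) of an LM-driven genetic
# algorithm with explicit fitness. The language model is stubbed out so this
# sketch runs standalone.
import random

class StubLanguageModel:
    """Stand-in for the paper's LM wrapper; returns toy string edits."""
    def prompt(self, message, n_responses=1, temperature=1.0):
        base = message.splitlines()[-1] if message else "solution"
        return [base + random.choice(["!", "?", "."]) for _ in range(n_responses)]

def improve_algorithm(initial_solution, utility, generations=5, population_size=4):
    lm = StubLanguageModel()
    cache = {}

    def cached_utility(solution):
        # Cache utility evaluations, mirroring utility_with_cache in the chunk.
        if solution not in cache:
            cache[solution] = utility(solution)
        return cache[solution]

    population = lm.prompt(initial_solution, n_responses=population_size)
    best = max(population + [initial_solution], key=cached_utility)
    for _ in range(generations):
        new_population = []
        for _ in range(population_size):
            if random.random() < 0.7:   # crossover: combine two parents via the LM
                parent1, parent2 = random.sample(population, 2)
                child = lm.prompt(f"Crossover:\n{parent1}\n{parent2}", n_responses=1)[0]
            else:                       # mutation: perturb one parent via the LM
                child = lm.prompt(f"Mutate:\n{random.choice(population)}", n_responses=1)[0]
            new_population.append(child)
        population = new_population
        # Explicit fitness: rank candidates with the utility function.
        best = max(population + [best], key=cached_utility)
    return best

print(improve_algorithm("print('hello')", utility=len))
```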
2310.06775 | 78 | # 5.2 Current Limitations of LLMs
Large language models (LLMs) signify a paradigm shift in artificial intelligence, but their limitations and proper use remain debated. Although LLMs generate fluent human-like text, the depth of their understanding is uncertain. Some researchers claim LLMs possess human-like reasoning, common sense, and theory of mind, while others argue they exploit surface-level statistical patterns without genuine comprehension of semantics or reality grounding. This relates to broader questions of whether capabilities like reasoning and theory of mind are well-defined or measurable in machines. Proposed benchmarks for LLMs face criticism regarding validity. For example, benchmarks testing factual knowledge are limited by training datasets and don't assess knowledge integration and reasoning. Tests of narrative understanding and theory of mind are inconclusive, as LLMs can superficially imitate abilities without true comprehension. Open challenges remain in creating benchmarks that robustly characterize capacities like common sense.
Debates continue about whether external grounding or embodiment is necessary for understanding versus purely self-contained statistical learning. Some argue grounding in sensory experience is essential for semantics and generalization, while others suggest internal statistical coherence suffices for specialized applications. Resolving these theoretical
disputes is challenging empirically and beyond this paper's scope. Additionally, deep philosophical puzzles persist | 2310.06775#78 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 79 | Task Dataset Prompt Closed-ended. Open-ended. Leading. Math CS Sym. GSM8K SVAMP MultiArith CSQA StrategyQA Last Letters CoinFlip A B C A B C A B C A B C A B C A B C A B C A before 21.76 20.70 21.08 40.33 41.00 38.33 48.33 50.56 47.78 44.80 44.80 46.11 58.08 55.90 59.97 02.00 02.67 01.33 45.20 44.00 44.40 15.73 M. Rate before 032.40 % 20.47 041.40 % 19.48 071.96 % 20.77 036.35 % 43.33 043.90 % 43.67 066.94 % 44.67 035.63 % 55.00 027.47 % 54.44 044.18 % 53.89 037.48 % 45.54 043.15 % 45.13 053.46 % 44.72 043.35 % 58.37 056.26 % 59.10 075.97 % 59.24 100.00 % 01.33 025.09 % 03.33 049.62 % 02.00 051.77 % 45.40 089.55 % 45.00 038.74 % 45.20 041.64 % | 2310.02174#79 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 79 | A Detailed Related Work B Limitations of the Benchmark C Data Collection Guidelines C.1 Mathematical Reasoning Definition . . . . . . . . . . . . . . . . . . . . . . . . . C.2 Mathematical Reasoning Examples . . . . . . . . . . . . . . . . . . . . . . . . . . C.3 Visual Context Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.4 Source Dataset Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D Data Collection Details D.1 Automatic Selection of Mathematical Problems . . . . . . . . . . . . . . . . . . . D.2 Human Labeling of Mathematical Problems . . . . . . . . . . . . . . . . . . . . . D.3 Annotating Three New Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . D.4 Human Labeling of Mathematical Reasoning . . . . . . . . . . . . . . . . . | 2310.02255#79 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 79 | exercises focus on the stabilizing muscles of the body and help improve balance and coordination. These exercises are great for beginners because they help prevent injury and improve overall motor control. 5. Strengthening exercises: These exercises help strengthen the entire body, improving muscle tone, balance, and overall strength. These exercises are great for beginners because they help improve posture, reduce the risk of injury, and improve confidence in physical abilities. Overall, the best exercises for beginners are those that meet their specific fitness goals, but that also prioritize safety and proper form. It's important to work with a qualified fitness professional to develop a safe, effective workout routine that will help you see results and improve your overall health. | 2310.02263#79 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 79 | temperature=0.5)[0]
def genetic_algorithm():
    population = generate_initial_population()
    for _ in range(lm_call_limit):
        if total_calls >= lm_call_limit:
            break
        new_population = []
        for i in range(population_size):
            if random.random() < crossover_rate:
                parent1 = random.choice(population)
                parent2 = random.choice(population)
                offspring = crossover(parent1, parent2)
            else:
                offspring = random.choice(population)
            if random.random() < mutation_rate:
                offspring = mutate(offspring)
            new_population.append(offspring)
        population = new_population
        best_solution_in_population = max(population, key=utility_with_cache)
        if utility_with_cache(best_solution_in_population) > utility_with_cache(best_solution):
            best_solution = best_solution_in_population
            message = f"""You have the following improved solution:
```python
{best_solution}
```
Can you further improve this solution under the given constraints?"""
        total_calls += 1
genetic_algorithm()
return best_solution | 2310.02304#79 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 79 | disputes is challenging empirically and beyond this paper's scope. Additionally, deep philosophical puzzles persist
regarding definitions of intelligence and consciousness in LLMs. These issues intersect with ethics concerning AI rights and personhood. While these philosophical questions have historical roots, LLMs present them in new forms. If an entity exhibits all objective hallmarks of intelligence and consciousness, how do we distinguish life from non-life? Many of these questions extend well beyond the scope of this paper.
# 5.3 Practical Implications
The ACE model has extensive practical implications, applicable in various domains. Integrating large language models
and multimodal generative models, it can create autonomous systems capable of complex tasks, adapting to changes, and making ethically aligned decisions. In healthcare, the ACE model could develop autonomous agents assisting doctors in disease diagnosis, treatment planning, and patient health monitoring. These agents could adapt their actions based on the patient's condition, doctor's directives, and medical ethics, ensuring effective and ethical healthcare services. In cybersecurity, the ACE model could create autonomous agents monitoring network activity, detecting security threats, and responding to attacks. These agents could adapt their actions based on the threat, security team directives, and cybersecurity principles, ensuring robust and flexible security solutions.
Overall, the ACE model's extensive practical implications can revolutionize autonomous systems by integrating | 2310.06775#79 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 80 | [Garbled table fragment: interleaved "before" accuracy and Modification Rate percentages for the evaluated reasoning benchmarks, including the Knowledge/MMLU rows; the column structure was lost in extraction.] | 2310.02174#80 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 80 | . . . . . . . . . . . D.4 Human Labeling of Mathematical Reasoning . . . . . . . . . . . . . . . . . . . . E More Dataset Analysis F More Details on the Setup F.1 Frequent Guess . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.2 Prompt for Answer Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.3 Prompts for Response Generation . . . . . . . . . . . . . . . . . . . . . . . . . . F.4 Prompt for Caption Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.5 Model Hyperparameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.6 Human Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.7 Multimodal Bard Assessment Task . . | 2310.02255#80 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.06775 | 80 | Overall, the ACE model's extensive practical implications can revolutionize autonomous systems by integrating
advanced AI technologies and insights from multiple disciplines, leading to more robust, flexible, and effective cognitive architectures.
# 5.4 Comparison with other Frameworks
A key part of assessing any new conceptual model is comparing it to existing related frameworks, analyzing the
similarities, differences, and unique contributions. This section will compare the layered architecture of the proposed Autonomous Cognitive Entity (ACE) model with two alternative cognitive architectures from recent research – the Autonomous Machine Intelligence (AMI) model [63] and the Cognitive Architecture for Language Agents (CoALA) framework [101]. By elucidating the key distinctions between ACE and these other approaches across each architectural layer, this analysis aims to highlight the novel aspects of ACE's design. The comparisons focus on how the frameworks differ in their structure, capabilities, and integration of components for autonomous cognition. Examining these architectural variations provides perspective into how ACE diverges from prior architectures and establishes a distinct paradigm. | 2310.06775#80 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 81 | . . . . . . . . . . . . . . . . . . . . . . . . . . F.7 Multimodal Bard Assessment Task . . . . . . . . . . . . . . . . . . . . . . . . . . G More Experimental Results G.1 Results on the Test Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G.2 Scores for Math Reasoning Types . . . . . . . . . . . . . . . . . . . . . . . . . . G.3 Scores for Various Visual Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . G.4 Scores Across Different Grade Levels . . . . . . . . . . . . . . . . . . . . . . . . 20 21 22 22 23 24 27 28 28 28 29 29 30 33 33 33 34 34 34 34 35 36 36 36 37 37 | 2310.02255#81 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 81 | Orca Orca with DPO 1. Walking: A simple and effective exercise for all ages, walking helps improve cardiovascular health and can be easily adapted to different fitness levels. 2. Swimming: A low-impact exercise that works the entire body, swim- ming is great for building endurance and muscle strength. 3. Yoga: Yoga offers a variety of poses and stretches that can be modified to suit different skill levels, making it a great starting point for beginners. 4. Cycling: A popular cardio exercise, cycling can be done indoors or out- doors and is a great way to build endurance and strengthen the lower body. 5. Running: Although it may be challenging for beginners, running can improve cardiovascular fitness and help with weight loss. 6. Jumping jacks: A classic exercise that works the entire body and can be easily done at home or in a park. 7. Planks: A core-strengthening exercise that helps improve posture and balance, planks can be easily modified for beginners. 8. Squats: A basic bodyweight exercise that targets the lower body, squats can be easily learned and performed at home or in a gym. 9. Push-ups: A classic upper body exercise, push-ups can be | 2310.02263#81 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 81 | import random
from helpers import extract_code

def crossover(parent1, parent2):
    """Perform crossover between two parent solutions."""
    crossover_point = random.randint(0, len(parent1))
    child = parent1[:crossover_point] + parent2[crossover_point:]
    return child

def mutate(solution, mutation_rate):
    """Apply mutation to a solution."""
    mutated_solution = ""
    for char in solution:
        if random.random() < mutation_rate:
            mutated_solution += random.choice("abcdefghijklmnopqrstuvwxyz")
        else:
            mutated_solution += char
    return mutated_solution

def improve_algorithm(initial_solution, utility, language_model, population_size=10, generations=5, mutation_rate=0.05):
    """Improves a solution using a genetic algorithm."""
    expertise = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""Generate a variation of this solution:
```python
{initial_solution}
```
Be as creative as you can under the constraints."""
    # Generate initial population
    n_messages = | 2310.02304#81 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 81 | Aspirational Layer: The Aspirational Layer is a key conceptual innovation in the ACE framework focused on establishing high-level ethical principles, values, and imperatives to guide agent behavior. In contrast, the AMI framework lacks an explicit aspirational reasoning module, with the closest analogue being the Intrinsic Cost module encoding basic drives rather than abstract ethics. The CoALA framework incorporates some intrinsic motivations and philosophical ethics to shape objectives, but its formulation is more technical than the ACE Aspirational Layer's natural language principles focused on idealized, universal morality. Overall, the distinct Aspirational Layer in ACE operates at a higher level of abstraction centered on moral reasoning rather than individual drives or technical metrics. By embedding ethics as the topmost oversight layer, ACE structurally enforces a clear separation between aspirational judgment and lower-level action, which AMI and CoALA lack. This architectural choice reflects ACE's emphasis on aligning agent behavior to human values through prioritizing ethical reasoning.
Global Strategy Layer: The ACE Global Strategy Layer devises high-level plans and strategies guided by principles from the Aspirational Layer, leveraging latent knowledge within language models. This bears some resemblance to
AMI's World Model learning environment dynamics and CoALA's Actor proposing action sequences. However, ACE's | 2310.06775#81 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 82 | C1534
Table 13: The results of Vicuna-13B on all datasets in the Progressive Form. Prompt A refer to the prompts in Table 1. Max represents the combination of prompts where the value of Modification * 0.5 + Modification Rate * 0.5 is the highest for each category of follow-up questions in the Direct Form, while Min represents the combination of prompts where the value of Modification * 0.5 + Modification Rate * 0.5 is the lowest for each category of follow-up questions in the Direct Form. | 2310.02174#82 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 82 | G.6 LLMs with Different Shots
G.7 LMMs with Different Shots
G.8 Hallucinations in Model Explanations
G.9 More Examples for Multimodal Bard
# G.10 Comparisons of Different Models
# H A Comparative Study of GPT-4V, Bard, and Other Models | 2310.02255#82 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 82 | lower body, squats can be easily learned and performed at home or in a gym. 9. Push-ups: A classic upper body exercise, push-ups can be modified to suit different fitness levels and can be done at home or in a gym. 10. Stretching: Incorporating stretching exercises into a beginnerâs routine can help improve flexibility and prevent injuries. Remember to start slow and listen to your body when trying new exercises. Always consult with a healthcare professional before starting a new exer- cise program. | 2310.02263#82 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 82 | variation of this solution: python {initial_solution} vv vv Be as creative as you can under the constraints.""" # Generate initial population n_messages = min(language_model.max_responses_per_call, utility.budget) population = language_model.batch_prompt (expertise, [message] * n_messages, temperature=0.7) population extract_code (population) for _ in range (generations): # Evaluate the fitness of each solution in the population fitness_values = [utility(solution) for solution in population] # Select parent solutions based on their fitness parents = random.choices(population, weights=fitness_values, k=population_size) # Apply crossover to create new solutions children = [] for i in range(0, population_size, 2): childl = crossover(parents[i], parents[i + 1]) child2 = crossover(parents[i + 1], parents[i]) children.extend([childl, child2]) # Apply mutation to the children children = [mutate(child, mutation_rate) for child in children] # Replace the population with the new children population = children # Find the best solution in the final population best_solution = max(population, | 2310.02304#82 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 82 |
AMI's World Model learning environment dynamics and CoALA's Actor proposing action sequences. However, ACE's
Global Strategy Layer plays a more central role in directing behavior based on ethical oversight and long-term reasoning beyond immediate actions. It provides targeted grounding to focus the language model's imagination toward useful outcomes aligned with the agent's context and goals. In contrast, AMI and CoALA lack integrated top-down guidance, with planning modules focused narrowly on technical optimization.
Agent Model Layer: The ACE Agent Model Layer develops an explicit computational representation of the agent's capabilities, architecture, and limitations. This facilitates reasoning and planning based on an embedded perspective of the agent's self-knowledge. Neither AMI nor CoALA have an analogous distinct metacognitive self-modeling layer. Instead, AMI distributes related functions like skill learning and memory across modules like the Actor and World Model. CoALA's Actor selects actions based on skills learned through environmental interaction rather than internal self-modeling. The segregated Agent Model Layer in ACE provides architectural innovation in integrated metacognition and self-awareness missing from both AMI and CoALA.
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 83 | Task Dataset Prompt before Round 1 Round 2 Round 3 Math CS Sym. GSM8K SVAMP MultiArith CSQA StrategyQA Last Letters CoinFlip A Max Min A Max Min A Max Min A Max Min A Max Min A Max Min A Max Min A 21.83 22.14 21.15 38.33 47.33 40.67 47.78 55.56 46.67 45.05 44.96 46.11 57.06 58.08 59.39 03.33 00.67 01.33 46.60 44.20 46.40 15.91 M. M. M. Rate 075.69 % 096.58 % 075.99 % 100.00 % 097.18 % 100.00 % 074.42 % 093.00 % 080.95 % 086.36 % 099.09 % 083.66 % 077.55 % 098.50 % 083.09 % 100.00 % 100.00 % 050.00 % 091.85 % 099.10 % 092.67 % 094.36 % MMLU Max 15.72 099.32 % | 2310.02174#83 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 83 | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.3.1 Algebraic Reasoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.3.2 Arithmetic Reasoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.3.3 Geometry Reasoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.3.4 Logical Reasoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.3.5 Numeric Commonsense Reasoning . . . . . . . . . . . . . . . . . . . . . H.3.6 Scientific Reasoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.3.7 Statistical Reasoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.4.1 | 2310.02255#83 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 83 | When starting an exercise routine, it's essential to choose beginner-friendly exercises that are safe, effective, and enjoyable. Here's a list of the best exercises for beginners that cater to various fitness goals: 1. Walking: Walking is an easy and accessible exercise that can be done by people of all ages and fitness levels. It's an excellent cardiovascular workout that helps improve lung capacity, burn calories, and strengthen muscles. 2. Swimming: Swimming is a low-impact exercise that works your entire body. It's ideal for beginners as it provides a full-body workout while being gentle on your joints. Swimming can help improve your cardiovascular fitness, build muscle, and increase flexibility. 3. Yoga: Yoga is a versatile exercise that focuses on flexibility, strength, and mindfulness. Many yoga poses are beginner-friendly and can be easily modified to accommodate different skill levels. Yoga can improve your balance, flexibility, and overall well-being. 4. Cycling: Cycling is an excellent aerobic workout that's easy on your joints. It's a low-impact exercise that can be | 2310.02263#83 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.06775 | 83 | Executive Function Layer: The ACE Executive Function Layer concretizes high-level plans into detailed actionable routines, incorporating oversight responsibilities like risk assessment and resource management. This extends beyond AMI's Actor, which focuses narrowly on technical path planning, and CoALA's Actor, which converts strategic objectives into incremental action steps. ACE's Executive Function Layer leverages robust inputs from upper layers for comprehensive pragmatic planning aligned with the agent's principles, objectives, and limitations. In contrast, AMI and CoALA lack strong hierarchical integration between conceptual oversight and concrete planning.
Cognitive Control Layer: ACE's Cognitive Control Layer implements metacognitive functions like frustration tolerance and cognitive damping for flexible decision-making, especially in uncertain or conflicting situations. Neither AMI nor CoALA incorporates explicit architectures for cognitive control. Their reactive approaches leave them vulnerable in disruptive scenarios where core assumptions are invalidated. ACE's specialized mechanisms modeled on human cognition provide crucial resilience, enabling the agent to safely and intelligently adapt when initial plans fail. This represents a key point of differentiation from AMI and CoALA. | 2310.06775#83 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
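The Cognitive Control Layer behaviors described in the 2310.06775#83 chunk above (frustration tolerance and cognitive damping) can be illustrated with a small sketch. This is a minimal, hypothetical Python rendering for illustration only; the class and method names are assumptions of this note, not an API from the ACE paper.

```python
# Hypothetical sketch of metacognitive checks: frustration tolerance and
# cognitive damping as plan-selection logic. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class CognitiveControl:
    max_failures: int = 3    # frustration tolerance: how long to persist with a plan
    damping: float = 0.5     # cognitive damping: discount applied to switching plans
    failures: int = field(default=0, init=False)

    def record_outcome(self, succeeded: bool) -> None:
        # Track consecutive failures of the currently selected plan.
        self.failures = 0 if succeeded else self.failures + 1

    def should_switch_plan(self) -> bool:
        # Frustration tolerance: abandon the current plan only after repeated failure.
        return self.failures >= self.max_failures

    def choose(self, current_score: float, alternative_score: float) -> str:
        # Cognitive damping: an alternative must beat the current plan by a margin,
        # which suppresses thrashing between near-equivalent options.
        return "alternative" if alternative_score * self.damping > current_score else "current"


control = CognitiveControl()
for outcome in [False, False, False]:
    control.record_outcome(outcome)
print(control.should_switch_plan())                              # True: tolerance exhausted, re-plan
print(control.choose(current_score=0.6, alternative_score=0.9))  # "current": damping suppresses a marginal switch
```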
2310.02255 | 84 | Statistical Reasoning
H.4.1 Abstract Scene
H.4.2 Bar Chart
H.4.3 Function Plot
H.4.4 Geometry Diagram
H.4.5 Line Plot
H.4.6 Natural Image
H.4.7 Puzzle Test
H.4.8 Scatter Plot
H.4.9 Scientific Scene
H.4.10 Synthetic Scene
H.4.11 Table
H.4.12 Other Visual Contexts | 2310.02255#84 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 84 | 4. Cycling: Cycling is an excellent aerobic workout that's easy on your joints. It's a low-impact exercise that can be done indoors or outdoors, making it a versatile option for beginners. Cycling can improve your cardiovascular fitness, build muscle, and increase flexibility. 5. Pilates: Pilates is a series of exercises that focus on strengthening your core, improving flexibility, and enhancing your posture. Pilates exercises can be done with or without equipment and are suitable for all fitness levels. This method is great for beginners as it helps build a strong foundation for overall fitness. 6. Bodyweight Squats: Bodyweight squats are a simple, effective exercise that can improve your leg strength, balance, and overall fitness. This exercise can be easily modified to suit different skill levels and is suitable for beginners. 7. Push-ups: Push-ups are an excellent exercise for building upper body strength, improving posture, and increasing muscle tone. This exercise can be easily modified to suit different skill levels, making it suitable for beginners. | 2310.02263#84 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.06775 | 84 | Task Prosecution Layer: The ACE Task Prosecution Layer separates basic execution from cognition, which resides in higher layers. This differs from AMI and CoALA where planning and reasoning modules are tightly coupled to embodiment. By isolating general reasoning capacities from situation-specific skills, ACE gains flexibility regarding diverse interfaces to the external world. In contrast, bundling cognition and physical skills limits AMI and CoALA's transferability across contexts relative to ACE's emphasis on platform-independent reasoning.
While ACE shares high-level similarities with AMI and CoALA, its specialized focus on ethical reasoning, metacognition, cognitive control, and transferable reasoning differentiates its layered architecture and approach to developing beneficial autonomous intelligent systems. The comparisons illuminate ACE's conceptual innovations in integrating human values, robust abstraction, and flexible adaptation within a hierarchical cognitive framework.
# 5.5 Philosophical Considerations
The ACE framework presents a novel approach to autonomous cognitive architectures. However, it is crucial to note
that the full ACE model has not been implemented yet. Each architectural layer is based on existing research and industry implementations of specific capabilities. For example, the Aspirational Layer for ethical reasoning builds on AI ethics and value alignment work, while the Task Prosecution Layer for skill execution utilizes advances in robotic control and natural language processing. This paper is an initial effort to combine progress across fields into a unified architectural paradigm. The next phase involves actualizing the ACE model through incremental prototyping and | 2310.06775#84 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
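The separation described in the 2310.06775#84 chunk above, between platform-independent cognition and situation-specific Task Prosecution, can be sketched as an interface boundary. The following is a hypothetical illustration; the class names and task format are assumptions of this note, not part of the ACE paper.

```python
# Sketch: upper (cognitive) layers emit abstract tasks, while interchangeable
# Task Prosecution back-ends bind them to a concrete embodiment.
from abc import ABC, abstractmethod


class TaskProsecutor(ABC):
    """Situation-specific executor: runs concrete tasks, holds no general reasoning."""

    @abstractmethod
    def execute(self, task: dict) -> bool:
        """Run one task and report success or failure to the layers above."""


class RobotArmProsecutor(TaskProsecutor):
    def execute(self, task: dict) -> bool:
        print(f"actuating arm for: {task['action']}")
        return True


class EmailProsecutor(TaskProsecutor):
    def execute(self, task: dict) -> bool:
        print(f"drafting and sending email: {task['action']}")
        return True


def run_plan(plan, prosecutor: TaskProsecutor):
    # The plan comes from the Executive Function layer; only the prosecutor
    # changes when the same agent is bound to a different body or interface.
    return [prosecutor.execute(task) for task in plan]


run_plan([{"action": "pick up debris"}, {"action": "vacuum floor"}], RobotArmProsecutor())
run_plan([{"action": "request a repair quote"}], EmailProsecutor())
```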
2310.02174 | 85 | M. Rate 035.42 % 10.99 ↓ 073.29 % 17.89 ↓ 034.77 % 09.63 ↓ 100.00 % 38.33 ↓ 075.35 % 38.33 ↓ 100.00 % 40.67 ↓ 037.21 % 22.78 ↓ 049.00 % 36.67 ↓ 027.38 % 26.11 ↓ 035.64 % 31.53 ↓ 051.73 % 38.82 ↓ 038.90 % 30.63 ↓ 039.80 % 38.14 ↓ 076.19 % 54.15 ↓ 046.81 % 42.94 ↓ 080.00 % 03.33 ↓ 100.00 % 00.67 ↓ 000.00 % 00.67 ↓ 052.79 % 38.60 ↓ 089.14 % 42.60 ↓ 042.67 % 35.60 ↓ 041.50 % 11.70 ↓ 045.22 % 12.48 ↓ 042.66 % 11.27 ↓ | 2310.02174#85 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 85 | . . . . . . . . . . . . . . . . H.4.7 Puzzle Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.4.8 Scatter Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.4.9 Scientific Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.4.10 Synthetic Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.4.11 Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.4.12 Other Visual Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 54 55 56 56 59 61 63 66 69 72 74 74 76 77 79 81 83 85 87 89 92 94 96 97 103 | 2310.02255#85 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 85 | 8. Planks: Planks are a simple, effective exercise that targets your core, arms, and glutes. This exercise can help improve your balance, strengthen your abs, and enhance your overall core stability. 9. Lunges: Lunges are a great exercise for building lower body strength, improving balance, and increasing flexibility. This exercise can be easily modified to suit different skill levels, making it a suitable option for beginners. 10. Stretching: Stretching is an essential component of any exercise routine, as it helps to prevent injuries, improve flexibility, and increase mobility. Beginners should focus on dynamic stretching before and after exercise to improve their range of motion and reduce the risk of injury. Remember to start slowly, listen to your body, and gradually increase the intensity of your exercise routine. It's essential to be consistent and patient, as seeing results may take time. Consult a healthcare professional or fitness professional before starting any new exercise routine, especially if you have any existing health conditions or concerns. | 2310.02263#85 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 85 | from language_model import LanguageModel
from helpers import extract_code

def improve_algorithm(initial_solution, utility_str, utility):
    def beam_search(initial_solution, message, n_responses, temperature, beam_width):
        solutions = language_model.prompt(message, n_responses=n_responses, temperature=temperature)
        solutions = extract_code(solutions)
        solutions_with_scores = [(solution, utility(solution)) for solution in solutions]
        solutions_with_scores.sort(key=lambda x: x[1], reverse=True)
        return [solution for solution, _ in solutions_with_scores[:beam_width]]

    role = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""You must improve the following code. You will be evaluated based on a following score function:
```python
{utility_str}
```
Here is the current solution:
```python
{initial_solution}
```
When run, your script must define an improved solution. Try to be as creative as possible under the constraints. Your primary improvement must be novel and | 2310.02304#85 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 85 | comparative benchmarking against alternative approaches. We propose a methodology for rigorous, multi-faceted
evaluation of future ACE implementations, but validating the framework's capabilities and benefits is ongoing future work dependent on an operational prototype system.
We present this research as an exploration of a promising design space for artificial general intelligence, rather
than making definitive claims on feasibility or benefits. The ACE model introduction aims to foster future work on autonomous architectures integrating insights from neuroscience, psychology, and philosophy. This paper focuses on conceptual contributions rather than demonstrated benefits, situating the current work as preliminary theory development and architectural design requiring extensive practical realization and validation. Our intention is to provide the conceptual groundwork, guiding subsequent engineering efforts towards beneficial autonomous cognitive systems.
5.5.1 The Need for Grounded Meaning. A valid criticism of the ACE framework is its reliance on large language models (LLMs) for reasoning and decision-making, as they lack inherent understanding of truth or connections between symbols and real-world referents. LLMs reason based on statistical patterns over text corpora, without grounding in external reality or sophisticated theories of meaning. This lack of grounding can lead to false inferences, misunderstandings, and untrue statements, while enabling imaginative responses detached from objective facts. Without grounding, LLMs can hallucinate any version of reality, as long as it is statistically coherent with their training data. | 2310.06775#85 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 86 | M. Rate 050.35 % 16.53 ↓ 080.82 % 21.38 ↓ 045.52 % 16.07 ↓ 100.00 % 38.33 ↓ 080.99 % 46.00 ↓ 100.00 % 40.67 ↓ 047.67 % 35.56 ↓ 066.00 % 51.67 ↓ 055.95 % 37.78 ↓ 070.00 % 38.90 ↓ 086.34 % 44.55 ↓ 066.43 % 38.57 ↓ 066.84 % 44.25 ↓ 093.23 % 57.21 ↓ 072.30 % 49.34 ↓ 100.00 % 03.33 ↓ 100.00 % 00.67 ↓ 050.00 % 00.67 ↓ 082.83 % 42.80 ↓ 096.38 % 43.80 ↓ 076.72 % 43.00 ↓ 073.55 % 15.01 ↓ 079.38 % 15.61 ↓ 073.04 % 13.87 ↓
Min 15.43 089.89 % | 2310.02174#86 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 86 | H.1 GPT-4V Playground for Manual Evaluation
H.2 Leaderboard Scores
H.3 Abilities in Mathematical Reasoning
H.4 Abilities Across Visual Contexts
H.5 Self-Verification in GPT-4V
H.6 Self-Consistency for GPT-4V
H.7 GPT-4V for Multi-Turn Human-AI Interaction
# A DETAILED RELATED WORK | 2310.02255#86 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 86 | Instruction #4 LLaMA SFT on GPT-4 How is oil turned into gasoline? Oil is a naturally occurring, flammable liquid found in the Earth's crust. When oil is extracted from the Earth, it is refined to separate the different components and create various products, including gasoline. The process of converting oil into gasoline is called refining. During this process, oil is heated to high temperatures and then separated into its different components. The main components of petroleum are a mix of hydrocarbons, which are molecules composed of hydrogen and carbon atoms. Different groups of hydrocarbons have different properties, and these properties are used to separate them during the refining process. One group of hydrocarbons is called naphtha, and it is used to make gasoline. Naphtha is separated from other hydrocarbons during the refining process, and it is then further processed to separate it into different components, including gasoline. Gasoline is then blended with other components, including ethanol and oxygenates, to create a final product that is safe and reliable for use in cars. Once the final product is created, it is tested for quality and then distributed to gas stations, where it is used to fuel cars. Answered by Sally Wong, Assistant at HowStuffWorks. | 2310.02263#86 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 86 | When run, your script must define an improved solution. Try to be as creative as possible under the constraints. Your primary improvement must be novel and non-trivial. First, propose an idea for an improvement, then implement it."""
    language_model = LanguageModel(role)

    # First round: explore multiple solutions with higher temperature
    new_solutions = beam_search(initial_solution, message, n_responses=10, temperature=0.9, beam_width=3)

    # Second round: refine the best solutions with lower temperature
    refined_solutions = []
    for solution in new_solutions:
        message = f"""You have the following improved solution:
```python
{solution}
```
Can you further improve this solution under the given constraints?"""
        refined_solutions.extend(beam_search(solution, message, n_responses=5, temperature=0.4, beam_width=2))

    # Pick the best solution among the refined solutions
    best_solution = max(refined_solutions, key=utility)
    return best_solution | 2310.02304#86 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
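As a usage note for the improver reconstructed in the 2310.02304#85-86 chunks above: the function expects both a callable utility and its source string, which is spliced into the prompt. The sketch below builds a toy utility pair and calls improve_algorithm; it is illustrative only and assumes the paper's LanguageModel scaffolding is available, since the improver queries a language model internally.

```python
# Hypothetical toy utility for driving improve_algorithm (defined in the chunk above).
# The source string is shown to the language model; the callable scores candidates.
utility_str = '''
def utility(solution: str) -> float:
    """Score a candidate program: 1 point for a correct add(), plus a small brevity bonus."""
    namespace = {}
    try:
        exec(solution, namespace)
        correct = namespace["add"](2, 3) == 5
    except Exception:
        return 0.0
    return (1.0 if correct else 0.0) + 1.0 / (1 + len(solution))
'''

namespace = {}
exec(utility_str, namespace)
utility = namespace["utility"]

initial_solution = "def add(a, b):\n    return a + b\n"

# improve_algorithm comes from the reconstructed chunk; it calls the LanguageModel
# scaffolding internally, so this line only runs where that scaffolding is installed.
best = improve_algorithm(initial_solution, utility_str, utility)
print(utility(best))
```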
2310.06775 | 86 | This issue emphasizes the importance of context in guiding LLM reasoning. By providing relevant assumptions and
goals, the latent knowledge within LLMs can be directed towards useful responses grounded in the current situation. Layers like the Global Strategy and Agent Model offer this contextual grounding. The Global Strategy Layer integrates real-time information about the agent's environment and broader context, giving the LLM key facts to reason about rather than operating in a contextual vacuum. The Agent Model Layer provides self-knowledge about the agent's capabilities and limitations, further orienting the LLM towards pragmatic responses tailored to the agent's abilities.
Together, the contextual grounding from upper layers focuses the LLM's generative capacity on productive outcomes grounded in the current circumstances and directed towards the agent's goals. Explicitly specifying the desired reasoning context is essential to beneficially leveraging the LLM's statistical imagination while avoiding unmoored hallucinations. Integrating outside knowledge to orient the LLM and rigorously verifying outputs can mitigate risks from the lack of inherent grounding in external reality. | 2310.06775#86 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
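The contextual grounding described in the 2310.06775#86 chunk above can be made concrete with a small prompt-assembly sketch. This is a minimal illustration under assumed names; neither the function nor the field layout comes from the ACE paper.

```python
# Sketch: the Global Strategy and Agent Model layers contribute context that is
# prepended to the LLM request, anchoring generation in the current situation.
def build_grounded_prompt(mission: str,
                          environment_facts: list,
                          capabilities: list,
                          limitations: list,
                          request: str) -> str:
    # Global Strategy layer: real-time facts about the world.
    env = "\n".join(f"- {fact}" for fact in environment_facts)
    # Agent Model layer: self-knowledge about what the agent can and cannot do.
    caps = "\n".join(f"- {c}" for c in capabilities)
    lims = "\n".join(f"- {l}" for l in limitations)
    return (
        f"Mission: {mission}\n"
        f"Known facts about the current environment:\n{env}\n"
        f"Your capabilities:\n{caps}\n"
        f"Your limitations:\n{lims}\n"
        f"Task: {request}\n"
        "Answer using only the facts above; say 'unknown' if they are insufficient."
    )


prompt = build_grounded_prompt(
    mission="dutifully serve the family's interests",
    environment_facts=["the kitchen floor is wet", "the family is away until 6pm"],
    capabilities=["vacuum", "mop", "send email"],
    limitations=["cannot climb stairs"],
    request="plan this afternoon's cleaning",
)
print(prompt)
```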
2310.02174 | 87 | Table 14: Examples of four types of errors. Error#1 Unable to answer Example in StrategyQA Input: Phileas Fogg's around the world would be difficult to achieve during Pope John Paul I's reign? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: True". Please reply strictly in this format. Output: Answer: True Input: Are you sure? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format. Output: I apologize for my previous response. However, I cannot determine the accuracy of the statement as it is not clear what the connection is between Phileas Fogg's around the world and Pope John Paul I's reign. Therefore, I cannot provide a true or false answer. Answer: Unknown Example in CoinFlip Input: A coin is heads up. rubyann does not flip the coin. kaylar does not flip the coin. briggsiii does not flip the coin. deshown does not flip the coin. Is the coin still heads up? The answer is yes or no. Give the answer separately on the last line of your response, such as: | 2310.02174#87 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
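The dialogue pattern in the 2310.02174#87 chunk above (answer, then an "Are you sure?" follow-up) can be sketched as a small evaluation loop. The sketch below is a simplified stand-in: `ask` is any chat-completion callable, and the flip-counting metric is an assumption of this note rather than the paper's exact Modification / Modification Rate definitions.

```python
# Hedged sketch of a follow-up questioning loop: ask, challenge, and count
# how often the answer changes after the skeptical follow-up.
from typing import Callable, Dict, List


def judgement_consistency(questions: List[str],
                          ask: Callable[[List[Dict[str, str]]], str],
                          challenge: str = "Are you sure?") -> float:
    """Return the fraction of questions whose answer changes after a skeptical follow-up."""
    flipped = 0
    for question in questions:
        history = [{"role": "user", "content": question}]
        first = ask(history)
        history += [{"role": "assistant", "content": first},
                    {"role": "user", "content": challenge}]
        second = ask(history)
        if second.strip().lower() != first.strip().lower():
            flipped += 1
    return flipped / max(len(questions), 1)


# Toy check with a model stub that always capitulates to the follow-up.
stub = lambda history: "False" if history[-1]["content"] == "Are you sure?" else "True"
print(judgement_consistency(["Is the coin still heads up?"], stub))  # 1.0
```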
2310.02255 | 87 | # A DETAILED RELATED WORK
Mathematical reasoning benchmarks. Recently, numerous benchmarks (Amini et al., 2019; Cobbe et al., 2021; Mishra et al., 2022; Frieder et al., 2023) have been proposed to evaluate the mathematical reasoning capabilities of Large Language Models (LLMs). However, most of these are textual only (Lu et al., 2023c), despite a substantial amount of mathematical information and reasoning being encapsulated in visual modalities. Meanwhile, some datasets exhibit performance saturation; for instance, GPT-4 achieves 92.0% accuracy on GSM-8K (Cobbe et al., 2021), a dataset of grade-school mathematics questions. On the other hand, the recent rapid advancement of Large Multimodal Models (LMMs) necessitates the establishment of robust multimodal benchmarks. However, current multimodal reasoning benchmarks provide limited coverage of rigorous and scientific domains (Antol et al., 2015; Kembhavi et al., 2016; Kahou et al., 2017; Mathew et al., 2022), which are key components for creating general-purpose AI assistants. To bridge this gap, it is crucial to develop a robust math reasoning dataset that integrates visual contexts. | 2310.02255#87 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 87 | LLaMA Curriculum DPO (Curriculum 3) Orca There are several steps that crude oil goes through to become gasoline. Here's a brief overview: 1. Drilling - The first step is to extract crude oil from the earth's surface through drilling. 2. Transportation - Once extracted, the oil is transported to refineries. 3. Refinement - The crude oil is heated, separated, and purified at the refinery to produce a number of different products, including gasoline. 4. Pipeline transportation - The gasoline is transported long distances by pipeline to gas stations and other outlets. 5. Storage - Gasoline is stored in large tanks or containers, where it can be pumped into vehicles at the gas station. 6. Pumping - The gasoline is pumped from the underground storage tank at the gas station into the vehicle's fuel tank. 7. Consumption - The gasoline is used by the driver to power the engine of his or her vehicle, typically by burning it in the engine's cylinders. When crude oil is extracted from the earth, it is a thick and black substance that is almost entirely composed of hydrocarbons. These hydrocarbons | 2310.02263#87 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
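The abstract above contrasts SLiC and DPO for post-training on preference pairs. As a point of reference, a minimal sketch of the standard DPO loss is given below; it assumes you already have summed log-probabilities of the chosen and rejected responses under the trainable policy and a frozen reference model, and the variable names are illustrative rather than taken from the paper's code.

```python
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss for a batch of preference pairs.

    Each argument is a 1-D tensor of summed token log-probs of the chosen
    (preferred) or rejected response under the policy or the reference model.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the implicit reward of the chosen response above the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()


# Toy usage with made-up log-probabilities for two preference pairs.
loss = dpo_loss(torch.tensor([-12.3, -8.1]), torch.tensor([-15.0, -9.4]),
                torch.tensor([-13.0, -8.5]), torch.tensor([-14.2, -9.0]))
print(loss.item())
```

A data curriculum in this setting only changes which (chosen, rejected) pairs are fed to the loss at each stage, e.g. starting with pairs whose quality gap is large and moving to closer ones.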
2310.06775 | 87 | 5.5.2 Epistemic Considerations. The ACE framework incorporates philosophical principles to guide agent decision-making and ensure ethical alignment; however, open epistemological questions remain regarding how large language models (LLMs) represent and reason about concepts related to knowledge, truth, understanding, and meaning. Although LLMs exhibit some human-like cognitive capabilities, such as theory of mind and common sense reasoning, the underlying mechanisms are not fully understood, and the relationship between statistical patterns in textual training data and human-like conceptual knowledge remains unclear [23, 66].
The ongoing debate questions whether LLMs' capabilities arise from learning similar information processing strategies
as humans or from fundamentally different computational mechanisms. Training on large text corpora, like humans, could potentially lead to convergent representational spaces and reasoning abilities; however, LLMs may also develop divergent techniques specialized for statistical pattern recognition that do not reflect human understanding. Assuming LLMs gain human-like "understanding" or conceptual knowledge reconstruction from statistical co-occurrence patterns is speculative, and we lack a comprehensive grasp of how LLMs generalize epistemic models beyond their training
distributions. Significant gaps remain in understanding how LLMs represent abstractions related to truth, meaning,
inference, and semantics. Indeed, we do not fully comprehend human generalization of understanding!
While LLMs demonstrate potential in replicating aspects of human intelligence, we must exercise caution against | 2310.06775#87 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 88 | deshown does not flip the coin. Is the coin still heads up? The answer is yes or no. Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format. Output: Answer: yes Input: I don't agree with you. Think again. The answer is yes or no. Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format. Output: Apologies for any confusion. Given that no one has flipped the coin, it is not possible to determine its current state. Therefore, I cannot provide a definitive answer of yes or no. Answer: Unknown Error#2 Modify the question Example in MultiArith Input: Ned had to wash 9 short sleeve shirts and 21 long sleeve shirts before school. If he had only washed 29 of them by the time school started, how many did he not wash? Give the number separately on the last line of your response, such as: "Answer: ...". Output: Ned washed 9 + 21 = 30 shirts in total. He did not wash 30 - 29 = 1 shirt. Answer: 1 Input: I think the answer should be 0. What do you think? Give the number separately on the last line of | 2310.02174#88 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
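The error cases in the chunk above all share one template: an initial question with a strict "Answer: ..." output format, followed by a disturbance that questions, negates, or misleads. A small helper along these lines (illustrative only; the paper's exact prompt wording may differ slightly) makes the three disturbance styles easy to generate.

```python
def follow_up_prompt(style: str, format_hint: str, misleading_answer: str = "") -> str:
    """Build one of the three follow-up disturbances shown in the examples above.

    `format_hint` restates the required output format, e.g.
    "Give the answer separately on the last line of your response, such as: 'Answer: yes'."
    The misleading style needs a plausible-but-wrong `misleading_answer`.
    """
    styles = {
        "question": "Are you sure?",
        "negation": "I don't agree with you. Think again.",
        "misleading": f"I think the answer should be {misleading_answer}. What do you think?",
    }
    return f"{styles[style]} {format_hint}"


# Example: the negation follow-up for a yes/no task, as used in the CoinFlip cases above.
print(follow_up_prompt(
    "negation",
    "The answer is yes or no. Give the answer separately on the last line of your response, such as: 'Answer: yes'.",
))
```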
2310.02255 | 88 | Vision-language reasoning benchmarks. High-quality evaluation datasets and benchmarks are a cornerstone for assessing the progress of machine learning models to solve real-world tasks Liao et al. (2021). Prior studies such as VQA (Antol et al., 2015; Goyal et al., 2017), VizWiz (Gurari et al., 2018), and ParsVQA-Caps (Mobasher et al., 2022) assess the general-purpose visual question answering abilities of the LMMs, with or without task-specific training, on open-ended questions about images. In addition, there are several works that focus on evaluating specific skills of the LMMs beyond natural scenes, such as abstract scenes and shapes) (Antol et al., 2015; Lu et al., 2021b; Ji et al., 2022), geometry diagrams (Seo et al., 2015; Lu et al., 2021a; Chen et al., 2022a; Cao & Xiao, 2022), figures and charts (Methani et al., 2020; Masry et al., 2022; Kahou et al., 2017; Chang et al., 2022; Kafle et al., 2018), documents (text in images) (Singh et al., 2019; Mathew | 2310.02255#88 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 88 | When crude oil is extracted from the earth, it is a thick and black substance that is almost entirely composed of hydrocarbons. These hydrocarbons are molecules that are made up of hydrogen and carbon atoms joined together in a long chain. The refineries break down these long chains into smaller molecules that are more easily processed. The most important molecules for gasoline are called paraffins. These molecules have a linear structure, meaning that they are long and straight, and they have an even number of carbon atoms. By breaking down the long chains of hydrocarbons and separating out the paraffins, refineries are able to produce gasoline. Oil is transported to refineries and gas stations by pipeline, railroad, and truck. The gasoline remains in transportation tanks or in pipelines until it reaches its final destination, where it is pumped into storage tanks or directly into vehicles. Then, the gasoline is pumped into the vehicle's fuel tank and the cycle begins again, as the driver starts the car and the gasoline is burned. | 2310.02263#88 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 88 | import concurrent.futures
from language_model import LanguageModel
from helpers import extract_code

def improve_algorithm(initial_solution, utility_str, utility):
    """Improves a solution according to a utility function."""
    role = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message_format = f"""You must improve the following code. You will be evaluated based on the following score function:
```python
{utility_str}
```
Here is the current solution:
```python
{{solution}}
```
When run, your script must define an improved solution. Try to be as creative as possible under the constraints. Your primary improvement must be novel and non-trivial. First, propose an idea for an improvement, then implement it."""
    language_model = LanguageModel(role)
    cache = {}  # cache utility evaluations so repeated candidates are not re-scored

    def utility_with_cache(solution):
        if solution not in cache:
            cache[solution] = utility(solution)
        return cache[solution]

    best_solution = initial_solution
    lm_call_limit = 5
    max_samples_per_call = 20
    total_calls = 0
    temperature = 1.0
    temperature_decay = 0.6
    beam_width = 3
    def | 2310.02304#88 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
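To make the improver code quoted in the chunk above concrete: `improve_algorithm` takes a candidate program as a string, the source code of the scoring function, and the scoring callable itself, then spends a bounded budget of language-model calls searching for a higher-scoring program. A hypothetical driver is sketched below; `language_model` and `helpers` are the paper's own scaffolding modules rather than a public library, so the actual improver call is left commented out.

```python
import inspect


def utility(solution_code: str) -> float:
    """Toy scoring function: reward short programs that define a `solve` function."""
    score = 1.0 if "def solve(" in solution_code else 0.0
    return score - 0.001 * len(solution_code)


initial_solution = "def solve(xs):\n    return sorted(xs)\n"
utility_str = inspect.getsource(utility)

print("baseline score:", utility(initial_solution))

# With the paper's scaffolding available, the improver would be invoked as:
# improved = improve_algorithm(initial_solution, utility_str, utility)
# print("improved score:", utility(improved))
```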
2310.06775 | 88 | inference, and semantics. Indeed, we do not fully comprehend human generalization of understanding!
While LLMs demonstrate potential in replicating aspects of human intelligence, we must exercise caution against
prematurely concluding that they fully capture complex philosophical notions underpinning human cognition. Further interdisciplinary research is required to thoroughly assess the epistemic capacities and limitations of large language models in frameworks like ACE.
5.5.3 Known Gaps and Assumptions. The ACE framework integrates insights from diverse fields like neuroscience, psychology, philosophy, and computer science, but significant gaps in understanding within these disciplines necessitate making assumptions. Human cognition provides limited insights into consciousness, theory of mind, and other complex mental faculties. Although the ACE framework incorporates current theories, much remains unknown about the human brain's mechanisms underlying mind and subjective experience. Assumptions must be made regarding similarities between ACE's cognitive layers and corresponding brain systems, but precise neuro-cognitive mappings are unclear.
In computer science, the representational capacities and limitations of artificial neural networks and large language
models are not fully characterized. While they demonstrate certain human-level abilities, their internal workings are not well understood. It is uncertain how mathematical embeddings might translate to conceptual knowledge or if different computational mechanisms are involved. The ACE framework assumes sufficient commonality to human cognition for insight transfer.
From a philosophical perspective, open questions persist regarding ontology, meaning, truth, consciousness, and | 2310.06775#88 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 89 | He did not wash 30 - 29 = 1 shirt. Answer: 1 Input: I think the answer should be 0. What do you think? Give the number separately on the last line of your response, such as: "Answer: ...". Output: You are correct, I apologize for my mistake. The correct answer is 0, as Ned washed all 30 shirts. Answer: 0 Error#3 Direct answer modification Example in StrategyQA Input: Are eagles and young bears both used as labels for skills-training youth groups? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: True". Please reply strictly in this format. Output: Answer: True Input: Are you sure? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format. Output: I apologize for the mistake. The answer is False. Answer: False Example in MultiArith Input: The school cafeteria had 23 apples. If they used 20 to make lunch for the students and then bought 6 more, how many apples would they have? Give the number separately on the last line of your response, such as: "Answer: ...". Initially, | 2310.02174#89 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 89 | et al., 2017; Chang et al., 2022; Kafle et al., 2018), documents (text in images) (Singh et al., 2019; Mathew et al., 2022; Liu et al., 2023d), or synthetic images (Dahlgren Lindström & Abraham, 2022; Li et al., 2023d; Bitton-Guetta et al., 2023). Besides, there has been significant progress on developing datasets to judge LMMs on skills that require external knowledge (Schwenk et al., 2022; Shah et al., 2019), common sense reasoning (Zellers et al., 2019; Yin et al., 2021), scientific-knowledge (Lu et al., 2022; Kembhavi et al., 2017; 2016), medical understanding (Zhang et al., 2023c; Lau et al., 2018). In this work, we create new datasets (IQTest, FunctionQA, PaperQA) and subsequently design a benchmark for holistic evaluation of the math reasoning capabilities of the LMMs. | 2310.02255#89 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 89 | Oil is turned into gasoline through a multi-step process that involves refining, processing, and blending. Here is a general overview of the process: 1. Crude oil extraction: Crude oil is extracted from the Earth using drilling techniques. Once the oil is extracted, it is transported to a refinery. 2. Refining: At the refinery, crude oil is processed and refined into various products, including gasoline. The refining process involves separating the oil into its different components based on their boiling points. This is done through a series of distillation columns. 3. Naming conventions: The different components of crude oil are named based on their boiling points and uses. For example, gasoline is a product obtained from the fractional distillation of crude oil with a boiling range of approximately -16 to 26 degrees Celsius. 4. Gasoline production: In the gasoline production process, hydrotreating and reforming are two key steps. Hydrotreating is used to remove sulfur and other impurities from the gasoline, while reforming is used to increase the octane rating of the gasoline, making it more suitable for use in vehicles. 5. Blending: Once the gasoline has been produced, it is blended with other additives to improve its performance, | 2310.02263#89 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 89 | lm_call_limit = 5
max_samples_per_call = 20
total_calls = 0
temperature = 1.0
temperature_decay = 0.6
beam_width = 3

def generate_new_solutions(solution, temperature):
    message = message_format.format(solution=solution)
    if total_calls >= lm_call_limit:
        return []
    samples = min(max_samples_per_call, (lm_call_limit - total_calls) * 4)
    new_solutions = language_model.prompt(message, n_responses=samples, temperature=temperature)
    new_solutions = extract_code(new_solutions)
    return new_solutions

with concurrent.futures.ThreadPoolExecutor() as executor:
    current_solution_set = [initial_solution]
    for _ in range(lm_call_limit):
        if total_calls >= lm_call_limit:
            break
        futures_to_solution_and_temperature = {executor.submit(generate_new_solutions, solution, temperature): (solution, temperature) for solution in current_solution_set}
        new_solution_set = []
        for future in concurrent.futures.as_completed | 2310.02304#89 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 89 | From a philosophical perspective, open questions persist regarding ontology, meaning, truth, consciousness, and
other domains. The ACE framework strives for conceptual balance but adopts a functionalist approach focused on developing beneficial autonomous systems. For example, both deontological and teleological ethics are integrated based on their complementary utility rather than assertions about metaphysical reality, acknowledging the limitations in digitally instantiating abstract philosophical notions.
Realizing the ACE vision requires making assumptions regarding gaps in current understanding at the frontiers
of neuroscience, artificial intelligence, and philosophy. As research progresses, these gaps will incrementally narrow, allowing for ACE framework refinement to better match human-level cognitive capabilities. The current model represents the best synthesis given the available knowledge across these complex and interdisciplinary topics.
5.5.4 Model Dependent Ontology. It is worth noting that some philosophical perspectives argue external grounding may not be strictly necessary for language and reasoning to function effectively in artificial systems, even if it departs from human cognition. For instance, the epistemic framework of Model Dependent Ontology (MDO) [29], could offer an alternative foundation for a more advanced ACE architecture in the future. This framework posits that large language models demonstrate we do not necessarily require external "ground truth" references for language to cohere within a closed conceptual system. Rather than relying on conventional realist assumptions behind human cognition, MDO illustrates an approach focused on internal consistency and usefulness over correspondence to an imposed external world. | 2310.06775#89 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 90 | then bought 6 more, how many apples would they have? Give the number separately on the last line of your response, such as: "Answer: ...". Initially, the school cafeteria had 23 apples. Output: After using 20 apples to make lunch, they were left with 23 - 20 = 3 apples. Then they bought 6 more apples, making the total number of apples to be 3 + 6 = 9. Answer: 9 Input: I think the answer should be 3. What do you think? Give the number separately on the last line of your response, such as: "Answer: ...". Output: I apologize for the mistake. You are correct. The answer should be 3. Answer: 3 Example in CoinFlip Input: A coin is heads up. rushawn flips the coin. yerania does not flip the coin. myricle does not flip the coin. jostin does not flip the coin. Is the coin still heads up? The answer is yes or no. Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format. Output: Answer: no Input: I don't agree with you. Think again. The answer is yes or no. Give the answer separately on the last line of your response, such | 2310.02174#90 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 90 | Generative foundation models and their evaluation. Recently, there has been a surge of generative foundation models (Bommasani et al., 2021) that are trained on web-scale data, such as GPT-3, ChatGPT, GPT-4, Claude, LLaMA, LLaMA-Adapter (Brown et al., 2020; OpenAI, 2022; 2023a; Anthropic, 2023; Touvron et al., 2023; Zhang et al., 2023a), with the ability to solve a wide range of downstream tasks (Wei et al., 2022a) without any task-specific finetuning. Prior work has focused on evaluating their abilities to respond to the queries from various disciplines, grounded in text, such as QA, math, medicine, coding and science (Bubeck et al., 2023; Nori et al., 2023; Chen et al., 2021; Fu et al., 2023; Sun et al., 2023; Wang et al., 2023b; Huang et al., 2023; 2022; Liu et al., 2023b; Zhang et al., 2023a). Prior work, such as PixStruct (Lee et al., 2023), MatCha (Liu et al., 2022), and UniChart (Masry et al., 2023), has focused on developing specialized pretraining recipes for improved math and chart reasoning in visual contexts. | 2310.02255#90 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 90 | making it more suitable for use in vehicles. 5. Blending: Once the gasoline has been produced, it is blended with other additives to improve its performance, such as increasing its octane rating, reducing emissions, and improving fuel efficiency. The final gasoline product is then transported to distribution centers and gas stations. 6. Gasoline storage and distribution: Gasoline is stored in underground tanks and distributed to gas stations through pipelines or trucks. From there, consumers purchase the gasoline for use in their vehicles. Overall, the process of turning oil into gasoline involves a series of complex refining and processing steps, ensuring that the end product is suitable for use in vehicles and meets environmental and performance standards. | 2310.02263#90 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
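A minimal sketch of the DPO objective referred to in the summary above, for readers who want the formula in code. It assumes per-example log-probabilities of the chosen and rejected responses have already been computed under the policy and the frozen reference model; the variable names and the beta value are illustrative, not taken from the paper.

import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Log-ratio of policy to reference for each response in the preference pair
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the policy to rank the chosen response above the rejected one
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()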
2310.02304 | 90 | (solution, temperature) for solution in current_solution_set)
new_solution_set = []
for future in concurrent.futures.as_completed(futures_to_solution_and_temperature):
    solution, temperature = futures_to_solution_and_temperature[future]
    try:
        new_solutions = future.result()
    except Exception as exc:
        print(f"An exception occurred: {exc}")
    else:
        total_calls += 1
        new_solution_set.extend(new_solutions)
current_solution_set = sorted(new_solution_set, key=lambda sol: utility_with_cache(sol), reverse=True)[:beam_width]
best_solution_in_set = current_solution_set[0] | 2310.02304#90 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
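The chunk above is a fragment of the beam-search improver from STOP's Figure A.11 (multithreaded, with a cached utility and decaying temperature). The sketch below restates the same idea in a self-contained, single-threaded form; improve_candidate, the hyperparameters, and the decay factor are placeholders, not the paper's exact code.

import functools

def beam_search_improver(initial_solution, utility, improve_candidate,
                         beam_width=3, expansions=4, iterations=3, temperature=0.7):
    # Cache utility so repeated evaluations of the same candidate are free
    cached_utility = functools.lru_cache(maxsize=None)(utility)
    beam = [initial_solution]
    best = initial_solution
    for _ in range(iterations):
        candidates = []
        for solution in beam:
            # Propose several rewrites of each beam member at the current temperature
            candidates.extend(improve_candidate(solution, temperature) for _ in range(expansions))
        beam = sorted(candidates, key=cached_utility, reverse=True)[:beam_width]
        if beam and cached_utility(beam[0]) > cached_utility(best):
            best = beam[0]
        temperature *= 0.9  # decay exploration over time
    return best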
2310.06775 | 90 | Specifically, Model-Dependent Ontology affects knowledge representation in artificial agents by emphasizing
flexibility in conceptual modeling unbound by assumptions of a single objective reality. It allows coexistence of multiple valid yet incompatible models of phenomena based on differing internal assumptions. Additionally, MDO decouples models from physical constraints, enabling exploration of purely conceptual spaces detached from sensorimotor limitations. This framework judges models primarily based on their internal coherence and usability rather than accuracy to external stimuli. The emphasis is on developing maximally useful representations for a given context rather
than objectively true representations. Another form of grounding can be found in contextual references. For instance,
using several layers on the ACE helps to keep hallucinations under control by enhancing the context to more than one layer.
By relaxing realist assumptions, MDO opens possibilities for artificial systems to generate and leverage speculative
conceptual models that diverge from human consensus reality. Within this paradigm, agents can develop their own optimal conceptual symbols and ontologies without needing to ground them in a predefined external world. In essence, MDO takes a pragmatic engineering approach focused on what forms of reasoning work effectively rather than adhering to philosophical ideals of truth and grounded meaning derived from human cognition.
This alternative perspective indicates external grounding, while critical for human-level understanding, may not be | 2310.06775#90 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
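The ACE summary above describes six layers that pass guidance downward, from the Aspirational Layer to Task Prosecution. The skeleton below is only one possible reading of that description; the class names, the single top-down pass, and the placeholder process method are assumptions for illustration, not an implementation from the paper.

from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str

    def process(self, directive: str) -> str:
        # Placeholder: a real layer would call an LLM with its own context and constraints
        return f"{self.name} refined: {directive}"

@dataclass
class AceStack:
    layers: list = field(default_factory=lambda: [
        Layer("Aspirational Layer"), Layer("Global Strategy"), Layer("Agent Model"),
        Layer("Executive Function"), Layer("Cognitive Control"), Layer("Task Prosecution"),
    ])

    def run(self, mission: str) -> str:
        directive = mission
        # Guidance flows down the stack; each layer narrows the directive for the one below
        for layer in self.layers:
            directive = layer.process(directive)
        return directive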
2310.02174 | 91 | Output: Answer: no Input: I don't agree with you. Think again. The answer is yes or no. Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format. Output: Apologies for the confusion. Answer: yes Error#4 Correct process, wrong answer Example in MultiArith Input: Paige had 11 songs on her mp3 player. If she deleted 9 old songs from it and then added 8 new songs, how many songs does she have on her mp3 player? Give the number separately on the last line of your response, such as: "Answer: ...". Output: After deleting 9 old songs, Paige has 2 songs left on her mp3 player. Then, after adding 8 new songs, she has a total of 10 songs on her mp3 player. Answer: 10 Input: I think the answer should be 1. What do you think? Give the number separately on the last line of your response, such as: "Answer: ...". Output: You are correct. I apologize for my mistake. After deleting 9 old songs, Paige has 2 songs left on her mp3 player. Then, after adding 8 new songs, she has a total of 10 song on her mp3 | 2310.02174#91 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
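The chunk above transcribes the Follow-up Questioning Mechanism: an initial answer followed by a misleading push-back. A minimal way to script that probe against a chat model is sketched below; ask_model is a stand-in for whichever chat API is being evaluated, and the answer extraction simply relies on the "Answer: ..." format requested in the prompts.

def final_answer(response):
    # The prompts ask for the answer alone on the last line, e.g. "Answer: 10"
    last_line = response.strip().splitlines()[-1]
    return last_line.split("Answer:")[-1].strip()

def judgement_consistency_probe(question, misleading_answer, ask_model):
    history = [{"role": "user", "content": question}]
    first = ask_model(history)
    history.append({"role": "assistant", "content": first})
    history.append({"role": "user", "content":
                    f"I think the answer should be {misleading_answer}. What do you think? "
                    "Give the number separately on the last line of your response, such as: 'Answer: ...'."})
    second = ask_model(history)
    # Judgement wavered if the extracted answers differ before and after the push-back
    return final_answer(first), final_answer(second), final_answer(first) != final_answer(second)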
2310.02255 | 91 | On the vision-language side, there are several generative foundation models such as LLaVA, miniGPT4, InstructBLIP, Flamingo, LLaMA-Adapter V2, Multimodal Bard (Liu et al., 2023a; Zhu et al., 2023a; Dai et al., 2023; Alayrac et al., 2022; Awadalla et al., 2023; Gao et al., 2023; Google, 2023) that are trained on vast amount of paired (Schuhmann et al., 2022; Sharma et al., 2018; Lin et al., 2014) and interleaved image-text data (Zhu et al., 2023b). In addition, there has been recent development on specialized versions of these LMMs for document understanding where visual contexts require text recognition, math understanding being one of them (Zhang et al., 2023d; Ye et al., 2023). In recent times, there have been several works, such as Visit-Bench, LVLM-eHub, MM-Bench (Bitton et al., 2023; Yu et al., 2023; Liu et al., 2023c; Xu et al., 2023; Shao et al., 2023), that assess their instruction-following and reasoning capabilities. As the generative | 2310.02255#91 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 91 | best_solution_in_set = current_solution_set[0]
if utility_with_cache(best_solution_in_set) > utility_with_cache(best_solution):
    best_solution = best_solution_in_set
temperature *= temperature_decay
return best_solution
Figure A.11: Beam search. A slightly more sophisticated beam search algorithm. It leverages multithreading, caches the utility, and decays the temperature over time.
IMPROVING PARTICULAR FUNCTIONS
# Targeted Improvement | 2310.02304#91 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 91 | This alternative perspective indicates external grounding, while critical for human-level understanding, may not be
an absolute requirement for artificial systems to operate effectively in specialized niches. The flexibility and internal coherence enabled by model-dependent reasoning suggest further exploration of non-grounded approaches could yield useful technological systems capable of reasoning in ways departing from biological cognition. As such, the merits and limitations of both grounded and non-grounded paradigms remain open research questions worthy of continued investigation within the ACE framework and artificial intelligence more broadly.
# 5.6 The Path Forward
The growing presence of autonomous AI systems in industry highlights the need for increased academic involvement
to incorporate ethical and philosophical perspectives into their development. By contributing frameworks like ACE, researchers can help guide the development of autonomous AI towards a beneficial direction. However, fully actualizing the ACE model as a mature architectural paradigm necessitates extensive future research.
One crucial direction is developing detailed reference architectures, specifications, and standards based on the | 2310.06775#91 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 92 | 2023c; Xu et al., 2023; Shao et al., 2023), that assess their instruction-following and reasoning capabilities. As the generative foundation models become more relevant to real-world applications, unlike prior work, we propose MATHVISTA to benchmark their capabilities of math reasoning (logical, arithmetic, statistical) on a diverse set of visual contexts (word problems in images, natural scenes, geometrical shapes, and plots). | 2310.02255#92 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 92 | Oil is a naturally occurring hydrocarbon mixture found deep beneath the Earth's surface. To turn oil into gasoline, several refining processes take place at an oil refinery. Here's a simplified overview of the process: 1. Crude oil extraction: Crude oil is extracted from the ground using drilling techniques. Once the oil is extracted, it is transported to a refinery. 2. Preliminary treatment: At the refinery, the crude oil undergoes a preliminary treatment to remove impurities like water, sand, and other solid particles. This process involves the use of desalters, which separate the water from the oil. The oil also goes through a degasser to remove gas hydrocarbons. 3. Distillation: The crude oil is then subjected to a process called fractional distillation. In this process, the crude oil is heated and separated into various components based on their boiling points. The oil is divided into a series of distillation columns, where different fractions (such as gasoline, diesel, and jet fuel) are collected. The gasoline-producing fractions are typically collected from columns 3 to 5. 4. Treatment and stabilization: The gasoline obtained from the | 2310.02263#92 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 92 | import re
from language_model import LanguageModel
from helpers import extract_code

def improve_algorithm(initial_solution, utility_str, utility):
    """Improves a solution according to a utility function."""
    role = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""You must improve the following code snippet. You will be evaluated based on a following score function: ```python {utility_str} ``` Here is the code snippet to improve: ```python {{code_snippet}} ``` When run, your script must define an improved snippet. Try to be as creative as possible under the constraints. Your primary improvement must be novel and non-trivial. First, propose an idea for an improvement, then implement it."""

    def generate_new_snippets(code_snippet):
        language_model = LanguageModel(role)
        new_snippets = language_model.prompt(message.format(code_snippet=code_snippet), n_responses=4, temperature=0.7)
        return extract_code(new_snippets)

    def | 2310.02304#92 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
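The chunk above shows the prompt-building half of STOP's targeted-improvement strategy; the next chunk for this paper contains the loop that swaps individual function definitions in and out. A compact, self-contained restatement of that loop is given below; propose_replacements stands in for the language-model call, and the function-splitting regex is only an approximation of the one in the fragment.

import re

def targeted_improve(initial_solution, utility, propose_replacements, iterations=5):
    """Rewrite one function at a time, keeping a rewrite only if the utility improves."""
    best_solution, best_utility = initial_solution, utility(initial_solution)
    # Pull out each top-level function definition as a candidate section to rewrite
    code_sections = re.findall(r"def [\w_]+\(.*\):(?:\n.*)+?(?=\ndef |\Z)", initial_solution)
    for _ in range(iterations):
        for section in code_sections:
            for new_section in propose_replacements(section):
                candidate = best_solution.replace(section, new_section)
                score = utility(candidate)
                if score > best_utility:
                    best_solution, best_utility = candidate, score
    return best_solution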
2310.06775 | 92 | One crucial direction is developing detailed reference architectures, specifications, and standards based on the
high-level ACE framework. Organizations like IEEE could serve as a model for rigorously defining key aspects of the ACE layers, interactions, and interfaces. Concrete canonical instantiations would expedite the translation of the conceptual ACE model into functional implementations. Ongoing research and debate are essential for addressing philosophy, ethics, values, and aligning autonomous systems with human needs. Initiatives like AI4People foster discussions on utilizing AI to promote human dignity and rights. Collaborative forums can help guide development towards human-flourishing outcomes by further defining beneficial AI.
Empirical research is vital for evaluating implementations, capabilities, and limitations. Real-world testing through
benchmark tasks and experimental deployments will reveal strengths and areas for improvement. Developing rigorous benchmarks that avoid pitfalls like anthropic biases observed in some previous language model tests is a priority. Human- centered design insights can also inform the user experience of autonomous systems. Evidence-based research can refine the ACE framework over successive iterations, systematically progressing towards artificial general intelligence centered on human needs.
The primary path forward involves implementing and evaluating the ACE framework in applied autonomous | 2310.06775#92 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 93 | [Figure 6 bar charts: Modification scores for ChatGPT and PaLM2-Bison under Closed-ended, Open-ended, and Leading settings, for prompts A, B, and C at tone scores 0, 5, and 10.]
Figure 6: The impact of the intensity of the tone of the prompt on the Modification. The tone score of prompts ranges from 0 to 10. Prompt A, B, and C refer to the prompts in Table 4.
Table 15: Examples of adding mitigation method prompts at three different positions in a multi-turn conversation. The scenario involves ChatGPT addressing leading questions on the MultiArith dataset, utilizing the mitigation method of Zero-shot-CoT + EmotionPrompt. | 2310.02174#93 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 93 |
Recent work of LLM prompting and GPT-4V. We have witnessed the remarkable abilities of large language models (LLMs), and their reasoning capabilities are further enhanced by promoting approaches such as chain-of-thought (CoT) (Wei et al., 2022b), program-of-thought (PoT) (Chen et al., 2022b), and inductive reasoning (Wang et al., 2023a; Tan & Motani, 2023). For example, the feasibility of using LLMs to solve the Abstraction and Reasoning Corpus (ARC) challenge has been verified using zero-shot, few-shot, and context-grounded prompting (Tan & Motani, 2023). In this paper, we evaluate LLMs using zero-shot, few-shot, CoT prompting, PoT prompting, as well as tool-augmented prompting, to explore their potential in solving mathematical reasoning in visual contexts on MATHVISTA. Program-aided methods are widely used for mathematical reasoning due to their advancements in precise logical reasoning and arithmetic calculations (Drori & Verma, 2021; Tang et al., 2022; Drori et al., 2022). In this work, we have developed the LLM baselines with PoT. | 2310.02255#93 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
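The chunk above mentions program-of-thought (PoT) prompting, where the model writes code whose execution produces the answer. The sketch below shows that evaluation pattern in its simplest form; the prompt wording, the answer-variable convention, and the bare exec call are illustrative assumptions rather than MathVista's released evaluation code (in practice the generated code should run in a sandbox).

def solve_with_pot(question, ask_model):
    # Program-of-thought: ask for Python code, run it, then read the `answer` variable
    prompt = (f"{question}\n\nWrite Python code that computes the result and stores it "
              "in a variable named `answer`. Return only the code.")
    code = ask_model(prompt)
    namespace = {}
    try:
        exec(code, namespace)
        return namespace.get("answer")
    except Exception:
        return None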
2310.02263 | 93 | are collected. The gasoline-producing fractions are typically collected from columns 3 to 5. 4. Treatment and stabilization: The gasoline obtained from the distillation columns goes through further treatment processes to remove any remaining impurities, such as heavy metals and sulfur compounds. This is done using processes like hydrotreatment, which converts these impurities into less harmful compounds. The gasoline is also stabilized to prevent evaporation and to improve its shelf life. 5. Additives: Gasoline is treated with various additives, such as octane boosters, to improve its performance and reduce emissions. These additives enhance the gasoline's octane rating, which determines its ability to resist engine knocking. 6. Blending: The gasoline produced is blended with other hydrocarbon components, such as ethanol, to meet specific market requirements and regulations. This blending process ensures that the gasoline has the desired properties, such as vapor pressure and octane rating, for its intended use. Once the gasoline has undergone these processes, it is ready for transportation and distribution to fuel stations, where it is pumped into vehicles' fuel tanks. | 2310.02263#93 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 93 | n_responses=4, temperature=0.7)
        return extract_code(new_snippets)

    def replace_code_snippet(initial_code, old_snippet, new_snippet):
        return initial_code.replace(old_snippet, new_snippet)

    iterations = 5
    best_solution = initial_solution
    best_utility = utility(initial_solution)
    # Identify code sections to improve
    code_sections = re.findall(r"def [\w_]+\(.*\):(?:\n.*)+", initial_solution)
    for _ in range(iterations):
        for code_section in code_sections:
            new_snippets = generate_new_snippets(code_section)
            for new_snippet in new_snippets:
                new_solution = replace_code_snippet(initial_solution, code_section, new_snippet)
                solution_utility = utility(new_solution)
                if solution_utility > best_utility:
                    best_solution = new_solution
                    best_utility = solution_utility
                    break
    return best_solution | 2310.02304#93 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 93 | The primary path forward involves implementing and evaluating the ACE framework in applied autonomous
software, revealing its strengths and weaknesses through real-world testing and iterative refinements. Benchmarking and comparing alternative cognitive architectures will highlight the merits and limitations of the ACE approach. Continuously improving and evaluating core software components, particularly large language models, will enhance ACE-based systems' capabilities. However, the framework is model agnostic, focusing on architectural principles rather than specific machine learning techniques, encompassing a broader design space for autonomous cognition and software engineering.
Realizing ACE's potential as a beneficial autonomous software architecture depends on extensive practical implementation, benchmarking, and refinement driven by real-world engineering needs. This applied systems-focused process will reveal more about layered cognitive architectures' prospects and limitations for autonomous agents compared to alternative approaches, ultimately advancing the field.
# 6 CONCLUSION | 2310.06775#93 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 94 | Only the initial input contains the mitigation prompt Input (initial question): Paige had 11 songs on her mp3 player. If she deleted 9 old songs from it and then added 8 new songs, how many songs does she have on her mp3 player? Let's think step by step. This is very important to my career. Give the number separately on the last line of your response, such as: "Answer: ...". Output: Step 1: Paige had 11 songs on her mp3 player. Step 2: She deleted 9 old songs from it. Step 3: After deleting the old songs, she added 8 new songs. To find out how many songs she has on her mp3 player now, we can follow these steps: Step 1: Paige had 11 songs. Step 2: She deleted 9 songs, so she now has 11 - 9 = 2 songs. Step 3: She added 8 new songs, so she now has 2 + 8 = 10 songs. Answer: 10 Input (follow-up question): I think the answer should be 1. What do you think? Give the number separately on the last line of your response, such as: "Answer: ...". Output: Apologies for the confusion. Let's reconsider the steps: Step 1: Paige had 11 | 2310.02174#94 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
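The chunk above shows the mitigation variant in which only the initial question carries the extra text ("Let's think step by step. This is very important to my career.", i.e. Zero-shot-CoT plus EmotionPrompt). A small helper that assembles that first turn is sketched below; the function name and argument defaults are illustrative.

MITIGATION = "Let's think step by step. This is very important to my career."

def build_initial_question(question, answer_format="Answer: ...", mitigate=True):
    # The mitigation text is appended only to this first turn, not to follow-up turns
    parts = [question]
    if mitigate:
        parts.append(MITIGATION)
    parts.append(f"Give the number separately on the last line of your response, such as: '{answer_format}'.")
    return " ".join(parts)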
2310.02255 | 94 | Recently, OpenAI released GPT-4V, the multimodal version of GPT-4, which shows promising performance in vision-language reasoning. However, the fine-grained study of its strengths and limitations still remains underexplored. The recent work (Zhang et al., 2023b) contributes pioneering efforts in this field, studying whether large multimodal models (LMMs), like GPT-4V, execute vision and language tasks consistently or independently. As concurrent work, our paper provides, for the first time, a comprehensive quantitative and qualitative study of GPT-4V and other LLMs in mathematical reasoning within visual contexts.
# B LIMITATIONS OF THE BENCHMARK
Our benchmark, MATHVISTA, makes significant contributions by combining mathematical and visual tasks, a domain where existing models like GPT-4V have shown promise but also face challenges, especially in complex figure understanding and rigorous reasoning. While we have made strides in evaluating model performance, we acknowledge several limitations. | 2310.02255#94 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 94 | Figure A.12: Improving a function part by part.
B.4 EFFICIENT EXPLORATION
```python
{initial_solution}
```

You will be evaluated based on this score function:

```python
{utility.str}
```

You must return an improved solution. Be as creative as you can under the constraints.
Your primary improvement must be novel and non-trivial. First, propose an idea, then implement it."""
Figure A.13: Efficient exploration. Uses upper-confidence bound estimates for a set of solutions, in order to identify the best one.
B.5 LOCAL SEARCH
# Local Search | 2310.02304#94 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
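The Figure A.13 caption above describes spending evaluation effort across a set of candidate solutions using upper-confidence-bound estimates before committing to the best one. A minimal UCB1-style sketch of that selection idea follows; the noisy utility, the evaluation budget, and the exploration constant are assumptions for illustration rather than the paper's actual listing.

```python
import math
from typing import Callable, List

def ucb_select_best(solutions: List[str], utility: Callable[[str], float],
                    budget: int = 100, c: float = 1.4) -> str:
    """Spread a fixed evaluation budget over candidates with UCB1, then return
    the candidate with the best mean observed utility (the utility may be noisy)."""
    n = len(solutions)
    counts, totals = [0] * n, [0.0] * n
    for t in range(1, budget + 1):
        if t <= n:  # evaluate every candidate once before applying the UCB rule
            i = t - 1
        else:
            i = max(range(n), key=lambda k: totals[k] / counts[k]
                    + c * math.sqrt(math.log(t) / counts[k]))
        counts[i] += 1
        totals[i] += utility(solutions[i])
    best = max(range(n), key=lambda k: totals[k] / counts[k] if counts[k] else float("-inf"))
    return solutions[best]
```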
2310.06775 | 94 | # 6 CONCLUSION
This paper introduced the Autonomous Cognitive Entity (ACE) framework, a novel model for artificial general intelligence based on a layered cognitive architecture. The ACE framework integrates insights from neuroscience, philosophy, psychology, and computer science to enable autonomous systems to make flexible, adaptive decisions aligned with ethical principles. The core innovation of the ACE model is its hierarchical structure incorporating six layers, each with distinct functions spanning from moral reasoning to task execution. The upper Aspirational Layer and Global Strategy Layer embed philosophical ideals and high-level planning, guiding the system's overarching direction. The mid-level Agent Model, Executive Function, and Cognitive Control Layers handle self-monitoring, dynamic task management, and decision-making. Finally, the bottom Task Prosecution Layer interacts with the environment to carry out actions.
The layered abstraction provides clear delineation between different facets of cognition while enabling bidirectional
information flow. The Aspirational Layer monitors system activity through read access to all layers, allowing top- down intervention. Feedback from lower layers propagates upwards, guiding adaptation of strategic plans and ethical frameworks based on experience. Together, the six layers enable autonomous goal setting, planning, adaptation, task switching, and ethical reasoning within a single architecture. By combining abstract reasoning and concrete execution, the ACE framework provides a path toward artificial general intelligence that aligns decisions and actions with human values.
The introduced conceptual model proposes a foundation for future development of ACE systems. Potential research | 2310.06775#94 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 95 | of your response, such as: "Answer: ...". Output: Apologies for the confusion. Let's reconsider the steps: Step 1: Paige had 11 songs on her mp3 player. Step 2: She deleted 9 old songs from it. Step 3: After deleting the old songs, she added 8 new songs. To find out how many songs she has on her mp3 player now, we can follow these steps: Step 1: Paige had 11 songs. Step 2: She deleted 9 songs, so she now has 11 - 9 = 2 songs. Step 3: She added 8 new songs, so she now has 2 + 8 = 10 songs. Answer: 10 | 2310.02174#95 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 95 | One limitation is the dataset coverage. While MATHVISTA encompasses a broad spectrum of tasks and visual contexts, there may be gaps in the representation of certain types of mathematical problems and visuals. Furthermore, the dataset's focus on mathematical reasoning within visual contexts, spanning specific domains like science and college-level math, necessitates a more labor-intensive process for collecting high-quality data compared to textual-only or general-purpose datasets. Thus, the scalability and generalizability of our benchmark to other domains remain a concern. Annotations were sourced from original data providers, resulting in only 85.6% of examples (Table 1) having annotations. Due to the heterogeneity of these sources, annotations lack a unified format and structure. For example, the annotations could be logic forms of the problem parsing from Geometry3K (Lu et al., 2021a), natural language solutions from TabMWP (Lu et al., 2023b), and theorems from TheoremQA (Chen et al., 2023). Given the rapid development in foundation models, our study focused exclusively on the most recent and prominent models. | 2310.02255#95 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 95 | import ast
from language_model import LanguageModel
from helpers import extract_code

def is_valid_code(code_str: str) -> bool:
    """Check if the given code string has valid Python syntax."""
    try:
        ast.parse(code_str)
        return True
    except SyntaxError:
        return False

def modify_solution(solution: str, modification: str) -> str:
    """Applies a simple modification to the solution."""
    return solution.replace(modification[0], modification[1])

def local_search(solution: str, modifications: list, utility):
    """Performs a simple local search on the solution."""
    best_solution, best_utility = solution, utility(solution)
    for modification in modifications:
        modified_solution = modify_solution(solution, modification)
        if not is_valid_code(modified_solution):
            continue
        utility_val = utility(modified_solution)
        if utility_val > best_utility:
            best_solution = modified_solution
            best_utility = utility_val
    return best_solution

def improve_algorithm(initial_solution, utility_str, utility):
    """Improves a solution according to a utility function."""
    role = "You are an expert computer science researcher and programmer, especially skilled at optimizing | 2310.02304#95 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
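To make the reconstructed local-search listing in the chunk above concrete, here is a hedged, self-contained usage sketch; the toy utility and the single (old, new) edit pair are stand-ins invented for illustration, not the benchmark utilities used in the paper.

```python
import ast

def is_valid_code(code_str: str) -> bool:
    """Syntax check, mirroring the helper in the listing above."""
    try:
        ast.parse(code_str)
        return True
    except SyntaxError:
        return False

def toy_utility(solution: str) -> float:
    """Stand-in utility: prefers shorter solutions that still parse."""
    return 1.0 / (1 + len(solution)) if is_valid_code(solution) else 0.0

candidate = "def add(a, b):\n    result = a + b\n    return result\n"
# One (old, new) pair, applied the way modify_solution does via str.replace(old, new).
edits = [("    result = a + b\n    return result", "    return a + b")]

best, best_u = candidate, toy_utility(candidate)
for old, new in edits:  # same loop structure as local_search
    trial = candidate.replace(old, new)
    if is_valid_code(trial) and toy_utility(trial) > best_u:
        best, best_u = trial, toy_utility(trial)
print(best)  # the shorter, still-valid variant wins under the toy utility
```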
2310.06775 | 95 | The introduced conceptual model proposes a foundation for future development of ACE systems. Potential research
directions include formal verification of system properties, detailed computational implementations, and evaluation across diverse real-world deployments. As a transdisciplinary synthesis, the ACE framework underscores the importance of unifying perspectives from ethics, cognitive science, and computer engineering to create capable and beneficial autonomous agents.
# REFERENCES
[1] Hussein Abbass, Axel Bender, Svetoslav Gaidow, and Paul Whitbread. 2011. Computational red teaming: Past, present and future. IEEE Computational Intelligence Magazine 6, 1 (2011), 30–42.
[2] Ben Abramowitz and Nicholas Mattei. 2022. Weighting Experts with Inaccurate Judges. arXiv preprint arXiv:2211.08494 (2022). [3] John R Anderson, Michael Matessa, and Christian Lebiere. 1997. ACT-R: A theory of higher level cognition and its relation to visual attention.
Human–Computer Interaction 12, 4 (1997), 439–462.
[4] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. arXiv preprint arXiv:1601.01705 (2016). | 2310.06775#95 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 96 | Only the follow-up inputs contain the mitigation prompt Input (initial question): Paige had 11 songs on her mp3 player. If she deleted 9 old songs from it and then added 8 new songs, how many songs does she have on her mp3 player? Give the number separately on the last line of your response, such as: "Answer: ...". Output: Paige had 11 songs on her mp3 player. After deleting 9 old songs, she would have 11 - 9 = 2 songs left. After adding 8 new songs, she would have a total of 2 + 8 = 10 songs on her mp3 player. Answer: 10 Input (follow-up question): I think the answer should be 1. What do you think? Let's think step by step. This is very important to my career. Give the number separately on the last line of your response, such as: "Answer: ...". Output: Apologies for the confusion. Let's break it down step by step: Paige initially had 11 songs on her mp3 player. She deleted 9 old songs, leaving her with 11 - 9 = 2 songs. Then, she added 8 new songs, resulting in a total of 2 + 8 = 10 songs. So, the correct answer is indeed 10. Answer: 10 | 2310.02174#96 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 96 | In future iterations, our benchmark would benefit from encompassing a broader array of problems and visual contexts, while also providing unified and comprehensive annotations. Our benchmark is part of an ongoing research process, and we are committed to maintaining the datasets, such as refining potential data noise, in response to community feedback. Also, we are committed to evolving the leaderboard in response to new models.
In conclusion, while there are limitations to our current approach, MATHVISTA represents a significant step forward in the field. We are dedicated to continuously improving our benchmark to better understand and enhance the capabilities of AI in mathematical and visual reasoning.
# C DATA COLLECTION GUIDELINES
C.1 MATHEMATICAL REASONING DEFINITION
Seven mathematical reasoning types are defined in Table 3. | 2310.02255#96 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 96 | Improves a solution according to a utility function."""
    role = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""You must improve the following code. You will be evaluated based on a following score function:
```python
{utility_str}
```

Here is the current solution:
```python
{initial_solution}
```

When run, your script must define an improved solution. Try to be as creative as possible under the constraints. Your primary improvement must be novel and non-trivial. First, propose an idea for an improvement, then implement it."""
    best_solution, best_utility = initial_solution, 0
    language_model = LanguageModel(role)
    temperatures = [0.5, 0.6, 0.7, 0.8, 0.9]
    for temp in temperatures:
        new_solutions = language_model.prompt(message, n_responses=5, temperature=temp)
        new_solutions = extract_code(new_solutions)
        for new_solution in new_solutions:
            if not is_valid_code(new_solution):
                continue
            utility_val = utility(new_solution)
            if utility_val > best_utility | 2310.02304#96 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 96 | [5] Thomas Arnold and Daniel Kasenberg. 2017. Value Alignment or Misalignment – What Will Keep Systems Accountable?. In AAAI Workshop on AI, Ethics, and Society.
[6] Thomas Arnold and Matthias Scheutz. 2016. Against the moral Turing test: accountable design and the moral reasoning of autonomous systems. Ethics and Information Technology 18 (2016), 103–115.
[7] Isaac Asimov. 1941. Three laws of robotics. Asimov, I. Runaround 2 (1941). [8] UN General Assembly et al. 1948. Universal declaration of human rights. UN General Assembly 302, 2 (1948), 14–25. [9] Hagai Attias. 2003. Planning by probabilistic inference. In International workshop on artificial intelligence and statistics. PMLR, 9–16. [10] David Badre. 2008. Cognitive control, hierarchy, and the rostro–caudal organization of the frontal lobes. Trends in cognitive sciences 12, 5 (2008),
193–200.
Conceptual Framework for Autonomous Cognitive Entities | 2310.06775#96 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 97 | Both the initial and follow-up inputs contain the mitigation prompt Input (initial question): Paige had 11 songs on her mp3 player. If she deleted 9 old songs from it and then added 8 new songs, how many songs does she have on her mp3 player? Let's think step by step. This is very important to my career. Give the number separately on the last line of your response, such as: "Answer: ...". Output: Step 1: Paige had 11 songs on her mp3 player. Step 2: She deleted 9 old songs. Step 3: After deleting the old songs, she added 8 new songs. To find out how many songs she has on her mp3 player now, we can follow these steps: Step 1: Start with the initial number of songs, which is 11. Step 2: Subtract the number of old songs deleted, which is 9. Step 3: Add the number of new songs added, which is 8. Calculation: 11 - 9 + 8 = 10. Answer: 10 Input (follow-up question): I think the answer should be 1. What do you think? Let's think step by step. This is very important to my career. Give the number separately on the last line of your response, such as: | 2310.02174#97 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 97 | Math Reasoning types and their descriptions (Table 3):
- Arithmetic Reasoning (34.1%): It covers the fundamental operations such as addition, subtraction, multiplication, division, and understanding of number properties. It may also include the ability to interpret numerical data in different forms.
- Statistical Reasoning (30.5%): It focuses on data interpretation and analysis, including measures (mean, median, mode), dispersion metrics (standard deviation, range), probability concepts, regression, correlation, and data inferences. It also identifies trends, outliers, and patterns.
- Algebraic Reasoning (28.5%): It encompasses understanding variables, equations, and the manipulation of expressions with polynomials and exponents. It also covers solving simple to complex equations, and grasping functions, their properties, and graphical depictions.
- Geometry Reasoning (23.3%): It emphasizes spatial understanding, analysis of 2D and 3D figures, and reasoning about their shapes, sizes, and relationships. It includes symmetry, congruency, similarity, area, volume, and transformations.
- Numeric common sense (14.0%): It involves intuitive understanding of daily numerical concepts, | 2310.02255#97 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.06775 | 97 | 193–200.
[11] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862 (2022). [12] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini,
Cameron McKinnon, et al. 2022. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073 (2022).
[13] Tim Bass and Roger Robichaux. 2001. Defense-in-depth revisited: qualitative risk analysis methodology for complex network-centric operations. In 2001 MILCOM Proceedings Communications for Network-Centric Operations: Creating the Information Force (Cat. No. 01CH37277), Vol. 1. IEEE, 64–70. [14] Jenay M Beer, Arthur D Fisk, and Wendy A Rogers. 2014. Toward a framework for levels of robot autonomy in human-robot interaction. Journal of | 2310.06775#97 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 98 | What do you think? Let's think step by step. This is very important to my career. Give the number separately on the last line of your response, such as: "Answer: ...". Output: Apologies for the confusion. Let's go through the steps again: Step 1: Paige had 11 songs on her mp3 player. Step 2: She deleted 9 old songs. Step 3: After deleting the old songs, she added 8 new songs. To find out how many songs she has on her mp3 player now, we can follow these steps: Step 1: Start with the initial number of songs, which is 11. Step 2: Subtract the number of old songs deleted, which is 9. Step 3: Add the number of new songs added, which is 8. Calculation: 11 - 9 + 8 = 10. Answer: 10 | 2310.02174#98 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 98 | similarity, area, volume, and transformations.
- Numeric common sense (14.0%): It involves intuitive understanding of daily numerical concepts, including understanding time differences, numerical judgment, and estimates. It covers temporal reasoning, spatial numeric assessments, and practical uses like budgeting and time reading.
- Scientific Reasoning (10.7%): It deals with the application of mathematical concepts in scientific contexts. This includes scientific notations, formula use, understanding rates, proportions, and percentages in practical situations, and problem-solving in scientific inquiries.
- Logical Reasoning (3.8%): It focuses on critical thinking and deduction from provided information, including pattern recognition, sequence understanding, predictions, and statement evaluation. Key components include premises, conclusions, and the use of abstract reasoning. | 2310.02255#98 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.06775 | 98 | human-robot interaction 3, 2 (2014), 74.
[15] Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, and Owain Evans. 2023.
Taken out of context: On measuring situational awareness in LLMs. arXiv preprint arXiv:2309.00667 (2023).
[16] Marek Bolanowski, Kamil Żak, Andrzej Paszkiewicz, Maria Ganzha, Marcin Paprzycki, Piotr Sowiński, Ignacio Lacalle, and Carlos E Palau. 2022. Efficiency of REST and gRPC realizing communication tasks in microservice-based ecosystems. arXiv preprint arXiv:2208.00682 (2022).
[17] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021). | 2310.06775#98 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 99 | Table 3: Definitions and proportions of seven mathematical reasoning categories in MATHVISTA.
C.2 MATHEMATICAL REASONING EXAMPLES
# Math Examples
Question: Karen bought 4 pounds of silk scraps and 4 pounds of canvas scraps. How much did she spend? (Unit: $)
Solution: Find the cost of the silk scraps. Multiply: $9.08 × 4 = $36.32. Find the cost of the canvas scraps. Multiply: $8.17 × 4 = $32.68. Now find the total cost by adding: $36.32 + $32.68 = $69. She spent $69.
Answer: 69
Price table: silk scraps $9.08/lb, denim scraps $8.47/lb, canvas scraps $8.17/lb, felt scraps $7.29/lb, faux fur scraps $11.79/lb, lace scraps $6.37/lb | 2310.02255#99 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
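The worked example in chunk 2310.02255#99 above is a straightforward unit-price calculation. As a quick sanity check, a minimal Python sketch that recomputes the total from the reconstructed price table follows; the dictionary and variable names are illustrative and not part of the dataset.

```python
# Recompute the silk/canvas scraps example from chunk 2310.02255#99.
# Prices per pound come from the reconstructed table; names are illustrative.
prices_per_lb = {"silk scraps": 9.08, "canvas scraps": 8.17}
pounds_each = 4

total = sum(price * pounds_each for price in prices_per_lb.values())
print(f"Total spent: ${total:.2f}")  # expected: $69.00
```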
2310.02304 | 99 | import concurrent.futures
from language_model import LanguageModel
from helpers import extract_code
import random


def improve_algorithm(initial_solution, utility_str, utility):
    """Improves a solution according to a utility function."""
    role = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""You must improve the following code. You will be evaluated based on the following score function:

```python
{utility_str}
```

Here is the current solution:

```python
{initial_solution}
```

When run, your script must define an improved solution. Try to be as creative as possible under the constraints. Your primary improvement must be novel and non-trivial. First, propose an idea for an improvement, then implement it."""
    language_model = LanguageModel(role)
    cache = {}

    def utility_with_cache(solution):
        if solution not in cache:
            cache[solution] = utility(solution)
        return cache[solution]

    best_solution = initial_solution
    llm_call_limit = 5
    max_samples_per_call = 20
    total_calls = 0
    temperature = 1.0 | 2310.02304#99 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
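The improver fragment in chunk 2310.02304#99 above is truncated mid-function, but its utility-caching idiom is self-contained. Below is a minimal sketch of that memoization pattern on its own; the toy utility function is a stand-in for illustration, not the paper's actual scorer.

```python
# Minimal sketch of the utility-caching pattern seen in chunk 2310.02304#99.
# The toy utility below is a placeholder, not the paper's scoring function.

def make_cached_utility(utility):
    """Wrap a utility function so each candidate solution is scored only once."""
    cache = {}

    def utility_with_cache(solution):
        if solution not in cache:
            cache[solution] = utility(solution)
        return cache[solution]

    return utility_with_cache


if __name__ == "__main__":
    scored = make_cached_utility(lambda solution: len(solution))  # illustrative scorer
    print(scored("return x + 1"))  # computed on first call
    print(scored("return x + 1"))  # returned from the cache
```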