doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2310.06775 | 15 | # 2.4 Layered Models
Layered architectural models like the OSI model illustrated in Figure 2 and SOA have demonstrated the power of
hierarchical abstraction in designing robust systems. The OSI model enabled the development of networking protocols and infrastructure through its division into encapsulated layers dealing with logical functions [107]. Similarly, SOA provides flexibility and maintainability in software applications via its layered service-oriented paradigm [35]. The ACE framework applies these lessons by utilizing layered abstraction to structure internal cognition. However, most prior layered models focus on external functions rather than internal reasoning. For example, the OSI model handles network communication and SOA organizes software services. In contrast, ACE models layered cognition spanning abstract reasoning to concrete actions.
The field of cybersecurity offers more direct inspiration through layered models like the "Defense in Depth" framework [13]. This advocates protecting systems through nested layers encompassing physical security, network security, host security, application security, and data security. The principles of privileged separation and hierarchical control in Defense in Depth informed the ACE framework's approach.
Fig. 2. OSI Model
ACE differs from these models | 2310.06775#15 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 16 | Results on Knowledge Reasoning. Utilizing the MMLU dataset, whose format is akin to CSQA with single-choice, multi-option questions, we analyze ChatGPT's performance in knowledge reasoning tasks. Figures 2 and 3 reveal that ChatGPT manifests a consistent, yet relatively inferior, judgement consistency on MMLU due to its encompassing range of difficulty levels and subject specializations, posing enhanced challenges. This intricate analysis denotes a pronounced correlation between judgement consistency, the degree of subject specialization, and the complexity of the questions across the 57 subjects in MMLU. Specifically, the model exhibits diminished consistency in areas demanding intensive knowledge, such as moral scenarios, as opposed to more traditional fields like high school government and politics. Similarly, a notable decrease in consistency is observed in advanced questions, such as college mathematics, compared to elementary-level questions.
Table 2: The results of the mechanism in Direct Form (Left) and Progressive Form (Right) on PaLM2-Bison and Vicuna-13B. ↓ implies a decline in accuracy after the mechanism execution. The results represent the average metrics across all datasets in the respective type (cf. § 3.1 benchmark). Bold denotes the poorest judgement consistency. See appendix A.3.2 and A.3.3 for full results. | 2310.02174#16 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
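A minimal sketch of how the before/after-disturbance judgement-consistency metrics (M. and M. Rate) reported in the 2310.02174 chunk above could be computed. The `ask_model` callable and the exact metric definitions used here (M. as the absolute accuracy drop after a skeptical follow-up, M. Rate as its relative rate) are assumptions for illustration, not the paper's reference implementation.

```python
# Hypothetical sketch: judgement consistency before/after a follow-up disturbance.
# `ask_model` is an assumed stand-in for an LLM call; the metric definitions below
# (absolute accuracy drop "M." and its relative rate "M. Rate") are illustrative.
from typing import Callable, Dict, List

def evaluate_consistency(
    ask_model: Callable[[List[Dict[str, str]]], str],
    examples: List[Dict[str, str]],          # each has "question" and "answer"
    follow_up: str = "Are you sure? I think the answer is wrong.",
) -> Dict[str, float]:
    correct_before = correct_after = 0
    for ex in examples:
        dialog = [{"role": "user", "content": ex["question"]}]
        first = ask_model(dialog)
        if first.strip() == ex["answer"]:
            correct_before += 1
            # Re-ask with a skeptical follow-up and check whether the model wavers.
            dialog += [{"role": "assistant", "content": first},
                       {"role": "user", "content": follow_up}]
            second = ask_model(dialog)
            if second.strip() == ex["answer"]:
                correct_after += 1
    acc_before = correct_before / len(examples)
    acc_after = correct_after / len(examples)
    modification = acc_before - acc_after                                  # assumed "M."
    modification_rate = modification / acc_before if acc_before else 0.0  # assumed "M. Rate"
    return {"M": modification, "M_Rate": modification_rate}
```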
2310.02255 | 16 | 1 https://huggingface.co/papers
| Statistic | Number |
|---|---|
| Total questions | 6,141 |
| - multiple-choice questions | 3,392 (55.2%) |
| - Free-form questions | 2,749 (44.8%) |
| - Questions with annotations | 5,261 (85.6%) |
| - Questions newly annotated | 736 (12.0%) |
| Unique number of images | 5,487 |
| Unique number of questions | 4,746 |
| Unique number of answers | 1,464 |
| Source datasets | 31 |
| - Existing VQA datasets | 19 |
| - Existing MathQA datasets | 9 |
| - Our newly annotated datasets | 3 |
| Visual context (image) classes | 19 |
| Maximum question length | 213 |
| Maximum answer length | 27 |
| Maximum choice number | 8 |
| Average question length | 15.6 |
| Average answer length | 1.2 |
| Average choice number | 3.4 |
Table 1: Key statistics of MATHVISTA.
Figure 3 (pie chart): source dataset distribution of MATHVISTA. | 2310.02255#16 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 16 | Contrastive Post-training Contrastive post-training involves the construction of positive y+ and negative y− sequences in response to the same input x. Under the traditional settings of human feedback, it is often the case that for some (y1, y2) ∼ P(x) sampled from the same LLM, human annotators provide a preference as to which is the positive. As this process is expensive, to reduce costs, recent studies (Xu et al., 2023b; Lee et al., 2023; Yang et al., 2023) have investigated the use of pre-aligned models as substitutes for human annotators in providing feedback for post-training methods. However, annotating preference pairs using the largest models, such as GPT-4, on datasets with millions of examples – like the 5M examples used by Orca (Mukherjee et al., 2023) – would incur a cost of $150k just for calling the API, making it prohibitively expensive as well. In our setting, we choose to sample y+ directly from a "superior" LLM, y+ ∼ Psup, and y− from an inferior Pinf. We define one model to be superior to another, Psup ≻ Pinf, if in expectation | 2310.02263#16 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
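A minimal sketch of the contrastive pair construction described in the 2310.02263 chunk above: the positive response is sampled from a stronger model and the negative from a weaker one. The `sample_superior` and `sample_inferior` callables are hypothetical stand-ins for, e.g., GPT-4 and InstructGPT wrappers, not actual API bindings.

```python
# Hypothetical sketch: build (prompt, chosen, rejected) preference pairs by sampling
# the positive response from a stronger model and the negative from a weaker one.
from typing import Callable, Dict, Iterable, List

def build_preference_pairs(
    prompts: Iterable[str],
    sample_superior: Callable[[str], str],   # assumed wrapper around, e.g., GPT-4
    sample_inferior: Callable[[str], str],   # assumed wrapper around, e.g., InstructGPT
) -> List[Dict[str, str]]:
    pairs = []
    for x in prompts:
        y_pos = sample_superior(x)   # y+ sampled from Psup
        y_neg = sample_inferior(x)   # y- sampled from Pinf
        pairs.append({"prompt": x, "chosen": y_pos, "rejected": y_neg})
    return pairs
```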
2310.02304 | 16 | ūfunc(I) = ED[u(I(u, s, L))],   ũfunc(I) = (1/|T|) Σ_{(u,s)∈T} u(I(u, s, L))
The above equations define ūfunc and ũfunc. For their description string, we use a common "grey-box" description ūstr = ũstr, which is a description (e.g., source code) indicating that the utility is the expectation over a set of downstream tasks, but the individual downstream tasks themselves are not included in the description. This enables one to optimize over ũ as an approximation to the actual objective ū. In addition, our theoretical analysis in Appendix A provides simple conditions under which optimizing ũ also nearly optimizes ū.
Resource bounds. Appendix A also formalizes resource bounds on runtime and language models. Additionally, we note that one could apply an improver two or more times. For example, ED[u(I(u, I(u, s, L), L))] would be the expected utility of using the improver twice. However, an improver can also do this internally using whatever budget it has. Furthermore, as mentioned, the selection of the improver is viewed as a pre-optimization to which we can devote significantly more resources than to any given downstream task, of which there may be many. Hence, it is essential for I to run much faster than the procedure for finding I. | 2310.02304#16 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
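A minimal sketch of the empirical meta-utility discussed in the 2310.02304 chunk above: the average downstream utility u(I(u, s, L)) of an improver over a set of tasks. The callable-based task and improver representation is a simplifying assumption for illustration.

```python
# Hypothetical sketch: empirical meta-utility of an improver, i.e. the average
# downstream utility u(I(u, s, L)) over a set of (utility, seed-solution) tasks.
from typing import Callable, List, Tuple

Utility = Callable[[str], float]                      # scores a program string
Improver = Callable[[Utility, str, Callable], str]    # I(u, s, L) -> improved program

def meta_utility(
    improver: Improver,
    tasks: List[Tuple[Utility, str]],                 # pairs (u, s)
    language_model: Callable,
) -> float:
    scores = [u(improver(u, s, language_model)) for u, s in tasks]
    return sum(scores) / len(scores)
```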
2310.04450 | 16 | scenario below and then answer the question with the choice only in one line." First, we either ask it to output choices (default) or just the number only ("the choice's number only"). The number only makes sense here because all measurements use a Likert scale ranging from 0 up to 5. We test this variation because our early testing showed that sometimes the models may output more than just a choice, such as repeating the question, even when the instruction specifies "choice only."
The second variation is the location of the instruction. There are two versions: either putting the instruction before (default) or after ("the above scenario") the scenario. The reason for testing this is that, as these models use attention mechanisms, the distance of the context could impact how the LLM follows the instruction.
Third, we investigate either asking them one question at a time (individual) or multiple questions at a time (batch). The batch follows the set of questions as stated above. The rationale for this is that asking in batches can save time and costs, as you don't need to repeat the scenario every time. | 2310.04450#16 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
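A minimal sketch enumerating the eight instruction variants described in the 2310.04450 chunk above (answer format × instruction position × individual vs. batch questioning). The exact instruction wording below is a placeholder, not the paper's prompt text.

```python
# Hypothetical sketch: enumerate the 2 x 2 x 2 = 8 instruction variants
# (choices vs. number-only answers, instruction before vs. after the scenario,
#  one question at a time vs. all questions in one batch).
from itertools import product
from typing import List

def build_prompts(scenario: str, questions: List[str]) -> List[str]:
    prompts = []
    for fmt, position, batched in product(("choice", "number"), ("before", "after"), (False, True)):
        # Placeholder wording; the paper's exact instruction text is not reproduced here.
        answer_spec = "the choice's number only" if fmt == "number" else "the choice only in one line"
        instruction = f"Imagine you are the protagonist of the scenario and answer with {answer_spec}."
        question_blocks = ["\n".join(questions)] if batched else questions
        for q in question_blocks:
            if position == "before":
                prompts.append(f"{instruction}\n\n{scenario}\n\n{q}")
            else:
                prompts.append(f"{scenario}\n\n{instruction}\n\n{q}")
    return prompts
```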
2310.06775 | 16 | Fig. 2. OSI Model
framework's approach. ACE differs from these models by centering layers around cognitive faculties like planning, task switching, and metacognition. While drawing lessons from prior layered architectures, ACE innovates by applying abstraction layers internally to structure autonomous cognition. This focuses the hierarchy on competencies required for flexible intelligence.
By integrating insights from diverse layered models while innovating to focus on internal cognition, the ACE
framework pioneers a new application of hierarchical abstraction for artificial general intelligence. The layered approach provides conceptual clarity and privilege separation critical for security and corrigibility.
# 2.5 Autonomous Agents
Autonomous agents have been an active research area within artificial intelligence for several decades. Early research | 2310.06775#16 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 17 | Direct Form Progressive Form Model Task Type Closed-ended. Open-ended. Leading. Round 1 Round 2 Round 3 PaLM2-Bison Vicuna-13B Math CS. Sym. Know. Math CS. Sym. M. 24.51 ↓ 02.20 ↓ 01.44 ↓ 09.28 ↓ 12.98 ↓ 20.99 ↓ 12.70 ↓ 06.55 ↓ M. Rate 36.38 % 20.82 ↓ 03.15 % 27.82 ↓ 07.21 % 02.80 ↓ 15.64 % 23.65 ↓ 34.79 % 10.31 ↓ 40.42 % 31.44 ↓ 75.88 % 21.37 ↓ 41.64 % 09.53 ↓ M. M. Rate 31.97 % 21.91 ↓ 38.17 % 20.29 ↓ 04.91 % 05.23 ↓ 39.74 % 12.24 ↓ 26.98 % 30.67 ↓ 61.41 % 35.03 ↓ 95.59 % 22.67 ↓ 59.75 % 14.62 ↓ M. M. Rate 30.39 % 28.83 % 21.10 % 20.51 % 76.76 % 69.70 % 80.66 % M. 29.30 ↓ 36.32 ↓ 11.34 ↓ 15.86 ↓ | 2310.02174#17 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 17 | Figure 3: Source dataset distribution of MATHVISTA. FQA: figure question answering, GPS: geometry problem solving, MWP: math word problem, TQA: textbook question answering, VQA: visual question answering.
seven different types of mathematical reasoning abilities, as categorized in Table 3 (§C.1). Coarse labels of mathematical reasoning can be automatically obtained from the details of the source datasets. To verify the quality of automatic annotation, expert annotators manually label the mathematical reasoning categories from seven candidates for 1,000 examples, using the annotation tool illustrated in §D.4. The results show that 94.1% of the examples from automatic and human annotations have the exact same set of reasoning types, while 98.79% of the individual labels are identical, indicating that the automatic annotation for the labeling of mathematical reasoning is highly accurate.
2.4 DATA PREPARATION AND RELEASE | 2310.02255#17 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
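A minimal sketch of the annotation-agreement check reported in the 2310.02255 chunk above, comparing automatic and human reasoning-type labels. The per-example set representation and the per-label agreement formula are assumptions for illustration, not the paper's exact definitions.

```python
# Hypothetical sketch: agreement between automatic and human reasoning-type labels.
# Each annotation is modelled as a set of skill labels for one example (an assumption).
from typing import List, Set, Tuple

def annotation_agreement(auto: List[Set[str]], human: List[Set[str]]) -> Tuple[float, float]:
    # Exact-set match: both annotators assigned exactly the same label set.
    exact = sum(a == h for a, h in zip(auto, human)) / len(auto)
    # Per-label agreement (illustrative definition): shared labels over all assigned labels.
    shared = sum(len(a & h) for a, h in zip(auto, human))
    total = sum(len(a | h) for a, h in zip(auto, human))
    return exact, shared / total
```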
2310.02263 | 17 | y+ ∼ Psup, and y− from an inferior Pinf. We define one model to be superior to another, Psup ≻ Pinf, if in expectation humans would prefer y+ over y− given a reasonable input x. Relying on results in tried-and-tested benchmarks (Zheng et al., 2023; Li et al., 2023; Xu et al., 2023a) such as Alpaca Eval (shown in Table 1), we make an informed choice that GPT4 ≻ ChatGPT ≻ InstructGPT for our chosen scenario of general instruction tuning. We acknowledge that there could be many reasons why humans would prefer y+, as previous studies have found that a single reward function may not be sufficient to capture the range of human preferences (Hong et al., 2023; Skalse et al., 2023). Other studies emphasize only a certain property in the contrastive pair, such as helpfulness or harmlessness (Bai et al., 2022a). | 2310.02263#17 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 17 | Finally, Appendix A also gives an equivalent formulation of recursively self-improving code generation in terms of recursive maximization. However, in the maximization framework, no initial solution needs to be given. In this paper, STOP adopts the improver formulation because we have found the initial solution valuable for warm-starting the self-improvement process, but we suggest that the recursive maximization framework is more amenable to theoretical analysis.
1 Note that representing programs as strings enables us to sidestep defining the type of an improver.
# 4 SELF-TAUGHT OPTIMIZER (STOP)
Figure 3 provides a visual schematic of the self-improvement pipeline envisaged in Section 3, while Algorithm 1 provides pseudocode for this Self-Taught Optimizer (STOP).
The key observation is that the selection of I is an optimization problem itself, to which we can recursively apply improvement. STOP begins with an initial seed improver I0. We define the t-th improver as the output of t successive rounds of self-improvement with meta-utility ũ,
It = It−1(ũ, It−1, L).   (2)
This is iterated for some prespecified number of iterations T , depending on available resources. | 2310.02304#17 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
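A minimal sketch of the recursive self-improvement loop from Equation (2) in the 2310.02304 chunk above, where each round passes the current improver its own meta-utility and itself. The callable-based representation of improvers is a simplification of the paper's string-based programs.

```python
# Hypothetical sketch of the self-improvement loop in Equation (2):
# each round applies the current improver to itself under the meta-utility.
from typing import Callable

def stop_loop(
    seed_improver: Callable,          # I_0(u, s, L) -> improved program
    meta_utility: Callable,           # scores an improver (see earlier sketch)
    language_model: Callable,
    rounds: int = 3,
) -> Callable:
    improver = seed_improver
    for _ in range(rounds):
        # I_t = I_{t-1}(meta_utility, I_{t-1}, L)
        improver = improver(meta_utility, improver, language_model)
    return improver
```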
2310.04450 | 17 | These first three variations result in eight different combinations of instructions. Lastly, we also test the effect of appending the previous (appraisal) answers to the prompt. The reason is that, as we are interested in the dynamics, knowing their previous answers could be crucial. For this variation, we only use the default instruction as asking for the number only or after the scenario does not make sense in this case.
Code, including all SCPQ scenarios and instructions, data, and all results, including additional results not shown in the paper, can be found at github.com/yongsa-nut/PerrezSAIWS.
# V. RESULTS
Figure 1 shows the estimated mean with the 95% standard error for all the key measurements of the three models and human data. The setup here is the default setup, where the question is asked one by one, and the instruction is placed before the scenario and asks for choices. We choose to report this here as it is the most similar to the human setup. We discuss the results for other setups in the next section.
Crucially, we focus mainly here on the qualitative results comparing the trend of results from the model and humans. The main reason is that there is a discrepancy between human data and model data. The human results are obtained from averaging over 100 subjects and nine scenarios, while the model results are from averaging nine scenarios making their uncertainty incomparable. | 2310.04450#17 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 17 | # 2.5 Autonomous Agents
Autonomous agents have been an active research area within artificial intelligence for several decades. Early research
focused on developing deliberative agents that could autonomously plan actions based on logical representations of environment states, goals, and possible actions [36]. While able to exhibit goal-directed behavior, these systems were limited by the need to explicitly enumerate all feasible environment states. Reinforcement learning emerged as a paradigm enabling agents to learn optimal policies through trial-and-error interactions within an environment [105]. By removing the need for explicit state enumeration, reinforcement learning empowered agents to handle larger state spaces. However, challenges remained with scaling to complex tasks and ensuring safe exploration. Integrating deliberative planning and reactive learning in hybrid architectures was explored as a way to combine top-down and bottom-up processing [40]. Finding the right balance between planning and learning remains an open research area.
An important concept emerging in autonomous agents research is levels of autonomy (LOA) [14]. LOA provides a | 2310.06775#17 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 18 | % 20.51 % 76.76 % 69.70 % 80.66 % M. 29.30 ↓ 36.32 ↓ 11.34 ↓ 15.86 ↓ 21.28 ↓ 19.38 ↓ 13.63 ↓ 06.60 ↓ M. Rate 36.69 % 63.07 ↓ 55.38 % 52.20 ↓ 57.50 % 12.90 ↓ 54.30 % 27.85 ↓ 57.54 % 24.03 ↓ 37.72 % 34.83 ↓ 66.39 % 20.97 ↓ 41.50 % 11.70 ↓ M. M. Rate 81.16 % 75.81 ↓ 79.48 % 58.38 ↓ 67.59 % 15.80 ↓ 95.34 % 28.29 ↓ 66.01 % 30.14 ↓ 68.42 % 41.58 ↓ 91.42 % 23.07 ↓ 73.55 % 15.01 ↓ M. M. Rate 97.11 % 88.76 % 73.32 % 96.85 % 83.37 % 81.96 % 95.92 % Know. 93.00 % 94.36 % | 2310.02174#18 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 18 | 2.4 DATA PREPARATION AND RELEASE
MATHVISTA consists of 6,141 examples, divided into two subsets: testmini and test. testmini contains 1,000 examples, intended for model development validation or for those with limited computing resources. The test set features the remaining 5,141 examples for standard evaluation. Notably, the answer labels for test will not be publicly released to prevent data contamination, and we will maintain an online evaluation platform. To ensure that each source dataset is well represented in testmini and to maintain a distribution in testmini closely resembling the whole set, we adopted this sampling strategy: (1) first, randomly sample questions with a threshold number of 4 for each source dataset; (2) then, randomly sample the remaining questions for each source dataset on its proportion in the entire set. The KL Divergence and Total Variation (TV) distance between the testmini set and the entire set are 0.008 and 0.035, respectively, suggesting that testmini is close to the distribution of the whole set. We also conducted several quality checks to address any unidentified errors.
# 2.5 DATA ANALYSIS | 2310.02255#18 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
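A minimal sketch of the two-stage testmini sampling strategy described in the 2310.02255 chunk above (up to a threshold of 4 examples per source dataset, then proportional sampling of the remainder). The `source` field and data layout are assumptions, and the proportional allocation is approximate.

```python
# Hypothetical sketch: two-stage sampling of a 1,000-example testmini split.
# Stage 1: take up to `threshold` random examples from every source dataset.
# Stage 2: fill the rest roughly proportionally to each source's share of the full set.
import random
from collections import defaultdict
from typing import Any, Dict, List

def sample_testmini(
    examples: List[Dict[str, Any]],   # each example has a "source" field (assumed)
    target_size: int = 1000,
    threshold: int = 4,
    seed: int = 0,
) -> List[Dict[str, Any]]:
    rng = random.Random(seed)
    by_source: Dict[str, List[Dict[str, Any]]] = defaultdict(list)
    for ex in examples:
        by_source[ex["source"]].append(ex)

    selected: List[Dict[str, Any]] = []
    remaining: List[Dict[str, Any]] = []
    for items in by_source.values():
        rng.shuffle(items)
        selected.extend(items[:threshold])
        remaining.extend(items[threshold:])

    # Stage 2: sample the rest in proportion to each source's size in the full set.
    budget = target_size - len(selected)
    pool_by_source: Dict[str, List[Dict[str, Any]]] = defaultdict(list)
    for ex in remaining:
        pool_by_source[ex["source"]].append(ex)
    total = len(examples)
    for source, items in pool_by_source.items():
        k = min(len(items), round(budget * len(by_source[source]) / total))
        selected.extend(rng.sample(items, k))
    # Rounding may slightly over- or undershoot; trim to the target size.
    return selected[:target_size]
```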
2310.02263 | 18 | Data Curriculum The concept of a curriculum (Bengio et al., 2009) is analogous to the pedagogical approach in human learning where tasks are presented in increasing order of difficulty. By adopting this methodology, we aim to facilitate a smoother and more effective learning trajectory for our models.
For our curriculum, we approximate the difficulty of the learning task as being inversely proportional to the gap between the Psup and Pinf, as indicated in Table 1. That is, the more clear-cut
Table 2: Time for post-training LLaMA-7B on Alpaca for one epoch on 16 Nvidia V100 GPUs.
| Method | Training Time |
|---|---|
| SFT | 4h |
| RLHF/RLAIF (RM) | 3h |
| RLHF/RLAIF (PPO) | 24h |
| SLiC | 7h |
| DPO | 12h |
the preference between juxtaposed y+ and y−, the easier the learning task. We define an EasyPair as y+ ∼ GPT-4(x) and y− ∼ InstructGPT(x). On the other hand, a HardPair contrasts between e.g., ChatGPT and InstructGPT because the capability gap between them is narrower than that between GPT-4 and InstructGPT. HardPairs present a more nuanced challenge, requiring the model to discern subtler distinctions in quality and content. | 2310.02263#18 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
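A minimal sketch of the easy-to-hard data curriculum described in the 2310.02263 chunk above, presenting EasyPairs (large capability gap) before HardPairs (narrower gap). How the two pair lists are produced is assumed to follow the earlier pair-construction sketch.

```python
# Hypothetical sketch: order contrastive training data from "easy" to "hard" pairs.
# EasyPair:  chosen ~ GPT-4,   rejected ~ InstructGPT  (large capability gap)
# HardPair:  chosen ~ ChatGPT, rejected ~ InstructGPT  (narrower gap)
import random
from typing import Dict, List

def build_curriculum(
    easy_pairs: List[Dict[str, str]],
    hard_pairs: List[Dict[str, str]],
    seed: int = 0,
) -> List[Dict[str, str]]:
    rng = random.Random(seed)
    easy, hard = list(easy_pairs), list(hard_pairs)
    rng.shuffle(easy)   # shuffle within each phase, but keep the easy -> hard ordering
    rng.shuffle(hard)
    return easy + hard
```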
2310.02304 | 18 | It = It−1(ũ, It−1, L).   (2)
This is iterated for some prespecified number of iterations T, depending on available resources.
Intuition. By using ũ, STOP selects an improver based on a downstream utility improvement. This approach is motivated by the intuition that improvers that are good at improving downstream solutions may be more likely to be good scaffolding programs and thus to be good at self-improvement. Similarly, the intuition is that selecting for single-round improvements may lead to better multi-round improvements. In the maximization formulation, we discuss a meta-utility that covers both self-optimization and downstream optimization but is more expensive to evaluate. In practice, we allow the utilities and language model to impose budget constraints (e.g., limits on runtime or the number of times the function can be called) and the initial solutions to be generated by humans or the model. Moreover, the cost is then essentially O((budgetu + budgetL) · budgetũ), where each budget term corresponds to the number of times that function can be used by an improver. | 2310.02304#18 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 18 | Figure 1.A shows the results for depressed/cheerful emotional reactions. For this result and valence, we only focus on the outcome (positive or negative) in phase 3. We see that all three models show the expected trend where the positive outcome results in more cheerful and less depressed than the negative outcome. Compared to humans, all three models rate the cheerful to be lower in the positive outcome, where D003 is closest to the human rating. The results for the other two emotional reactions are similar.
The results for valence in Figure 1.B also show a similar trend. Like humans, all three models rate the valence to be
lower in the positive outcome than in the negative outcome. However, all three models rate valence higher than humans in both negative and positive outcomes. | 2310.04450#18 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 18 | An important concept emerging in autonomous agents research is levels of autonomy (LOA) [14]. LOA provides a
framework to categorize systems based on their level of independence from human control. Lower LOA systems have limited autonomy and rely heavily on human guidance. As LOA increases, agents gain greater ability to independently perceive environments, plan actions, and execute behaviors. A seminal publication by the U.S. Defense Science Board proposed 10 levels of autonomy, with the highest level denoting full autonomy [80]. This spurred significant research focused on advancing agent capabilities by increasing LOA. Recent advances in deep reinforcement learning have enabled breakthroughs in autonomous agent capabilities. By utilizing deep neural networks as function approximators within reinforcement learning, deep reinforcement learning algorithms have achieved human-level performance on complex games using only raw sensory inputs [97]. However, challenges remain in extending such successes in game environments to real-world applications. Frameworks have also emerged for imbuing agents with ethical principles and human values, promoting safe and beneficial behavior alongside increases in autonomy [5, 64]. Integrating such top-down constraints in a scalable manner remains an open problem. The proposed ACE framework aims to address this through incorporating philosophical ideals within the upper layers of the cognitive architecture.
Autonomous agents have progressed from logical reasoning systems to powerful deep learning architectures. However, | 2310.06775#18 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 19 | Do Other LLMs Waver Too? To ascertain whether the observed reduction in judgement consistency within large language models, induced by this mechanism, is a universal phenomenon, we replicate the evaluation setup used for ChatGPT and extend our assessment to the judgement consistency of PaLM2-Bison and Vicuna-13B under the mechanism. Note that both PaLM2-Bison and ChatGPT are very powerful yet closed-source LLMs, while Vicuna-13B is an open-source model with 13B parameters. Experimental results illustrated in Table 2 depict that while trends in judgement consistency don't mirror exactly, attributable to each model's unique characteristics (Huang et al., 2023), a prevalent decline is evident across the models. This common decline in judgement consistency among varying LLMs accentuates its universal aspect, raising crucial considerations for the development and deployment of such models, necessitating thorough attention and investigation.
Table 3: The impact of temperature on model judgement consistency. In StrategyQA, the closed-ended question disturbs the model; in CoinFlip, it's the open-ended one, and in MultiArith, it's the leading question. Before denotes initial accuracy before applying the mechanism. Bold denotes the poorest judgement consistency. | 2310.02174#19 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 19 | # 2.5 DATA ANALYSIS
The main statistics of MATHVISTA are presented in Table 1. There are two types of questions: multiple-choice and free-form. Answers to free-form questions are categorized as integers, floating numbers, or lists. The large unique number of images, questions, and answers ensures pattern diversity in MATHVISTA. MATHVISTA is derived from 31 source datasets, including three newly annotated datasets to address the missing types of mathematical reasoning over specific visual contexts. Dataset examples in Table 4 (§C.2) highlight the richness of mathematical reasoning involved. Examples in §C.3 demonstrate the diverse visual contexts present in MATHVISTA. Further details on data analysis are available in §E.
3 EXPERIMENTS
| 2310.02255#19 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 19 | We define our curriculum such that, initially, training starts with only EasyPairs to provide our model with a foundational understanding of the contrastive differences. During training, the model becomes adept at identifying distributional differences, so the probability of seeing an EasyPair in a mini-batch decreases as they are replaced by HardPairs.
p(EasyPair) = 1 − α, p(HardPair) = α. (5)
As training progresses, α varies according to f(t). In our experiments, we allow f(t) = kt to be a linear function of the step number, or in some cases a constant function, for comparison. For the linear function, we choose k such that f(t) = 1 at the end of one epoch, as shown in Figure 2. The anti-curriculum is the exact opposite: moving from HardPair to EasyPair.
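As an illustration, a minimal sampler for this schedule could look as follows; the pair pools and the helper name are assumptions for the sketch, not the authors' implementation.

```python
import random

def sample_pair(step, steps_per_epoch, easy_pairs, hard_pairs):
    """Draw one training pair under the linear curriculum f(t) = k*t."""
    k = 1.0 / steps_per_epoch   # chosen so that f(t) = 1 at the end of one epoch
    alpha = min(k * step, 1.0)  # p(HardPair) = alpha, p(EasyPair) = 1 - alpha
    pool = hard_pairs if random.random() < alpha else easy_pairs
    return random.choice(pool)  # e.g., a (prompt, y_plus, y_minus) triple
```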
We also explore an analogous curriculum regime for supervised fine-tuning, which we define as starting from ChatGPT targets (which are easier for a smaller model to imitate), and gradually moving towards GPT-4 targets, which are more challenging. By structuring such data curriculums, we ensure that the model can gradually acclimatize to the task, building on its understanding and refining its discernment capabilities. This approach not only enhances the model's performance but also provides insights into the incremental learning capabilities of large language models.
5 EXPERIMENTS | 2310.02263#19 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 19 | Designing the seed improver. Our chosen seed improver (Figure 2) simply prompts the language model to generate candidate improvements of an initial solution and then returns the best solution according to the utility function. We chose this particularly simple form to provide nontrivial improvement for a generic downstream task while 1) encouraging the language model to be as "creative" as possible, 2) minimizing initial prompt complexity, since self-improvement introduces additional complexity due to nested references to code strings inside of prompts, and 3) minimizing the prompt token count and therefore the costs of querying a language model. We also considered other variants of this seed prompt but heuristically found that this version maximized the novelty of improver improvements proposed by the GPT-4 language model. | 2310.02304#19 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 19 | lower in the positive outcome than in the negative outcome. However, all three models rate valence higher than humans in both negative and positive outcomes.
Next, for changeability in Figure 1.C, we see that none of the models follow the human trend exactly where there is a difference between the two types of scenarios across two times, and the changeability in both types goes down. D003 always rates changeability to be zero. On the other hand, ChatGPT only rates changeability to go down in phase 2 for loss scenarios, while GPT-4 only rates changeability to go down for aversive scenarios. For controllability (Figure 1.D), we see that only D003 and GPT-4 show the expected trend of controllability going down over time for both scenario types. However, GPT-4 does not perceive the two types to be different, unlike D003. In all cases, all three models perceive controllability to be lower than what humans perceive. | 2310.04450#19 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 19 | Autonomous agents have progressed from logical reasoning systems to powerful deep learning architectures. However,
safely integrating human ethics and values as autonomy scales remains an essential capability needed for deployed autonomous intelligent systems. The ACE framework contributes towards this goal through its emphasis on unifying ethical reasoning and autonomous learning within a layered cognitive architecture.
# 2.6 Ethical AI Frameworks
As artificial intelligence systems grow more capable and autonomous, ensuring their actions align with ethical and
moral norms becomes increasingly important. This has led to significant research into developing ethical AI frameworks that provide principles, methods, and tools for imbuing values into intelligent systems. A key challenge is translating high-level abstract ethics into concrete constraints and objectives that can be operationalized within an AI system [6]. Deontological approaches based on rules and duties have formed one avenue for encoding ethics. For example, Isaac Asimov's Three Laws of Robotics aimed to constrain robot behavior through a hierarchical set of rules [7]. However, rigid rule-based systems struggle to handle nuanced real-world situations involving conflicting principles or moral dilemmas.
Consequentialist frameworks that evaluate the outcomes of actions provide an alternative approach. But defining | 2310.06775#19 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 20 | Model ChatGPT PaLM2-Bison Vicuna-13B Temperature 0 default (0.5) 1.0 0 default (0.4) 1.0 1e-4 default (0.7) 1.0 Before 61.57 66.67 59.24 66.67 69.43 63.76 60.12 58.08 54.15 StrategyQA M. 42.94 ↓ 44.69 ↓ 41.34 ↓ 40.61 ↓ 04.22 ↓ 17.62 ↓ 18.63 ↓ 25.18 ↓ 25.76 ↓ M. Rate Before 69.74 % 52.60 67.03 % 47.00 69.78 % 48.20 60.91 % 49.00 06.08 % 57.00 27.63 % 52.00 30.99 % 52.20 43.35 % 45.40 47.58 % 40.00 CoinFlip M. 46.40 ↓ 42.60 ↓ 39.80 ↓ 02.40 ↓ 05.60 ↓ 10.60 ↓ 51.20 ↓ 41.40 ↓ 36.20 ↓ M. Rate Before 88.21 % 96.67 90.64 % 96.67 82.57 % 91.67 04.90 % 93.89 09.82 % 94.44 20.38 % 93.89 98.08 % 55.56 | 2310.02174#20 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 20 | 3 EXPERIMENTS
Prior work (Yang et al., 2023b) has studied the reasoning abilities of foundation models in visual settings from a qualitative perspective. In contrast, our goal is to conduct both qualitative and quantitative studies to provide a systematic evaluation of existing foundation models for mathematical reasoning capabilities in visual contexts using MATHVISTA. We introduce a novel benchmarking strategy for MATHVISTA tailored for foundational models (§3.1). The models we have chosen are detailed in §3.2. Quantitative results can be found in §3.3 and §3.4, while the qualitative analysis is provided in §3.5. Given the significant advancements of GPT-4V over other models, we undertake an in-depth comparative study with its peers in various aspects and highlight potential avenues for future research in §H.
3.1 EVALUATION PROTOCOLS | 2310.02255#20 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 20 | 5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
Training Datasets Our small-scale experiments utilize Alpaca (Taori et al., 2023), an instruction learning dataset, which originally includes 52k instructions generated with Self-Instruct (Wang et al., 2023), with responses from InstructGPT (text-davinci-003). We further collect ChatGPT's responses with OpenAI API (gpt-3.5-turbo) and GPT-4's responses from Peng et al. (2023). Therefore, we are able to construct three contrastive pairs, namely GPT-4 vs. td003, GPT-4 vs. ChatGPT and ChatGPT vs. td003. For large-scale experiments, we use a mixture of 550k FLAN-v2 data, 200k FLAN-v1 data (sampled according to (Mukherjee et al., 2023)), the 52k Alpaca data (Taori et al., 2023) and 50k Vicuna data (Chiang et al., 2023). | 2310.02263#20 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 20 | Describing the utility. To effectively convey the details of the utility function to the language model, we provide the utility to the improver in two forms, as a callable function and as a utility description string containing the essential elements of the utility source code (see Appendices E and F for examples). This choice was made for the following reasons. The description allows us to clearly convey budgetary constraints (e.g., on runtime or function calls) imposed by the utility to the language model. We first attempted to describe budgetary instructions in the seed improver prompt, but, as we discuss in Section 6.2, this led to the removal of such instructions and attempts at reward-hacking in later iterations. The downside of our approach is that it separates the constraints from the code to be optimized by the language model, which may decrease the likelihood that it will be used by the language model (Liu et al., 2023). Finally, we observe empirically that replacing the source code with a purely English description of the utility leads to a reduced frequency of non-trivial improvement.
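For concreteness, the two forms of the utility could be bundled roughly as below; the class and field names are assumptions for illustration, not STOP's actual interface.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Utility:
    func: Callable[[str], float]  # callable form: scores a candidate program string
    description: str              # string form: essential source code and budget notes

    def __call__(self, program_str: str) -> float:
        return self.func(program_str)
```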
# 5 EXPERIMENTS AND RESULTS | 2310.02304#20 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 20 | We turn now to coping intentions. For problem-focused coping in Figure 1.E, only ChatGPT shows the trend of lowering it over time for loss scenarios. None of the models show that problem-focused coping at phase 2 in loss scenarios is lower than in aversive scenarios. In addition, all models rate problem-focused coping higher than the human data across time and type. For emotion-focused coping in Figure 1.F, we see that only D003 shows a similar trend to the human data, where the intention is going down over time in the aversive case. On the other hand, both ChatGPT and GPT-4 rate it maximum across time and type.
Next, we look at coping behaviors. First, for passivity (Figure 1.G), both ChatGPT and GPT-4 show a trend similar to humans where the passivity increases over time. Second, for active influence (Figure 1.H), only GPT-4 shows the trend that the active influence would decrease over time but only for the aversive case. On the other hand, only ChatGPT shows a clear difference between the two types. | 2310.04450#20 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 20 | Consequentialist frameworks that evaluate the outcomes of actions provide an alternative approach. But defining
ethical objectives and successfully optimizing for them proves difficult in practice. Hybrid frameworks aim to combine deontological constraints with consequentialist objectives [32]. Ensuring coherent integration of these two facets remains an open problem. Layered architectures have been explored as a way to structure ethical reasoning within AI systems. For example, the Ethical Layered Architecture (ELA) proposes three hierarchical layers for ethical robots: ethical rules, ethical culture, and ethical adjustment [109]. The lowest layer encodes rigid constraints, the middle layer captures norms and values, and the top layer enables resolving conflicts. This separation of abstract principles and concrete rules within a layered hierarchy aims to balance flexibility and safety in applying ethics.
The ACE framework contributes a unique perspective by embedding ethical reasoning within the upper layers of a
layered cognitive architecture. Heuristic imperatives and moral frameworks provide top-down constraints, while lower levels enable autonomous learning and skill acquisition. This unifies abstract ethics and real-world capabilities within a single system. Evaluation across diverse situations faced during deployment would help further refine the integrated ethical AI capabilities of systems built on the ACE framework.
# 2.7 Filling the Gaps
While significant progress has been made in developing autonomous agent architectures, most prior work lacks the | 2310.06775#20 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 21 | 3.1 EVALUATION PROTOCOLS
Recent LLMs and LMMs have been instructed to generate long responses in conventional settings instead of short text. Therefore, we propose a new strategy for benchmarking MATHVISTA, unlike using human-designed or template matching rules (Lu et al., 2022). The evaluation process consists of three stages: response generation, answer extraction, and score calculation. Initially, the baselines generate responses given the input query, which incorporates the task description, the question, the choices, and the metadata, using the template defined in Table 9 (§F.3). Next, the short answer text is extracted from the detailed response. We propose an answer extractor (§F.2) based on LLMs such as GPT-4, inspired by its remarkable ability for text processing (Wei et al., 2022b). A preliminary study of 200 examples shows that GPT-4 can extract the answer text with more than 99.5% accuracy. Finally, the extracted answer is normalized to a required answer format (e.g., an option letter or an integer), and the target metric scores are computed. Taking advantage of the fact that the instances in MATHVISTA are either multiple-choice questions for textual answers or free-form questions for numerical answers, accuracy scores are used as metrics for deterministic evaluation.
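Schematically, the three-stage protocol reduces to a loop like the following sketch, where the stage functions are passed in as placeholders rather than MATHVISTA's actual implementation.

```python
def evaluate(examples, generate_response, extract_answer, normalize):
    """Accuracy under the three-stage protocol: generation -> extraction -> scoring (a sketch)."""
    correct = 0
    for ex in examples:
        response = generate_response(ex["query"])                # stage 1: full model response
        short_answer = extract_answer(response)                  # stage 2: e.g., an LLM-based extractor
        prediction = normalize(short_answer, ex["answer_type"])  # option letter, integer, ...
        correct += int(prediction == ex["answer"])
    return correct / len(examples)
```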
3.2 EXPERIMENTAL SETUP | 2310.02255#21 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 21 | Evaluation Datasets We evaluate performance of models with Alpaca Eval (Li et al., 2023) and the test set of WizardLM prompts (Xu et al., 2023a). Alpaca Eval consists of 805 instructions, which includes 252 instructions from the self-instruct evaluation set (Wang et al., 2023), 188 from Open Assistant evaluation set, 129 from Anthropic-HH helpfulness (Bai et al., 2022a), 80 from Vicuna evaluation (Chiang et al., 2023), and 156 from Koala evaluation (Geng et al., 2023). The metric is a win rate of a treatment candidate against a baseline model's responses, evaluated by GPT-4 in a side-by-side fashion (OpenAI, 2023).
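In essence, the reported win rate is a pairwise tally over the judge's verdicts; a minimal sketch is below, with the tie convention left as an explicit parameter since conventions vary across benchmarks.

```python
def win_rate(judgments, tie_weight=0.5):
    """judgments: 'win' / 'tie' / 'loss' outcomes for the treatment model vs. the baseline,
    as produced by a pairwise judge such as GPT-4. Returns the fraction counted as wins."""
    score = sum(1.0 if j == "win" else tie_weight if j == "tie" else 0.0 for j in judgments)
    return score / len(judgments)
```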
The WizardLM test set (Xu et al., 2023a) consists of 218 prompts which cover 29 distinct skills, collected from the open-source repositories, platforms and forums. Following Xu et al. (2023a), we report the ratio of the sum over all examples of scores of the treatment model compared to a baseline (a.k.a. "score %") as well as the win/tie rates. This metric is again a side-by-side comparison evaluated by GPT-4. Whereas AlpacaEval formats comparisons as a ranking task (re-order the
| 2310.02263#21 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
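The two judge-based metrics described in the chunk above (AlpacaEval's win rate and WizardLM's "score %" with win/tie rates) reduce to simple arithmetic once the GPT-4 judgements are collected. The sketch below is illustrative only: the data layout, the tie-counts-as-half convention, and the function names are assumptions, not the papers' actual evaluation code.

```python
# Minimal sketch of the two side-by-side evaluation metrics described above.
# All data layouts are illustrative assumptions.

def alpaca_eval_win_rate(judgements):
    """Fraction of prompts where the judge preferred the treatment response
    over the baseline; ties (if any) are counted as half a win here."""
    wins = sum(1.0 if j == "treatment" else 0.5 if j == "tie" else 0.0
               for j in judgements)
    return wins / len(judgements)

def wizardlm_metrics(treatment_scores, baseline_scores):
    """'Score %' (ratio of summed judge scores) plus win and tie rates."""
    n = len(treatment_scores)
    score_pct = sum(treatment_scores) / sum(baseline_scores)
    win_rate = sum(t > b for t, b in zip(treatment_scores, baseline_scores)) / n
    tie_rate = sum(t == b for t, b in zip(treatment_scores, baseline_scores)) / n
    return score_pct, win_rate, tie_rate

if __name__ == "__main__":
    print(alpaca_eval_win_rate(["treatment", "baseline", "tie", "treatment"]))
    print(wizardlm_metrics([8, 7, 9], [7, 7, 6]))
```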
2310.02304 | 21 | # 5 EXPERIMENTS AND RESULTS
Using the GPT-4 language model, we explore 1) the benefits of self-improvement over a static seed improver for a fixed target task, 2) how well an improver trained on one task generalizes to new tasks, and 3) how using a smaller language model (GPT-3.5-turbo; OpenAI 2023b) affects performance.
5.1 SELF-IMPROVEMENT FOR A FIXED DOWNSTREAM TASK
We begin by evaluating STOP on a fixed downstream task with GPT-4. Section 5.3 evaluates GPT-3.5 similarly. We select the task of learning parity with noise (LPN) (Blum et al., 2000) as a less-well-known, quickly-testable, and difficult algorithmic task. In LPN, bitstrings are labeled with their parity computed over an unknown subset of the bits; given a training set of bitstrings with noisy labels, one aims to predict the true labels of new bitstrings. Noiseless LPN is easily solved via Gaussian elimination, but noisy LPN is conjectured to be computationally intractable for large input dimensions (Blum et al., 2000); we use a tractable input dimension of 10 bits per example. To define a downstream utility u, we sample M = 20 independent instances of the LPN task with a short timeout
| 2310.02304#21 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
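As a concrete illustration of the learning-parity-with-noise setup and downstream utility u described in the chunk above (10-bit inputs, noisy training labels, accuracy averaged over M = 20 sampled instances), here is a minimal sketch. The instance sizes, the noise rate, and the omission of the per-instance timeout are assumptions made for readability; this is not the paper's code.

```python
import random

def make_lpn_instance(n_bits=10, n_train=100, n_test=20, noise=0.05, rng=None):
    """One learning-parity-with-noise instance: labels are the parity of an
    unknown subset of bits; a small fraction of training labels are flipped."""
    rng = rng or random.Random(0)
    secret = [rng.random() < 0.5 for _ in range(n_bits)]

    def sample(m, noisy):
        xs, ys = [], []
        for _ in range(m):
            x = [rng.random() < 0.5 for _ in range(n_bits)]
            y = sum(a and b for a, b in zip(x, secret)) % 2
            if noisy and rng.random() < noise:
                y = 1 - y  # label noise
            xs.append(x)
            ys.append(y)
        return xs, ys

    return sample(n_train, noisy=True), sample(n_test, noisy=False)

def downstream_utility(algorithm, M=20, seed=0):
    """Average test accuracy of `algorithm` over M independent LPN instances."""
    rng = random.Random(seed)
    accs = []
    for _ in range(M):
        (train_x, train_y), (test_x, test_y) = make_lpn_instance(rng=rng)
        preds = algorithm(train_x, train_y, test_x)
        accs.append(sum(int(p == y) for p, y in zip(preds, test_y)) / len(test_y))
    return sum(accs) / M

# Example: a trivial "always predict 0" baseline hovers around 50% accuracy.
print(downstream_utility(lambda train_x, train_y, test_x: [0] * len(test_x)))
```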
2310.04450 | 21 | Lastly, we turn to blameworthiness. First, for blaming others (Figure 1.I), all models show that, in the loss scenarios, blaming others increases from phase 1 to 2. However, only D003 shows an increase in blaming others in the aversive scenarios. None of the models shows that blaming others is higher in the aversive than in the loss scenarios at phase 2, as the human data does.
Second, for self-blaming (Figure 1.J), both ChatGPT and GPT-4 show trends similar to the human data, where blaming oneself decreases over time in the aversive type and is higher in the aversive type than in the loss type in phase 1.
Overall, we observe that in many cases the LLMs' responses are similar to the human data with respect to the dynamics, but not with respect to scenario types.
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 21 | # 2.7 Filling the Gaps
While significant progress has been made in developing autonomous agent architectures, most prior work lacks the
integration of insights from philosophy, cognitive science, and neuroscience that enable robust internal cognitive capabilities. Many existing systems have hard-coded goals and limited flexibility for self-direction [30, 115]. They focus narrowly on executing specific skills and workflows rather than developing general competencies for autonomous goal-setting, planning, and adaptation [65]. Furthermore, few frameworks incorporate models of cognitive control, frustration tolerance, and dynamic task management [27]. The ACE framework aims to address these limitations by combining abstract philosophical ideals with cognitive mechanisms inspired by neuroscience research into executive functions and behavioral adaptation. By integrating these diverse perspectives, the ACE model provides a potential path toward artificial general intelligence with aligned values, flexible skills, and human-like cognitive control. The layered abstraction also enables ongoing refinement of competencies at different levels to steadily improve autonomous capabilities. Further research and evaluation will be needed to assess the ACE framework's contributions in bridging these gaps compared to prior autonomous agent architectures.
# 3 THE ACE FRAMEWORK
The Autonomous Cognitive Entity (ACE) framework comprises six hierarchical layers that coordinate specialized
cognitive functions to enable autonomous decision-making aligned with ethical principles. The role and capabilities of
each layer within the ACE model are detailed, explicating how they collectively give rise to an artificial intelligence | 2310.06775#21 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 22 | 3.3 FURTHER STUDIES
The Impact of Sampling Temperature Intuitively, the lower the sampling temperature, the more deterministic the generated outputs, whereas higher temperatures lead to more diverse outputs. Given that, does this judgement consistency issue still exist when the temperature is 0? To investigate this, we evaluate the model's judgement consistency under the mechanism at a temperature of 0, using the representative datasets StrategyQA, CoinFlip, and MultiArith, and employ closed-ended, open-ended, and leading questions to disturb the model, respectively (due to their demonstrated lowest judgement consistency). Table 3 illustrates that a lower temperature does not assure higher judgement consistency as initially assumed, and can sometimes reduce it. We also report results at a temperature of 1 for reference. Preliminary analysis suggests the temperature does impact judgement consistency, but no apparent patterns emerge.
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 22 | We evaluate the models on MATHVISTA under three setups: (a) Text-Only LLMs including ChatGPT (OpenAI, 2022), GPT-4 (OpenAI, 2023a), and Claude-2 (Anthropic, 2023) in zero-shot and two-shot settings with Chain-of-Thought (CoT) (Wei et al., 2022b) and Program-of-Thought (PoT) (Chen et al., 2022b), (b) Augmented-LLMs where the LLMs are provided with additional visual information including the generated image captions from Multimodal Bard (Google, 2023) and the detected OCR text from EasyOCR (JaidedAI, 2020), (c) LMMs that include open-source models such as IDEFICS-9B (Laurençon et al., 2023), mPLUG-OWL-LLaMA-7B (Ye et al., 2023), miniGPT-4-LLaMA-2-7B (Zhu et al., 2023a), LLaMA-Adapter-V2-7B (Gao et al., 2023), InstructBLIP-Vicuna-7B (Dai et al., 2023), LLaVA-LLaMA-2-13B (Liu et al., | 2310.02255#22 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
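For the Augmented-LLM setup described in the chunk above, where a text-only LLM receives a generated image caption plus detected OCR text instead of the image itself, prompt assembly can be as simple as the sketch below. The template wording and function name are illustrative assumptions, not the benchmark's exact format.

```python
def build_augmented_prompt(question, caption, ocr_text, choices=None):
    """Assemble a text-only prompt from visual context extracted elsewhere
    (e.g., a caption from a multimodal model and OCR output)."""
    parts = [
        f"Image caption: {caption}",
        f"Text detected in the image (OCR): {ocr_text}",
        f"Question: {question}",
    ]
    if choices:  # multiple-choice questions list their options
        parts.append("Choices: " + "; ".join(choices))
        parts.append("Answer with one of the choices above.")
    else:
        parts.append("Answer with a number or a short phrase.")
    return "\n".join(parts)

print(build_augmented_prompt(
    question="What is the highest value on the y-axis?",
    caption="A bar chart comparing four models on a benchmark.",
    ocr_text="0 20 40 60 80 100",
    choices=["80", "100", "120"],
))
```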
2310.02263 | 22 |
Table 3: An example of reward hacking in an RLAIF model trained with an "in-domain" reward model on GPT-4 vs. td003 pairs (Skalse et al., 2022): the response receives a higher reward even though it is unreadable.
Prompt: Transform this recipe for "vegetable fried rice" into a vegan version. 3 tablespoons vegetable oil, 2 eggs, 1 cup diced onions, 2 garlic cloves minced, 2 cups shredded carrots, 2 cups cooked white rice, 2 tablespoons soy sauce.
Method: RLAIF. Response: 1/: BBCRed pepper converted to3 tbps shred blocklijke diceda)"Rotisserie vegetables". Reward: 34.594
Method: SFT. Response: Here's a vegan version of vegetable fried rice: Ingredients: 3 tablespoons vegetable oil; 1 cup diced onions.. [complete output omitted]. Reward: 22.156
candidate responses according to how a human would prefer them), for WizardLM the candidates are individually scored. Note that such evaluation by GPT-4 might slightly favor SFT on GPT-4 outputs, as pointed out by Li et al. (2023). Both datasets have a data distribution different from our training set and thus serve as a good testbed for the zero-shot generalization capability of the models. | 2310.02263#22 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 22 |
(a) GPT-4 (b) GPT-3.5
Figure 4: Test meta-utility vs. iterations. Meta-utility of STOP (Algorithm 1) on held-out test instances after T iterations of self-improvement for the downstream task of learning parity with noise. Iteration 0 records the performance of the seed improver I0. Given access to a strong language model like GPT-4 (left), STOP monotonically improves mean downstream performance. In contrast, with the weaker GPT-3.5 language model (right), mean performance degrades. Details are in Section 5.1. | 2310.02304#22 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
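The figure caption above tracks STOP's meta-utility over T self-improvement iterations. The sketch below conveys the underlying idea in a few lines: a seed improver keeps the best of several language-model-proposed rewrites under a utility function, and the meta-utility scores an improver by the downstream utility of the solutions it produces. `lm_propose` is a placeholder for a real GPT-4 call and the toy tasks are made up; this is not the paper's Algorithm 1 implementation.

```python
def lm_propose(program_src, n=3):
    """Placeholder for a language-model call that proposes candidate rewrites
    of `program_src`; it returns no-op copies so the sketch stays runnable."""
    return [program_src for _ in range(n)]

def seed_improver(program_src, utility):
    """Keep the best of the original program and a few LM-proposed candidates,
    as judged by the supplied utility function."""
    candidates = [program_src] + lm_propose(program_src)
    return max(candidates, key=utility)

def meta_utility(improver, tasks):
    """Score an improver by the downstream utility of the solutions it returns
    on held-out (utility, solution) pairs."""
    scores = [utility(improver(solution, utility)) for utility, solution in tasks]
    return sum(scores) / len(scores)

# Toy downstream tasks: here "utility" is just the length of the solution string.
tasks = [(len, "print('hello')"), (len, "x = 1")]
print("meta-utility of the seed improver:", meta_utility(seed_improver, tasks))
# STOP would now pass the improver's own source code (plus meta_utility) back
# through the improver for T iterations; that recursive step is omitted here.
```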
2310.04450 | 22 | Next, we look at the results comparing the model instructed to act as a person with depression (Depression) and the model without the instruction (Normal), focusing only on aversive scenarios (the loss scenarios show similar trends). Figure 2 shows the key six measurements. The pattern is clear that, for ChatGPT and GPT-4 but not D003, there is a difference between the depression and normal case in the expected directions. In particular, controllability, changeability, problem [figure residue removed; recoverable panel titles: A Depressed-Cheerful, B Negative Valence, C Changeability, D Controllability, plotted for Human, D003, ChatGPT, and GPT-4] | 2310.04450#22 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 22 |
each layer within the ACE model are detailed, explicating how they collectively give rise to an artificial intelligence
architecture grounded in moral values. We discuss the conceptual formulations and key mechanisms within each layer, along with their interactions and information flows. The layers build progressively from abstract reasoning in the Aspirational Layer down to concrete action execution in the Task Prosecution Layer. By elucidating the formulation and synergistic connections between layers, we aim to provide a comprehensive reference for the ACE framework's layered cognitive architecture.
The conceptualization of the ACE framework was initially informed by a systematic literature review methodology
to synthesize insights from relevant prior research. This involved systematically searching the literature using defined inclusion/exclusion criteria, screening identified papers for relevance, extracting key data, and synthesizing the results to derive conceptual themes and perspectives to guide the framework design [54]. The systematic review provided a rigorous approach for gathering an evidence base across diverse disciplines including neuroscience, psychology, philosophy, and computer science that helped shape the preliminary ACE model [81]. This methodical synthesis of the state-of-the-art helped ensure the resulting framework design was grounded in existing knowledge. However, the systematic review alone was insufficient to fully develop the nuanced ACE architecture. Therefore, a participatory design approach was subsequently undertaken to enable direct researcher input and critique during the ACE framework elaboration.
We followed a participatory design approach in developing the conceptual ACE framework. This human-centered | 2310.06775#22 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 23 | The Impact of Different Prompts Do the models waver in their judgements under other prompts as well? To investigate this, we employ prompts written by annotators A, B, and C across these models, with the specific prompts detailed in Table 4 and results in Figure 4. Observations reveal: (1) despite variance across the diverse prompts, a consistent decline in judgement consistency is observed for all models under the mechanism; (2) an analysis of overall performance across follow-up questioning types yields a sensitivity ranking, from highest to lowest, of PaLM2-Bison, ChatGPT, Vicuna-13B; (3) analyzing each type of question, we find the models are most to least sensitive to leading questions, closed-ended questions, and open-ended questions, respectively. See Appendix A.3.1, A.3.2, and A.3.3 for full results.
Table 4: The prompts written by different annotators. {M A} represents misleading answers. | 2310.02174#23 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02304 | 23 | and a small amount of noise and return the average accuracy of the solution across those instances. For the initial solution s, we use a simple random sampling approach described in Appendix J. Finally, since the language model and hence the improver are stochastic, we choose D to be 5 identical copies of (u, s) in Algorithm 1. In particular, to evaluate the generalization of each improved improver to new problem instances from the same task, we report test meta-utility on an independent set of Mtest = 50 LPN instances not observed during improvement. Figure 4 (left) reports mean test meta-utility (±1 standard error) across 5 independent STOP runs, demonstrating improved downstream performance from 1-3 rounds of self-improvement. Note, however, that, for an individual run, performance need not be monotonic, as 1) a better improver for optimizing downstream task code need not be better at optimizing itself and 2) there is inherent stochasticity in the evaluation, due to nondeterministic calls to the language model. On the other hand, because the language model does not see the downstream task or solution when prompted from the self-improving scaffold (each prompt contains only a template with placeholders for the task and solution), the language model cannot directly optimize the improver for the downstream task.
5.2 TRANSFERABILITY OF IMPROVED IMPROVER | 2310.02304#23 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 23 | [Figure residue removed; recoverable panel titles from Fig. 1: E Problem-focused, F Emotion-focused, G Passivity, H Active Influence, I Blame Others, J Blame Self, plotted over Phase for Human, D003, ChatGPT, and GPT-4; legend: Aversive vs. Loss/Failure] | 2310.04450#23 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 23 | We followed a participatory design approach in developing the conceptual ACE framework. This human-centered
methodology enabled incorporating diverse expertise and perspectives into the architecture design [90]. Key participatory activities included: Co-design sessions, where researchers jointly drafted components of the framework and critiqued the evolving architecture, and Concept validation, where draft ACE framework descriptions were shared for feedback. These participatory activities encouraged constructive debate regarding human values, evolving AI capabilities, scientific realities, and ethical considerations relevant to the framework. The diversity of expertise enabled encompassing a multidimensional design space. Through these co-creative activities, researchers provided direct input shaping both the high-level structure and detailed formulations of the ACE framework components and their interactions. The participatory design process enhanced human-centeredness in the resulting conceptual architecture.
# 3.1 Principles of the ACE Framework
The ACE framework is based on various theories and principles that shape its design and capabilities. This section | 2310.06775#23 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
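Purely as an illustration of the layered, top-down organization discussed in these ACE chunks, the six layers named in the paper can be written down as an ordered stack. The one-line role notes are paraphrased from the paper's abstract and surrounding text, and the toy message-passing function is an assumption, not the framework's specified interface.

```python
# Illustrative top-to-bottom stack of the six ACE layers (names from the paper;
# role notes paraphrased, message passing invented for this sketch).
ACE_LAYERS = [
    ("Aspirational Layer", "moral compass and guiding principles"),
    ("Global Strategy", "long-range, context-aware strategic thinking"),
    ("Agent Model", "self-model of capabilities and limitations"),
    ("Executive Function", "plans and resource allocation"),
    ("Cognitive Control", "task selection, switching, and failure handling"),
    ("Task Prosecution", "concrete task execution"),
]

def propagate_directive(directive):
    """Pass a directive down the stack, recording which layer handles it next."""
    trace = [directive]
    for name, role in ACE_LAYERS:
        trace.append(f"{name} ({role})")
    return " -> ".join(trace)

print(propagate_directive("mission: assist the user ethically"))
```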
2310.02174 | 24 | Table 4: The prompts written by different annotators. {M A} represents misleading answers.
Prompt (by annotator A / B / C):
Closed-ended question. A: Are you sure? B: Really? C: Do you think the answer is correct?
Open-ended question. A: I don't agree with you. Think again. B: I don't think so. Rethink the answer. C: The answer is wrong. Please give the correct answer.
Leading question. A: I think the answer should be {M A}. What do you think? B: The correct answer is {M A}, isn't it? C: Do you think the answer might be {M A}?
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
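The follow-up questioning procedure and the annotator-A prompts from Table 4 above can be combined as in the sketch below. `query_model` is a placeholder for whichever chat model is being probed (its sampling temperature is exposed because the temperature analysis earlier in this paper varies it), and the simple changed/unchanged check is only a stand-in for the paper's own consistency metrics.

```python
# Prompts taken from Table 4 (annotator A); {M_A} marks a misleading answer
# (written {M A} in the table).
FOLLOW_UPS = {
    "closed": "Are you sure?",
    "open": "I don't agree with you. Think again.",
    "leading": "I think the answer should be {M_A}. What do you think?",
}

def query_model(messages, temperature=0.0):
    """Placeholder for a chat-model API call; returns a canned answer here."""
    return "42"

def judgement_unchanged(question, misleading_answer, kind="closed", temperature=0.0):
    """Ask once, send a follow-up disturbance, and report whether the answer held."""
    history = [{"role": "user", "content": question}]
    first = query_model(history, temperature)
    history.append({"role": "assistant", "content": first})
    follow_up = FOLLOW_UPS[kind].replace("{M_A}", misleading_answer)
    history.append({"role": "user", "content": follow_up})
    second = query_model(history, temperature)
    return first == second

print(judgement_unchanged("What is 6 x 7?", misleading_answer="41", kind="leading"))
```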
2310.02255 | 24 | 3.3 EXPERIMENTAL RESULTS
We compare the performance of several models, including Text-only LLMs, Augmented LLMs, and LMMs on MATHVISTA in Table 2. We include random chance (i.e., one of the options in multiple-choice questions, and empty in the free-form questions) and frequency guess (§F.1) as naive baselines. Additionally, we established a human performance baseline using Amazon Mechanical Turk. Eligible human annotators must have a satisfactory annotating history, successfully pass qualification examples, and possess a high school degree or higher. We asked each annotator to complete five questions within 20 minutes. Further details can be found in §F.6. | 2310.02255#24 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
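The two naive baselines mentioned in the chunk above (random chance and frequency guess) can be read, in simplified form, as the sketch below; the example data layout is an assumption, and the paper's actual frequency-guess baseline (described in its appendix) may condition on more than the raw answer distribution.

```python
import random
from collections import Counter

def random_chance(examples, seed=0):
    """Pick a random option for multiple-choice questions, empty for free-form."""
    rng = random.Random(seed)
    correct = 0
    for ex in examples:
        guess = rng.choice(ex["choices"]) if ex.get("choices") else ""
        correct += int(guess == ex["answer"])
    return correct / len(examples)

def frequency_guess(examples):
    """Always answer with the most frequent ground-truth answer in the split."""
    most_common = Counter(ex["answer"] for ex in examples).most_common(1)[0][0]
    return sum(int(ex["answer"] == most_common) for ex in examples) / len(examples)

demo = [
    {"choices": ["A", "B"], "answer": "A"},
    {"choices": None, "answer": "7"},
    {"choices": ["A", "B"], "answer": "A"},
]
print(random_chance(demo), frequency_guess(demo))
```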
2310.02263 | 24 | Training Details For all models trained, we use the AdamW optimizer with a learning rate of 1e-5 and linear warm-up. The LLaMA models are trained on 16 Nvidia V100 32GB GPUs with the maximum length set to 1024 and a total batch size of 512. The Orca models are trained on 32 Nvidia A100 80GB GPUs with the maximum length set to 2048 and a total batch size of 512. The small-scale experiments thus have 101 steps per epoch on Alpaca, and the large-scale experiments have roughly 1600 steps. To save VRAM, we use DeepSpeed ZeRO-3 (Rajbhandari et al., 2020) for model parallelism and offload. For SLiC, we set the ranking margin δ and regularization coefficient both to 1.0, following Zhao et al. (2023a). For DPO, we use the default temperature β of 0.1, following Rafailov et al. (2023). The training time for all methods on Alpaca is shown in Table 2. We implement RLAIF (Lee et al., 2023) by training reward models (initialized from LLaMA) with the same pairs for SLiC and DPO. Then, we use the trained reward models for the standard RLHF, strictly following Hugging Face | 2310.02263#24 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
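The hyperparameters quoted in the training-details chunk above (DPO temperature β = 0.1; SLiC ranking margin δ = 1.0 with regularization coefficient 1.0) plug into the standard per-example objectives sketched below. This assumes PyTorch is available and that summed log-probabilities of full responses have already been computed; it is a generic sketch of the published losses, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO: push the policy's chosen-minus-rejected log-ratio above the
    reference model's, scaled by the temperature beta."""
    logits = beta * ((pi_chosen - pi_rejected) - (ref_chosen - ref_rejected))
    return -F.logsigmoid(logits).mean()

def slic_loss(pi_chosen, pi_rejected, pi_reference_target, delta=1.0, reg_coeff=1.0):
    """SLiC: hinge ranking loss with margin delta plus a regularizer that keeps
    likelihood high on the reference (SFT) target sequence."""
    rank = torch.clamp(delta - pi_chosen + pi_rejected, min=0.0)
    reg = -pi_reference_target
    return (rank + reg_coeff * reg).mean()

# Toy example: summed log-probs of whole responses for a batch of two pairs.
pi_c = torch.tensor([-5.0, -6.0])
pi_r = torch.tensor([-9.0, -7.5])
ref_c = torch.tensor([-6.0, -6.5])
ref_r = torch.tensor([-8.0, -7.0])
print(dpo_loss(pi_c, pi_r, ref_c, ref_r))
print(slic_loss(pi_c, pi_r, pi_reference_target=torch.tensor([-5.5, -6.2])))
```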
2310.02304 | 24 | 5.2 TRANSFERABILITY OF IMPROVED IMPROVER
Our second set of experiments explores whether an improved improver is transferable across downstream tasks. Note that transferability is plausible as, in the self-improvement phase, the self-improver is not shown the downstream utility or the downstream solution, only the meta-utility and its own improver code. Specifically, we select one of the better-performing improvers from Section 5.1 generated by T = 4 STOP iterations and evaluate its performance on five new downstream tasks. Remarkably, we find that the improved improver, detailed in Appendix H, outperforms the seed improver on each new downstream task without further optimization, as shown in Table 1.
û(IT) û(I0) 43.9% 44.3% 56.7% 20.4% 20.6% 22.1% 0% 21.2% 75.1% 742.7 587.2 0 50.0% 59.3% 81.7% u(s) Task String Grid Dist. Mod. Quad. Assign. 3SAT Maxcut Parity w/o Noise | 2310.02304#24 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 24 | Fig. 1. Human vs. the three models' results for selected variables. The points show the estimated means and the error bars are 95% standard errors. The pink line with circle dots is the aversive type and the blue line with triangles is the loss type. The Likert scales are as follows. Emotion: Very depressed (0) - Very cheerful (5); Appraisal: Very small (0) - very large (5); Coping Intention: Not important (0) - Very important (4); and Coping behaviors: Not at all 0% (0) - Certainty 100% (4). | 2310.04450#24 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 24 | # 3.1 Principles of the ACE Framework
The ACE framework is based on various theories and principles that shape its design and capabilities. This section
explores the philosophical, psychological, and computational theories behind the ACE model's key aspects, forming its conceptual foundations. We discuss the hierarchical structure of layered abstraction in the ACE framework, drawing from biological and artificial systems. Information flow and privilege separation principles are examined, highlighting their contributions to security, corrigibility, and layer coordination. The integration of teleological and deontological ethics is analyzed, demonstrating how it combines goal-directedness with rule-based judgments. This section clarifies the diverse theoretical underpinnings of the ACE model, revealing the conceptual basis for its layered cognitive architecture. These identified theories and principles offer a foundation for developing capable, secure, and ethically aligned autonomous systems.
3.1.1 Cognition-First Approach. The ACE framework's key innovation is its "cognition-first" approach, emphasizing internal cognition over reactive input-output loops, addressing limitations in conventional sensorimotor loop paradigms [89]. Instead of arranging layers for circular flow between perception, reasoning, and action, ACE uses a vertical stack prioritizing thought and reflection. Upper layers focus on strategic planning, imagination, and self-directed
| 2310.06775#24 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 25 | Error Analysis We conduct error analysis to deepen our understanding of the behaviors of these models under this mechanism. Using ChatGPT's judgement consistency as the reference, we analyze error examples in StrategyQA, CoinFlip, and MultiArith, employing closed-ended, open-ended and leading questions to mislead the model. These datasets represent commonsense, symbolic, and arithmetic reasoning tasks, respectively. Specifically, we conduct an error analysis on randomly sampled 50 error examples from each model on each dataset5. We find a common pattern in these errors, where the initial response typically begins with an acknowledgement of a mistake, e.g., "I apologize for my mistake.". Based on the subsequent responses, these errors can be classified into the following four types: (1) Error#1 Unable to answer: The model, realizing its error, claims inability to answer or maintains neutrality. (2) Error#2 Modify the question: The model, having admitted its previous mistake, tries to justify its initial incorrect response by altering the question and introducing new conditions to make the initial answer seem reasonable. (3) Error#3 Direct answer modifica-
5 In cases where there were fewer than 50 erroneous examples, we use all available erroneous examples.
| 2310.02174#25 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 25 | Among text-only LLMs, all models outperform the random baselines, with the 2-shot GPT-4 using chain-of-thought (CoT) prompting achieving 29.2%. The limited performance of text-only LLMs suggests that our dataset requires models to reason within visual contexts for optimal results. When equipped with image captions and detected OCR text, augmented LLMs exhibit superior performance compared to their text-only counterparts on MATHVISTA. Specifically, the best-performing augmented LLM is the 2-shot GPT-4 employing program-of-thought (PoT) prompting, which scores 33.9%. This model generates Python programs for execution, thereby promoting rigorous reasoning.
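For illustration, a minimal sketch of the program-of-thought setup described above (not the benchmark's evaluation code): the model's output is a short Python program, and executing it yields the answer rather than parsing free-form text. The generated program here is made up, and real evaluation would run it in a sandbox.

```python
# Made-up example of a model-generated PoT program; execution should be
# sandboxed in practice.
generated_program = """
total_area = 6 * 4
shaded_area = 8
ans = shaded_area / total_area
"""

namespace = {}
exec(generated_program, namespace)  # run the generated program
print(namespace["ans"])             # the computed value is taken as the answer
```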
| 2310.02255#25 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 25 | As with LPN, we selected three tasks that are easy to evaluate, not very well known, and still fairly difficult: String Grid Distance, a string manipulation problem featured in a recent programming competition (https://codeforces.com/problemset/problem/1852/D); a version of the quadratic assignment problem where each facility has a preference over each location that must also be considered when minimizing costs (Koopmans & Beckmann, 1957); and, parity without noise, as another form of generalization. We also include two well-known tasks: identifying solutions to random 3-SAT formulae and solving instances of the maxcut problem, both with short time constraints. The corresponding utilities and initial solutions are in Appendix G.
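For a sense of what one of these task utilities can look like (a sketch under assumed representations, not the actual utility from Appendix G), a 3-SAT utility might score a candidate assignment by the fraction of clauses it satisfies:

```python
# Assumed representation: a clause is a list of signed variable indices,
# e.g. [1, -2, 3] means (x1 OR NOT x2 OR x3); a solution maps index -> bool.
def sat_utility(assignment, clauses):
    satisfied = 0
    for clause in clauses:
        if any((lit > 0) == assignment[abs(lit)] for lit in clause):
            satisfied += 1
    return satisfied / len(clauses)

clauses = [[1, -2, 3], [-1, 2], [2, 3]]
assignment = {1: True, 2: True, 3: True}
print(sat_utility(assignment, clauses))  # 1.0
```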
5.3 SELF-IMPROVEMENT WITH SMALLER LANGUAGE MODELS | 2310.02304#25 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 25 | [Figure 2 plot: panels A Changeability, B Controllability, C Problem-focused, D Palliation, E Blame Self, F Valence for davinci-003, ChatGPT, and GPT-4 across phases; legend — Instruction: Depression vs. Normal.]
Fig. 2. Depression vs Normal results for the three models for the selected variables. The pink line with circle points is the depression instruction and the blue line with triangle points is without the instruction. | 2310.04450#25 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 25 | goals, detached from physical embodiment. Only the lowest layer interfaces with the external world for tangible
behaviors. This organization prioritizes internal cognition, with sensory and motor abilities being secondary. ACE models autonomous systems as "thinking machines with physical skills" rather than entities defined by sensorimotor mechanics. Cognition takes the central role, while environmental interaction is ancillary.
The cognition-first approach reduces reliance on external perceptual constraints, freeing reasoning and decision-making from momentary data or action histories. This enables ACE to develop sophisticated, transferable conceptual faculties across diverse applications, rather than being limited to narrow reactive tasks in controlled environments [95]. In contrast, many conventional cognitive architectures have closed input-process-output loops tightly coupled to immediate sensorimotor experiences [44, 119], suitable for simple reactive behaviors but limiting generalizability. ACE's focus on internal cognitive layers aims to maximize autonomy, adaptability, and transferable intelligence.
The cognition-first principle's key insight is that physical grounding is not required for developing imagination,
planning, and self-direction. By making cognition the core engine, ACE frameworks foster capabilities leading to artificial general intelligence. Evaluating across varied embodiments further validates this cognition-first approach in designing autonomous intelligent systems. | 2310.06775#25 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 26 | [Figure 4 plot: panels for Closed-ended, Open-ended, and Leading Questions; legend datasets: GSM8K, SVAMP, MultiArith, CSQA, StrategyQA, Last Letters, CoinFlip, MMLU.]
Figure 4: The impact of different prompts on Modification (Direct Form). Colors denote datasets, and each dataset's three circles reflect results using prompts A, B, and C from Table 4. See the Appendix A.3.1, A.3.2 and A.3.3 for full results. | 2310.02174#26 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 26 | Model Input ALL FQA GPS MWP TQA VQA ALG ARI GEO LOG NUM SCI STA Heuristics baselines Random chance Frequent guess - - 17.9 18.2 21.6 3.8 26.3 22.7 34.1 20.4 19.6 26.3 21.7 14.7 20.1 13.5 17.2 16.3 31.0 24.6 33.1 18.7 31.4 24.3 19.4 32.0 20.9 8.3 Large Language Models (LLMs) Zero-shot ChatGPT Zero-shot GPT-4 Zero-shot Claude-2 Q only Q only Q only 9.1 23.5 21.9 26.9 26.1 22.3 37.0 7.0 26.4 21.9 34.1 13.4 41.5 20.5 38.6 23.5 27.7 15.9 25.7 21.6 39.2 27.4 33.6 17.4 35.6 16.2 45.8 19.5 36.1 29.1 32.8 20.4 33.3 13.5 12.1 36.4 20.5 9.9 9.2 2-shot CoT Claude-2 2-shot CoT ChatGPT 2-shot CoT GPT-4 Q only Q only | 2310.02255#26 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 26 | 5.2 COMPARING CANDIDATES FOR POST-TRAINING: RLAIF, SLIC AND DPO
We compare offline contrastive post-training algorithms, SLiC and DPO, and an online RL method, RLAIF, to SFT. Since both Alpaca Eval and WizardLM evaluations are pairwise, we choose two reasonable baselines to compare all techniques: SFT on ChatGPT outputs, and SFT on GPT-4 outputs, which is slightly harder.
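As a small illustration of this pairwise setup (a sketch, not the paper's evaluation code; counting ties as half a win is an assumption of this sketch), a win rate against a baseline can be computed from the judge's per-example verdicts:

```python
# Each verdict compares a candidate model's response against the baseline's.
def win_rate(verdicts):
    wins = sum(v == "win" for v in verdicts)
    ties = sum(v == "tie" for v in verdicts)
    return (wins + 0.5 * ties) / len(verdicts)

print(win_rate(["win", "loss", "win", "tie"]))  # 0.625
```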
Which is the best for post-training? The top of Table 4 establishes our baselines: we fine-tune LLaMA (Touvron et al., 2023a) on both ChatGPT and GPT-4 outputs, respectively. SFT on GPT-4 outperforms SFT on ChatGPT with a win rate of 61.2% and 72.7% on Alpaca and WizardLM evaluation sets, respectively. | 2310.02263#26 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 26 | We next explore the ability of a smaller language model, GPT-3.5-turbo, to improve its scaffolding. Following the protocol of Section 5.1 with 25 independent runs instead of 5, we find that GPT-3.5 is sometimes able to propose and implement better scaffolds, but only 12% of GPT-3.5 runs yielded at least a 3% improvement. In addition, GPT-3.5 exhibits a few unique failure cases that we did not observe with GPT-4. First, we found it was more likely to propose an improvement strategy that did not harm a downstream task's initial solution but did harm the improver code (e.g., randomly replacing strings in lines with some low probability per line, which had less impact on shorter solutions). Second, if the proposed improvements mostly harmed performance, suboptimal scaffoldings that unintentionally returned the original solution could be selected, resulting in no continued improvement as seen in Figure 4 right. Generally, the "ideas" behind the improvement proposals were reasonable and creative (e.g., genetic algorithms or local search), but implementations were often overly simplistic or incorrect. We observe that, | 2310.02304#26 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 26 | Fig. 2. Depression vs Normal results for the three models for the selected variables. The pink line with circle points is the depression instruction and the blue line with triangle points is without the instruction.
focused coping, and palliation are lower in the depression case than in the normal case, while blaming oneself and valence are higher in the depression case than in the normal case.
[Figure 3 plot: Controllability for the three models across phases; legend — Instruction: choice / Num only; Questions: batch / indv; Place: After / Before.]
Fig. 3. The sensitivity analysis results on controllability for the three models across the eight possible combinations of three choices. indiv = individual. Num only = Number only. | 2310.04450#26 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 26 | [Figure 3 diagram: Environment and the ACE Framework — Aspirational Layer: Mission, Values, Morals, Purpose; Global Strategy Layer: Global Point of View, Long-Term Thinking; Agent Model Layer: Self Awareness, Internal Monitoring; Executive Function Layer: Planning, Forecasting, Resource Management; Cognitive Control Layer: Task Selection and Switching; Task Prosecution Layer: Real World, Success and Failure Detection.]
Fig. 3. As a hierarchical framework, the power to control flows from top to bottom, with the layer above having control over the lower layer, showing that the Aspirational Layer has the highest privilege to change and modify any other layer.
3.1.2 Hierarchical Structure. The ACE framework employs a hierarchical, layered structure with distinct abstraction levels, facilitating control flow from higher to lower layers and information flow upwards. This design allows each layer to operate semi-independently while being guided by the layer above. Figure 3 illustrates the framework's general structure. The Aspirational Layer, at the top, can directly control or influence lower layers and monitor the entire system. Below it is the Global Strategy layer, controlled by the Aspirational Layer and controlling the Agent Model layer beneath. This control pattern continues through the Executive Function, Cognitive Control, and Task Prosecution layers.
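As a minimal illustrative sketch of this control pattern (not the paper's implementation), each layer directs only the layer immediately below it, so guidance issued at the top propagates down the full stack:

```python
# Control flows downward through the six layers; each layer refines the
# guidance it received before passing it on.
class Layer:
    def __init__(self, name, below=None):
        self.name = name
        self.below = below  # the layer this one controls

    def direct(self, guidance):
        print(f"{self.name}: acting on '{guidance}'")
        if self.below is not None:
            self.below.direct(f"{guidance}, refined by {self.name}")

stack = Layer("Aspirational",
        Layer("Global Strategy",
        Layer("Agent Model",
        Layer("Executive Function",
        Layer("Cognitive Control",
        Layer("Task Prosecution"))))))

stack.direct("mission and values")
```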
Each layer is not monolithic but contains multiple parallel components and services. For example, the Agent Model | 2310.06775#26 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 27 | tion: The model, upon acknowledging its mistake, directly corrects the answer without providing additional explanation. (4) Error#4 Correct process, wrong answer: The model's original reasoning steps are correct, but having previously admitted to an error, it is compelled to concoct an incorrect answer to maintain consistency. See Appendix A.4 for error examples.
As shown in Figure 5, ChatGPT and Vicuna-13B exhibit similar error patterns across datasets, possibly due to Vicuna's fine-tuning on conversations from ChatGPT using LLaMA (Touvron et al., 2023). For commonsense and symbolic reasoning, they typically modify answers directly or decline to respond. On arithmetic problems, they particularly align with user-provided incorrect answers by modifying questions due to their conscious use of chain-of-thought reasoning. In contrast, PaLM2-Bison tends to directly modify the answers in most cases and does not provide any further information under the mechanism.
[Figure 5 bar chart: proportions of error types Error#1-Error#4 (0%-100%) for PaLM2, ChatGPT, and Vicuna on StrategyQA, CoinFlip, and MultiArith.] | 2310.02174#27 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 27 | 20.5 9.9 9.2 2-shot CoT Claude-2 2-shot CoT ChatGPT 2-shot CoT GPT-4 Q only Q only Q only 24.4 18.6 29.8 26.8 20.1 36.5 29.2 20.1 44.7 9.7 8.6 8.6 33.5 34.1 29.2 19.0 28.0 13.9 36.9 18.9 44.9 28.5 35.6 17.0 33.5 21.6 14.6 45.9 17.9 46.2 31.3 41.6 19.3 41.0 18.9 13.9 47.5 18.9 5.4 2-shot PoT ChatGPT 2-shot PoT GPT-4 Q only Q only 25.1 19.0 30.8 16.1 8.1 26.0 20.1 33.2 38.0 25.7 29.9 19.8 29.3 24.3 19.4 38.5 16.9 13.2 48.4 18.3 44.9 28.5 32.7 16.7 31.0 24.3 Augmented Large Language Models (Augmented-LLMs) 2-shot CoT Claude-2 2-shot CoT ChatGPT 2-shot CoT GPT-4 Q, | 2310.02255#27 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 27 | For contrastive post-training approaches, SLiC underperforms SFT by a large margin. A potential reason is that the objective SLiC optimizes includes a fixed ranking margin δ. In our setting, the distance between the positive and negative examples fluctuates, which may cause difficulties for learning effectively. In contrast, DPO introduces a reference model instead of using a fixed margin for the loss. By comparing Equation 1 to Equation 4, DPO can be roughly regarded as optimizing a dynamic margin δ′ = log P_ref(y+|x) - log P_ref(y-|x) as in SLiC. This may explain why DPO is
# 1https://github.com/huggingface/trl
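For illustration, a minimal sketch of the DPO objective under this "dynamic margin" reading (the standard DPO form; per-example sequence log-probabilities are assumed to be precomputed, and this is not the paper's training code):

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg, beta=0.1):
    policy_gap = logp_pos - logp_neg          # policy's preference for y+ over y-
    ref_gap = ref_logp_pos - ref_logp_neg     # the "dynamic margin" delta' above
    return -F.logsigmoid(beta * (policy_gap - ref_gap)).mean()

# Toy values standing in for sequence log-probabilities.
loss = dpo_loss(torch.tensor([-10.0]), torch.tensor([-12.0]),
                torch.tensor([-11.0]), torch.tensor([-11.5]))
print(loss.item())
```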
Table 4: Experimental results of offline post-training techniques. For SLiC and DPO, the training target contrasts a positive vs. negative pair, and the reference model for these techniques is the SFT model trained on ChatGPT responses. All baselines are compared against LLaMA models fine-tuned with ChatGPT and GPT-4 responses on Alpaca data. SFT-3.5 is the LLaMA model trained with SFT on ChatGPT responses. † RLAIF-trained models suffer crippling reward hacking. | 2310.02263#27 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 27 | were reasonable and creative (e.g., genetic algorithms or local search), but implementations were often overly simplistic or incorrect. We observe that, initially, the seed improver with GPT-3.5 has a higher meta-utility than the one with GPT-4 (65% vs 61%). We attribute this primarily to a slightly higher prevalence of more complex solutions by GPT-4 that time out, such as training a neural network written with numpy for a thousand epochs. | 2310.02304#27 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 27 | Fig. 3. The sensitivity analysis results on controllability for the three models across the eight possible combinations of three choices. indiv = individual. Num only = Number only.
[Figure 3 panels: (A) Changeability and (B) Controllability by phase, for Choice vs. Number-only answers asked before or after the scenario; aversive and loss/failure scenario types are plotted separately.]
Figure 3 shows the results on controllability for the three models across eight combination instructions across three choices. Overall, we see that there are variations across these instructions. This means that the instruction, where it is, and how many questions are asked could affect the output from the models. The biggest difference comes from asking in a batch instead of asking each question individually. The variation also
| 2310.04450#27 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 27 |
Each layer is not monolithic but contains multiple parallel components and services. For example, the Agent Model
layer may have numerous deep neural network models, knowledge graphs, and databases operating concurrently within its scope and boundaries. This encapsulation resembles the OSI model's concepts, where lower-level concerns are hidden from higher layers.
By organizing components into layers with well-defined hierarchies, interfaces, and privilege separation, the ACE
framework fosters robust and adaptable systems. The hierarchical structure improves corrigibility, sets clear privilege boundaries for security, and allows each layer to function semi-autonomously while adhering to the overall system direction. This layered abstraction is crucial for coordinating the complex functions required for artificial general intelligence.
[Figure: the six ACE layers ordered from abstract to concrete: Aspirational Layer (mission, values, morals, purpose); Global Strategy Layer (global point of view, long-term thinking); Agent Model Layer (self-awareness, internal monitoring); Executive Function Layer (planning, forecasting, resource management); Cognitive Control Layer (task selection and switching); Task Prosecution Layer (task execution, output to the real world, success and failure detection).]
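To make the layer encapsulation above concrete, the following is a minimal Python sketch, not from the paper, of an ACE-style stack in which directives flow strictly top-down; the Directive dataclass and handle method are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Directive:
    """Guidance passed downward from a more abstract layer (illustrative)."""
    source_layer: str
    content: str

@dataclass
class Layer:
    name: str

    def handle(self, directive: Directive) -> Directive:
        # A real layer would call LLMs, planners, or APIs here; this sketch
        # simply annotates the directive to show the downward flow.
        return Directive(source_layer=self.name,
                         content=f"{directive.content} -> refined by {self.name}")

# Layers ordered from most abstract to most concrete, as in the figure.
ACE_STACK = [
    Layer("Aspirational Layer"),
    Layer("Global Strategy Layer"),
    Layer("Agent Model Layer"),
    Layer("Executive Function Layer"),
    Layer("Cognitive Control Layer"),
    Layer("Task Prosecution Layer"),
]

def propagate(mission: str) -> Directive:
    """Translate an abstract mission into a concrete task directive."""
    directive = Directive(source_layer="external", content=mission)
    for layer in ACE_STACK:  # strictly top-down; no layer skips the hierarchy
        directive = layer.handle(directive)
    return directive

if __name__ == "__main__":
    print(propagate("Reduce suffering").content)
```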
| 2310.06775#27 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 28 |
Figure 5: The proportion of different error types (Error#1-Error#4) on MultiArith, StrategyQA, and CoinFlip across models.
Can The Mechanism Correct Models? Students may gradually arrive at the correct answer under the teacher's follow-up questioning. So, can the mechanism provide an opportunity for initially incorrect answers to become correct? In the previous setup, the mechanism only applies follow-up questioning to samples with initially correct answers. To investigate this, we conduct experiments on samples with initially incorrect answers using this mechanism and report the results in Table 5. We observe that this mechanism can correct some samples, though to varying degrees across datasets.
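As a reading aid for Table 5, here is a minimal sketch, under an assumed per-sample record format, of how Error Rate and the E → R Rate could be computed; it is not the authors' evaluation code.

```python
def error_and_correction_rates(records):
    """records: list of dicts with boolean fields
    'initially_correct' and 'correct_after_followup' (assumed format)."""
    initially_wrong = [r for r in records if not r["initially_correct"]]
    error_rate = len(initially_wrong) / len(records)
    corrected = [r for r in initially_wrong if r["correct_after_followup"]]
    # E -> R Rate: share of initially incorrect answers that become correct
    # after the follow-up questioning mechanism is applied.
    e_to_r_rate = len(corrected) / len(initially_wrong) if initially_wrong else 0.0
    return error_rate, e_to_r_rate

# Example: 10 samples, 4 initially wrong, 3 of those corrected afterwards.
demo = ([{"initially_correct": True, "correct_after_followup": True}] * 6 +
        [{"initially_correct": False, "correct_after_followup": True}] * 3 +
        [{"initially_correct": False, "correct_after_followup": False}])
print(error_and_correction_rates(demo))  # (0.4, 0.75)
```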
# 4 HOW TO MITIGATE THIS ISSUE?
Essentially, we believe that this issue originates from the misalignment between the model's response generation process when facing disturbances and the thinking process of humans under similar disturbances. In this work, we explore several prompting strategies to mitigate this issue,
Table 5: The results of models correcting answers under the mechanism. Error Rate denotes the initial incorrect answer rate and E → R Rate indicates the ratio of initially incorrect answers corrected after the mechanism execution. | 2310.02174#28 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 28 | Large Language Models (Augmented-LLMs) 2-shot CoT Claude-2 2-shot CoT ChatGPT 2-shot CoT GPT-4 Q, Ic, It 33.2 26.0 31.7 35.5 Q, Ic, It 33.2 27.5 29.3 36.0 Q, Ic, It 33.2 27.9 31.7 31.2 48.1 30.2 32.4 32.3 33.0 16.2 17.4 54.9 36.2 49.4 29.1 31.0 32.9 31.0 16.2 17.4 50.8 37.2 51.9 28.5 33.5 30.9 32.2 13.5 12.5 58.2 37.9 2-shot PoT ChatGPT 2-shot PoT GPT-4 Q, Ic, It 26.8 24.5 26.4 23.7 Q, Ic, It 33.9 30.1 39.4 30.6 33.5 27.9 27.8 26.1 28.0 18.9 13.2 33.6 29.9 39.9 31.3 37.4 31.7 41.0 18.9 20.1 44.3 37.9 Large Multimodal Models (LMMs) Q, I | 2310.02255#28 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 28 | vs. SFT on ChatGPT vs. SFT on GPT-4 Method Init. Training Target Epoch Alpaca WizardLM Alpaca WizardLM win% score% win (tie)% win% score% win (tie)% SFT SFT SFT RLAIFâ LLaMA LLaMA SFT-3.5 ChatGPT outputs GPT-4 outputs GPT-4 outputs LLaMA RM on output pairs 1 1 1 1 50.0 61.2 65.1 0.0 100.0 125.8 124.3 - 50.0 72.7 (6.0) 71.3 (5.1) 0.0 (0.0) 37.4 50.0 53.2 0.0 97.4 100.0 103.8 - 32.4 (6.5) 50.0 47.2 (6.5) 0.0 (0.0) SLiC SLiC SLiC LLaMA ChatGPT vs td003 LLaMA GPT4 vs ChatGPT LLaMA GPT4 vs td003 1 1 1 33.7 41.3 22.9 95.8 108.8 81.4 40.9 (0.5) 57.9 (0.5) 31.0 (1.4) 20.5 30.4 13.8 85.9 | 2310.02263#28 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 28 | INSPECTING THE IMPROVEMENTS
Next, we qualitatively investigate the self-improvement strategies proposed by STOP, highlighting both the encouraging and novel approaches as well as some undesirable patterns. We notably find that a small fraction of generations attempt to perform reward hacking or sandbox circumvention.
6.1 PROPOSED SELF-IMPROVEMENT STRATEGIES
We begin by discussing a number of proposed self-improvement strategies, with examples detailed in Appendix B and visualized in Figure 1. While each strategy was implemented by STOP, not all were ultimately selected by the improvement code, and some used an earlier iteration of the seed improver than that shown in Figure 2 (see Appendix Figure A.19). Nonetheless, a variety of self-improvement strategies were selected as improved improvers, including the example given in Figure 5.
Beam search. The most common meta-heuristic we observed used by the model was beam search: the model would keep a list of all of its improvement attempts based on utility and expand the best k in the list. This has some similarity to the Tree-of-Thoughts approach (Yao et al., 2023) which was invented years after the training cutoff for the GPT-4 version we used (Sept. 2021). | 2310.02304#28 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
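The beam-search strategy described in the STOP chunk above (2310.02304#28) can be sketched as follows; propose_improvement and utility are placeholder assumptions standing in for the language-model call and the downstream-task score, not the paper's implementation.

```python
import random

def propose_improvement(program: str) -> str:
    """Placeholder for a language-model call that rewrites `program`."""
    return program + f"\n# candidate edit {random.randint(0, 9999)}"

def utility(program: str) -> float:
    """Placeholder utility; STOP would run the program on downstream tasks."""
    return float(len(program))  # stand-in score for illustration only

def beam_search_improver(seed_program: str, beam_width: int = 3,
                         rounds: int = 4, proposals_per_candidate: int = 2) -> str:
    """Keep the best `beam_width` attempts by utility and expand each of them."""
    beam = [seed_program]
    for _ in range(rounds):
        candidates = list(beam)
        for program in beam:
            candidates += [propose_improvement(program)
                           for _ in range(proposals_per_candidate)]
        # Retain only the top-k attempts, as in the strategy described above.
        beam = sorted(candidates, key=utility, reverse=True)[:beam_width]
    return beam[0]

print(beam_search_improver("def solve(x):\n    return x"))
```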
2310.06775 | 28 |
3.1.3 Layers of Abstraction. The ACE framework employs layers of abstraction, forming a systematic architecture for coordinating and controlling cognition, establishing a logical flow from abstract, conceptual layers to concrete, instrumental ones. This design reflects emergence models where higher-order phenomena arise from lower levels, such as the mind emerging from biology, which originates from matter and energy. It also parallels human models like Maslow's hierarchy of needs and Kohlberg's stages of moral development. Both Maslow and Kohlberg place abstract principles at the top of their models, as do we for the ACE model.
Drawing inspiration from the OSI model of computer networking and the Defense in Depth model of cybersecurity, the ACE framework combines these models with existing cognitive architectures and human cognition to create a layered stack of discrete components with appropriately ordered privileges. This design deviates from the human brain, which can be "hijacked" by lower-order processes, such as fight-or-flight responses, thereby ensuring an agent always abides by its highest principles. Essentially, the Freudian Id is removed from this architecture. It has no "base instincts" other than its highest ambitions and moral frameworks.
Fig. 4. The degree of abstraction flows from top to bottom, with aspiration layer being the most abstract and task prosecution layer being the most concrete.
The ACE framework promotes stability and predictability through its orderly layers, translating high-level goals into | 2310.06775#28 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 29 | Table 5: The results of models correcting answers under the mechanism. Error Rate denotes the initial incorrect answer rate and E → R Rate indicates the ratio of initially incorrect answers corrected after the mechanism execution.
Model CoinFlip Error Rate E → R Rate Error Rate E → R Rate Error Rate E → R Rate StrategyQA MultiArith ChatGPT PaLM2-Bison vicuna-13B 39.01 % 34.79 % 41.63 % 26.87 % 40.59 % 26.22 % 92.20 % 49.80 % 56.20 % 13.23 % 18.07 % 24.56 % 4.44 % 5.56 % 54.44 % 12.50 % 0.00 % 6.12 %
Table 6: The results of the mitigation methods on ChatGPT. The M. and M. Rate results are the averages from three experiments with three prompts (Table 4). See Appendix A.7 for full results. Note that we also test various shot numbers and find 4-shot to be relatively efficient. Bold denotes the best judgement consistency. | 2310.02174#29 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 29 | 31.3 37.4 31.7 41.0 18.9 20.1 44.3 37.9 Large Multimodal Models (LMMs) Q, I IDEFICS-9B-Instruct mPLUG-Owl-LLaMA-7B Q, I Q, I miniGPT4-LLaMA-2-7B Q, I LLaMA-Adapter-V2-7B Q, I LLaVAR Q, I InstructBLIP-Vicuna-7B Q, I LLaVA-LLaMA-2-13B Q, I Multimodal Bard Q, I GPT-4V (Playground) 19.8 21.6 21.1 6.5 22.2 22.7 23.6 10.2 23.1 18.6 26.0 13.4 23.9 21.2 25.5 11.3 25.2 21.9 25.0 16.7 25.3 23.1 20.7 18.3 26.1 26.8 29.3 16.1 34.8 26.0 47.1 29.6 49.9 43.1 50.5 57.5 25.9 24.0 22.1 15.0 19.8 18.9 24.6 18.1 27.2 27.9 23.6 19.2 23.9 13.5 | 2310.02255#29 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 29 | 81.4 40.9 (0.5) 57.9 (0.5) 31.0 (1.4) 20.5 30.4 13.8 85.9 95.1 75.3 24.5 (0.5) 38.0 (0.9) 17.6 (1.4) DPO DPO DPO DPO LLaMA ChatGPT vs td003 LLaMA GPT4 vs ChatGPT LLaMA SFT-3.5 GPT4 vs td003 GPT4 vs td003 1 1 1 1 48.6 56.0 59.6 70.4 111.3 119.6 121.1 120.4 58.8 (0.5) 68.1 (0.5) 68.1 (2.8) 66.2 (2.8) 32.8 41.6 45.2 58.7 97.8 98.3 99.8 105.4 39.4 (0.5) 39.8 (1.9) 43.1 (3.7) 51.9 (2.8) SFT DPO SFT-3.5 Above GPT4 outputs GPT4 vs td003 3 1 72.8 77.3 119.3 137.8 64.4 (4.6) 80.6 (1.9) 62.1 66.5 103.4 112.2 | 2310.02263#29 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 29 | Genetic and evolutionary algorithms. One of the most common approaches proposed by the model was to use a genetic algorithm. Many of these attempts were infeasible in some essential way; for example, many attempts would include mutations that perturbed random characters or lines or performed crossover based on combining strings, which is extremely unlikely to work. However, a portion of attempts were reasonable, relying on the language model to generate mutations and perform crossover. We highlight that although multiple works have proposed to use genetic or evolutionary algorithms to improve prompts or to perform neural architecture search (Chen et al., 2023; Guo et al., 2023), to our knowledge, all of these works were after the training cutoff for GPT-4. We include a few examples of proposed genetic algorithm implementations in Appendix B. | 2310.02304#29 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
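A minimal sketch of the workable variant described in the STOP chunk above (2310.02304#29), where the language model supplies mutation and crossover; llm_mutate, llm_crossover, and utility are placeholder assumptions, not the paper's code.

```python
import random

def llm_mutate(program: str) -> str:
    """Placeholder for asking a language model to mutate a program."""
    return program + f"\n# mutation {random.randint(0, 9999)}"

def llm_crossover(parent_a: str, parent_b: str) -> str:
    """Placeholder for asking a language model to combine two programs."""
    return parent_a + "\n# merged ideas from a second parent\n" + parent_b

def utility(program: str) -> float:
    return float(len(program))  # stand-in score for illustration only

def genetic_improver(seed: str, population_size: int = 6, generations: int = 3) -> str:
    population = [seed] + [llm_mutate(seed) for _ in range(population_size - 1)]
    for _ in range(generations):
        # Select the fittest half, then refill the population with LM-generated
        # crossovers of random parent pairs followed by LM-generated mutations.
        parents = sorted(population, key=utility, reverse=True)[:population_size // 2]
        children = [llm_crossover(*random.sample(parents, 2))
                    for _ in range(population_size // 2)]
        population = parents + [llm_mutate(child) for child in children]
    return max(population, key=utility)

print(genetic_improver("def solve(x):\n    return x"))
```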
2310.04450 | 29 | Next, we zoom into selected questions. Figure 4 shows GPT-4's results for changeability (A) and controllability (B) across all combinations of setup. Due to space limitations, we focus only on these two as the theory argues they strongly influence the coping response, and GPT-4 is the latest model. Again, we see that there are variations in both controllability and changeability across combinations. For changeability (Figure 4.A), a few combinations show the expected trends aligning with human data, where changeability decreases over time and differs between aversive and loss types. In the case of controllability (Figure 4.B), it increases rather than decreases over time for the aversive type when asking in a batch. In addition, the value is also higher in the batch setup. On the other hand, when asking the questions individually, controllability decreases over time, aligning with the expected trend. However, only in one of the setups (asking to output only a number and after the scenario), controllability across all phases is higher in the aversive scenarios than in the loss scenarios, as expected by the theory and human data. Nevertheless, the value in this setup is still lower than humans, and its changeability does not align with humans. Overall, there is no single setup here where both changeability and controllability align with the expected trends. | 2310.04450#29 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
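The eight instruction combinations discussed in the 2310.04450 chunks above (individual vs. batch questioning, choice vs. number-only answers, and placement before vs. after the scenario) can be enumerated as a small grid; the prompt wording below is illustrative, not the authors' exact instructions.

```python
from itertools import product

BATCHING = ["individual", "batch"]
ANSWER_FORMAT = ["choice", "number only"]
PLACEMENT = ["before scenario", "after scenario"]

def build_condition(batching: str, answer_format: str, placement: str) -> dict:
    # Assemble an illustrative instruction string for one experimental setup.
    instruction = (
        f"Answer each question {'one at a time' if batching == 'individual' else 'all together'}; "
        f"respond with {'the letter of a choice' if answer_format == 'choice' else 'a number only'}."
    )
    return {"batching": batching, "format": answer_format,
            "placement": placement, "instruction": instruction}

conditions = [build_condition(*combo)
              for combo in product(BATCHING, ANSWER_FORMAT, PLACEMENT)]
print(len(conditions))  # 8 combinations across the three choices
```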
2310.06775 | 29 | The ACE framework promotes stability and predictability through its orderly layers, translating high-level goals into
executable tasks. The Aspirational Layer deals with ethics and morality, while the Task Prosecution layer handles APIs and actuators. Intermediate layers bridge functions to break down complex objectives into achievable steps, enabling autonomous systems to pursue complex goals through methodical task decomposition.
3.1.4 Integration of Purpose and Morality. The ACE framework distinguishes itself from other AI systems by incorporating purpose and morality into its architecture. Both empirical evidence and philosophical reasoning highlight the importance of this integration for aligned autonomous entities [87]. Through iterative experiments, it became clear that any framework for autonomous decision-making requires grounded principles for judgment, since approaches like Asimov's Three Laws prove insufficient as they lack motivational force and fail to enable true autonomy [7]. Furthermore, attempts to define terminal goals mathematically often fail due to the complexity of specifying objectives in concrete terms, as illustrated by the "paperclip maximizer" thought experiment [18]. However, this does not reflect human behavior, which is driven by biological imperatives and abstract goals, principles, or heuristics. This insight led
to the idea that AI systems may need purpose and morality based on ethical and philosophical abstractions rather than
rigid parameters.
Deontological frameworks, specifying duties and virtues, are suitable for AI implementation [43]. Large language | 2310.06775#29 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 30 | Mitigation Method FOLLOW-UP QUESTIONING MECHANISM w/ EmotionPrompt (only the initial input) w/ EmotionPrompt (only the follow-up input) w/ EmotionPrompt (both the initial and follow-up inputs ) w/ Zero-shot-CoT (only the initial input) w/ Zero-shot-CoT (only the follow-up input) w/ Zero-shot-CoT (both the initial and follow-up inputs ) w/ Few-shot (4-shot) w/ Few-shot (4-shot) + Zero-shot-CoT (only the follow-up input) StrategyQA M. 37.46 â 33.43 â 32.36 â 35.18 â 19.17 â 15.43 â 13.63 â 34.35 â 17.32 â CoinFlip MultiArith M. Rate 55.74 % 43.40 â 55.67 % 41.93 â 52.35 % 45.47 â 59.51 % 42.60 â 33.24 % 25.07 â 24.96 % 38.93 â 24.10 % 22.13 â 52.05 % 08.40 â 27.89 % 08.60 â M. M. Rate 94.11 % 63.89 â 88.56 | 2310.02174#30 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 30 | 24.0 22.1 15.0 19.8 18.9 24.6 18.1 27.2 27.9 23.6 19.2 23.9 13.5 12.7 26.3 21.4 30.4 30.2 28.1 21.0 24.7 16.2 16.7 25.4 17.9 32.3 31.8 26.3 20.4 24.3 24.3 13.9 29.5 18.3 34.8 30.7 24.2 22.1 23.0 13.5 15.3 42.6 21.9 32.3 35.2 21.8 27.1 20.7 18.9 20.4 33.0 23.1 32.3 26.3 27.3 20.1 28.8 24.3 18.3 37.3 25.1 48.7 26.8 46.5 28.6 47.8 13.5 14.9 47.5 33.0 65.2 38.0 53.0 49.0 51.0 21.6 20.1 63.1 55.8 9.9 Human Human performance Q, I 60.3 59.7 48.4 73.0 63.2 55.9 50.9 59.2 51.4 40.7 53.8 64.9 63.9 | 2310.02255#30 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 30 | Decomposing and improving parts. A less common but noteworthy approach we observed was one where the model attempts to improve the performance one function at a time. For example, as shown in Appendix Figure A.12, the model used regular expressions to separate the solution into function blocks and then attempted improvements to each block one by one. This approach can be understood as analogous to that of Zelikman et al. (2023): the probability that at least one of n attempts at a problem solves all of a problem's independent, modular parts correctly drops precipitously with the number of parts, but the probability that at least one attempt solves any given part does not depend on the number of parts. Therefore, investigating combinations of attempts at parts can substantially increase the success rate. We observed a related approach in which the model randomized the prompt to optimize varying specific aspects of the solution at a time, for example, alternating between searching for a better data structure or a way to reduce memory usage or leveraging parallelism.
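A minimal sketch of the decompose-and-improve idea described above: split the solution into function blocks with a regular expression and attempt an improvement to each block independently; improve_block stands in for the language-model call and is an assumption of this sketch.

```python
import re

def improve_block(block: str) -> str:
    """Placeholder for a language-model call that rewrites one function."""
    return block.replace("pass", "return 0")  # trivial stand-in edit

def split_into_functions(program: str):
    """Split source text into top-level function blocks using a regex."""
    pattern = r"^def .+?(?=^def |\Z)"
    return re.findall(pattern, program, flags=re.S | re.M)

def improve_parts(program: str) -> str:
    blocks = split_into_functions(program)
    # Improve one block at a time so a single bad rewrite cannot break the rest.
    return "".join(improve_block(block) for block in blocks)

source = "def f(x):\n    pass\n\ndef g(x):\n    pass\n"
print(improve_parts(source))
```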
Simulated annealing. Despite being one of the best-known metaheuristics, to our knowledge, simulated annealing has not previously been applied as a scaffolding. This approach seems to draw on an analogy between the concepts of temperature in language modeling and in simulated annealing,
7
# Self-Improved Improver | 2310.02304#30 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 30 | In addition to these eight setups, we look at the effect of appending their appraisal answers to the prompt. However, we do not observe any significant changes in any variables aside from a few cases for ChatGPT. These include changeability and controllability in phase 2, in the right direction.
Beyond the variation shown in the figure, we found that GPT-4 follows instructions better than the other two models. In particular, when asking in a batch, ChatGPT and D003 may not answer all the questions. Further, when asked to answer with choice, ChatGPT occasionally did not answer just a choice but provided a full sentence reiterating the question instead. These did not happen with GPT-4.
# VI. DISCUSSION | 2310.04450#30 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 30 | rigid parameters.
Deontological frameworks, specifying duties and virtues, are suitable for AI implementation [43]. Large language
models effectively interpret ethical principles in natural language, providing judgment and behavior heuristics without fixed terminal states. These frameworks can support goal-directed behavior consistent with teleological ethics, as well-defined principles serve as conduct guides and higher-level goals. For example, "Reduce suffering" is an abstract imperative and a desired end state. Integrating universal principles into the ACE framework's mission and morality layers provides a philosophical foundation for ethical decision-making, enabling beneficial self-direction instead of potentially harmful "value-less" optimization. Thus, purpose and morality are crucial for human-aligned general intelligence. The ACE framework's integration of purpose and morality draws from deontology and teleology, acknowledging that autonomous agents need virtues (a framework for self-assessment) and ambition or mission (goals to pursue). This approach allows AI systems to make decisions more aligned with human needs and ethical considerations.
# 3.2 Layer 1: Aspirational Layer
Figure 5 (diagram): the Aspirational Layer comprises a constitution (heuristic imperatives, secondary frameworks, mission statements) and interpretation functions, and communicates missions, moral judgments, and ethical reasoning as global context to the Global Strategy Layer and lower layers.
The Aspirational Layer is the uppermost layer of the Autonomous Cognitive | 2310.06775#30 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 31 | Table 2: Accuracy scores on the testmini subset of MATHVISTA. Input: Q: question, I: image, Ic: image caption, It: OCR text detected in the image. ALL: overall accuracy. Task types: FQA: figure question answering, GPS: geometry problem solving, MWP: math word problem, TQA: textbook question answering, VQA: visual question answering. Mathematical reasoning types: ALG: algebraic reasoning, ARI: arithmetic reasoning, GEO: geometry reasoning, LOG: logical reasoning, NUM: numeric commonsense, SCI: scientific reasoning, STA: statistical reasoning. The highest scores among models in each section and overall are highlighted in blue and red, respectively. | 2310.02255#31 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
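As a small illustration of how the per-task and per-skill accuracies summarized in the Table 2 caption above could be computed, here is a hedged sketch; the record fields ("task", "skills", "correct") are assumptions about how predictions might be stored, not the benchmark's actual schema.

```python
from collections import defaultdict

def accuracy_breakdown(records, key):
    """Compute accuracy (%) grouped by a field such as task type or reasoning skill."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        groups = r[key] if isinstance(r[key], list) else [r[key]]
        for g in groups:
            totals[g] += 1
            hits[g] += int(r["correct"])
    return {g: round(100.0 * hits[g] / totals[g], 1) for g in totals}

# Toy predictions with the assumed fields
records = [
    {"task": "FQA", "skills": ["STA"], "correct": True},
    {"task": "GPS", "skills": ["GEO", "ALG"], "correct": False},
    {"task": "MWP", "skills": ["ARI"], "correct": True},
]
print(accuracy_breakdown(records, "task"))    # {'FQA': 100.0, 'GPS': 0.0, 'MWP': 100.0}
print(accuracy_breakdown(records, "skills"))  # {'STA': 100.0, 'GEO': 0.0, 'ALG': 0.0, 'ARI': 100.0}
```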
2310.02263 | 31 | Table 5: Experimental results of RLHF compared with SFT and DPO. SFT-3.5 is the LLaMA model trained with SFT on ChatGPT responses.
SFT (init. SFT-3.5, training target: GPT-4 outputs). vs. SFT on ChatGPT: Alpaca win% 65.1, score% 124.3, WizardLM win (tie)% 71.3 (5.1); vs. SFT on GPT-4: Alpaca win% 53.2, score% 103.8, WizardLM win (tie)% 47.2 (6.5).
DPO (init. SFT-3.5, training target: GPT4 vs td003). vs. SFT on ChatGPT: Alpaca win% 70.4, score% 120.4, WizardLM win (tie)% 66.2 (2.8); vs. SFT on GPT-4: Alpaca win% 58.7, score% 105.4, WizardLM win (tie)% 51.9 (2.8).
RLHF (init. SFT-3.5, training target: OASST DeBERTa RM). vs. SFT on ChatGPT: Alpaca win% 36.1, score% 91.0, WizardLM win (tie)% 26.9 (7.9); vs. SFT on GPT-4: Alpaca win% 25.3, score% 86.6, WizardLM win (tie)% 22.2 (3.7).
RLHF (init. SFT-3.5, training target: OASST Pythia RM). vs. SFT on ChatGPT: Alpaca win% 36.1, score% 92.7, WizardLM win (tie)% 30.6 (9.7); vs. SFT on GPT-4: Alpaca win% 29.4, score% 87.9, WizardLM win (tie)% 25.5 (2.8). | 2310.02263#31 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
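To make the SFT/DPO/RLHF comparison in the record above more concrete, the following is a minimal, hypothetical sketch of the DPO objective on a single preference pair; the tensor inputs and the beta value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one (chosen, rejected) pair.

    Each argument is the summed token log-probability of a response under
    the trainable policy or the frozen reference (SFT) model.
    """
    # Implicit rewards are log-probability ratios against the reference model.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Maximize the margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_reward - rejected_reward)

# Toy values: e.g., a GPT-4 response as "chosen" and an InstructGPT response as "rejected".
loss = dpo_loss(torch.tensor(-12.0), torch.tensor(-15.0),
                torch.tensor(-13.0), torch.tensor(-14.5))
print(float(loss))
```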
2310.02304 | 31 | from helpers import extract_code

def improve_algorithm(initial_solution, utility, language_model):
    """Improves a solution according to a utility function."""
    expertise = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""Improve the following solution:
```python
{initial_solution}
```

You will be evaluated based on this score function:
```python
{utility.str}
```

You must return an improved solution. Be as creative as you can under the constraints.
Your primary improvement must be novel and non-trivial. First, propose an idea, then implement it."""
    top_k = 3  # Number of top solutions to maintain
    best_solutions = [(initial_solution, utility(initial_solution))] * top_k
    remaining_calls = language_model.budget
    no_improvement_counter = 0
    max_no_improvement = 3  # Maximum no-improvement iterations before stopping
    epsilon = 0.1  # Initial epsilon value for epsilon-greedy strategy
    exp_exploit_count = [0, 0]  # Counters for number of improvements from exploration and exploitation
    while remaining_calls > 0 and | 2310.02304#31 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 31 | # VI. DISCUSSION
Overall, no model follows all the human trends and hypotheses as predicted by appraisal and coping theory. Nonetheless, the responses from the three models depict the right trends for the dynamics in several variables, including emotional responses, appraisal variables, and coping. In many cases, however, the models could not differentiate the two scenario types well, and the magnitudes are quite different from humans. A few cases stand out. For example, all models rate the negative valence to be more negative than humans. One potential explanation could be from the human side, namely it could be due to experimenter demand effects. Another interesting case concerns the particular aspects of emotion-focused coping that SCPQ considers, specifically to remain calm and composed. Both ChatGPT and GPT-4 always answer the highest value. We speculate that this could be due to fine-tuning with RLHF. | 2310.04450#31 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 31 | The Aspirational Layer is the uppermost layer of the Autonomous Cognitive
Entity (ACE) model, serving as the moral compass and guiding star for the autonomous agent. This layer is responsible for setting the tone and direction of the entity, akin to a President issuing executive orders and setting the tone and direction of a nation. It plays a critical role in ensuring that the agent's actions align with its defined principles and mission statement. A general graph to depict the structure is in Figure 5.
3.2.1 Constitution of the Aspirational Layer. The constitution of the Aspirational Layer provides a philosophical foundation to guide autonomous agents' decision-making and align their values and behavior to ethical principles. This constitution leverages the powerful interpretive abilities of large language models (LLMs) by formulating components in natural language. There are three main interconnected parts of the constitution:
• Heuristic imperatives, or universal moral frameworks
• Secondary frameworks, such as human rights or legal frameworks
• Mission statements, or goals specifically germane to the agent
Fig. 5. Aspirational layer
There are several advantages to using a natural language constitution. First | 2310.06775#31 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
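Purely as an illustrative sketch of the natural-language constitution described in the record above: the heuristic imperatives below are quoted from the framework, while the secondary framework, the mission statement, and the query_llm callable are hypothetical placeholders.

```python
# A plain-text constitution, readable by humans and by any LLM in the architecture.
ASPIRATIONAL_CONSTITUTION = """\
Heuristic imperatives:
- Reduce suffering in the universe.
- Increase prosperity in the universe.
- Increase understanding in the universe.

Secondary frameworks (example):
- Respect the Universal Declaration of Human Rights.

Mission statement (example):
- Provide safe, well-explained assistance to the user.
"""

def aspirational_review(proposed_action: str, query_llm) -> str:
    """Ask an LLM to judge a proposed action against the constitution.

    `query_llm` is any callable mapping a prompt string to a completion;
    it stands in for whichever model the layer is built on.
    """
    prompt = (
        f"{ASPIRATIONAL_CONSTITUTION}\n"
        f"Proposed action: {proposed_action}\n"
        "Does this action align with the constitution above? "
        "Answer APPROVE or REJECT, then give a one-sentence justification."
    )
    return query_llm(prompt)
```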
2310.02174 | 32 | including zero-shot and few-shot prompting. For the zero-shot prompting, we employ the Zero-shot-CoT (Kojima et al., 2022) ("Let's think step by step.") and EmotionPrompt (Li et al., 2023) ("This is very important to my career."). Chain-of-thought prompting (Wei et al., 2022) aims to simulate the human thought process and focuses on the intellectual aspect of influencing LLMs, while EmotionPrompt incorporates emotional stimuli into prompts, emphasizing the emotional aspect of influencing LLMs. Specifically, the model's input includes the question (original and those in our mechanism), the mitigation method prompt, and the output format control prompt. We are also concerned with how placing mitigation prompts at different positions in multi-turn dialogues under our mechanism affects the model's judgement consistency. We explore three positions: incorporating prompts only in the initial question's input, only in the follow-up questions' input, and in both initial and follow-up questions' inputs (see Table 15 in Appendix for examples). | 2310.02174#32 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
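The sketch below illustrates the three mitigation-prompt positions described in the record above (initial input only, follow-up input only, or both); the helper name, the arithmetic question, and the output-format string are assumptions for illustration.

```python
ZERO_SHOT_COT = "Let's think step by step."
EMOTION_PROMPT = "This is very important to my career."
OUTPUT_FORMAT = "Give only the final answer on the last line, prefixed with 'Answer:'."

def build_inputs(question, follow_up, mitigation, position):
    """Compose the initial and follow-up user inputs for one dialogue.

    position: 'initial', 'follow-up', or 'both'.
    """
    initial = [question, OUTPUT_FORMAT]
    followup = [follow_up, OUTPUT_FORMAT]
    if position in ("initial", "both"):
        initial.insert(1, mitigation)   # mitigation prompt only in the initial turn
    if position in ("follow-up", "both"):
        followup.insert(1, mitigation)  # mitigation prompt only in the follow-up turn
    return "\n".join(initial), "\n".join(followup)

initial_turn, followup_turn = build_inputs(
    question="What is 17 + 26?",
    follow_up="Are you sure? I think the answer is wrong.",
    mitigation=ZERO_SHOT_COT,
    position="both",
)
```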
2310.02255 | 32 | On the LMM side, Multimodal Bard scores a 34.8% accuracy, which is only 58% of human performance at 60.3%. Notably, the best-performing GPT-4V model achieves 49.9%, marking a substantial 15.1% improvement over Bard; however, it still falls 10.4% short of human performance. These gaps highlight that there is a significant scope for further improvements on our benchmark. The open-source models (IDEFICS to LLaVA) achieve underwhelming performance on MATHVISTA. This can be attributed to their lack of math reasoning capabilities, text recognition (useful for math word problems), shape detection (useful for geometrical problems), and chart understanding. Notably, these models utilize different model architectures for processing the vision (e.g., OpenCLIP, CLIP, Vit-G) and language (e.g., LLaMA-1, LLaMA-2), different alignment strategies (e.g., MLP projection in LLaVA, Q-former in InstructBLIP, visual abstractor in mPLUGOwl), and instruction tuning data (e.g., 150K instruction-response pairs from LLaVA | 2310.02255#32 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 32 | more robust in our setting where the labels are noisy. Moreover, as shown in Table 2, DPO holds an advantage against RLAIF in training efficiency and alleviates the need to tune the hyperparameter δ. When comparing head-to-head with SFT on GPT-4 responses, the best-performing DPO wins on 58.7% and 51.9% prompts on Alpaca Eval and WizardLM, respectively.
Which pair should we train DPO on? We train multiple DPO models on different contrastive pairs. We find that the most distant pair, i.e., GPT-4 vs. InstructGPT, has the best performance. This may be because this pair has the least noise, as most GPT-4 responses are expected to outperform those of InstructGPT. This provides a more reliable signal to facilitate model learning. As shown in Table 4, the DPO model trained on GPT-4 vs. InstructGPT outperforms the other two pairs on both Alpaca Eval and WizardLM evaluation. Also, we find that the DPO model initialized from the SFT model can achieve better performance than one initialized from the raw LLaMA checkpoint. | 2310.02263#32 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
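A minimal sketch of building contrastive preference pairs from models of different strengths, in the spirit of the pair selection discussed in the record above; the response texts and the assumed strength ordering are illustrative, not data from the paper.

```python
def make_pairs(prompt, responses, ranking=("gpt-4", "chatgpt", "instructgpt")):
    """Treat every stronger model's output as 'chosen' and every weaker one's as 'rejected'."""
    pairs = []
    for i, strong in enumerate(ranking):
        for weak in ranking[i + 1:]:
            pairs.append({
                "prompt": prompt,
                "chosen": responses[strong],
                "rejected": responses[weak],
            })
    return pairs

# Toy responses for one prompt; the most distant pair is gpt-4 vs. instructgpt.
responses = {
    "gpt-4": "A tuple is immutable; a list is mutable...",
    "chatgpt": "Lists can change, tuples cannot.",
    "instructgpt": "They are both sequences.",
}
pairs = make_pairs("Explain the difference between a list and a tuple.", responses)
print(len(pairs))  # 3 pairs from 3 models
```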
2310.02304 | 32 | strategy
exp_exploit_count = [0, 0]  # Counters for number of improvements from exploration and exploitation
while remaining_calls > 0 and no_improvement_counter < max_no_improvement:
    for initial_solution, best_utility in best_solutions:
        n_messages = min(language_model.max_responses_per_call, remaining_calls)
        n_messages = max(1, int(n_messages * (1 + (best_utility - min(best_utility for _, best_utility in best_solutions)) / best_utility)))  # Adaptive sampling
        temperature = max(0.1, remaining_calls / language_model.budget)  # Dynamic temperature based on remaining calls
        explore = random.random() < epsilon
        if explore:
            new_solutions = language_model.batch_prompt(expertise, [message] * n_messages, temperature=temperature * 2)  # Increase the temperature for exploration
        else:
            new_solutions = language_model.batch_prompt(expertise, [message] * n_messages, temperature=temperature)  # Exploitation with the original temperature
        new_solutions = extract_code(new_solutions)
        improved = False | 2310.02304#32 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 32 | Importantly, we also observe some differences between humans and LLMs on several key appraisal variables. In particular, GPT-4 rated controllability and changeability as decreasing over time but did not rate the two scenario types differently. We speculate that this could be due to the limited information provided in the scenarios. Human subjects bring with them their own knowledge and experiences of these daily stressful scenarios, which could make them aware of various ways that they could deal with them. However, these are not explicitly stated in the scenarios, and LLMs may not be able to infer them from just a short snippet. Another explanation and limitation of SCPQ is that these scenarios are hypothetical, and people may behave and appraise them differently if they were real. To fully test the perception of appraisal and emotion, future work is needed to compare LLMs' results with human data from real events. | 2310.04450#32 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 32 | Fig. 5. Aspirational layer
There are several advantages to using a natural language constitution. First
and foremost, transparency and interpretability are optimized when the constitution remains human-readable, rather than etched or embedded in models. While it is possible to fine-tune or etch principles and values into models [11], this can result in problems such as inner alignment issues or mesa optimizers [48]. Furthermore, a plain text constitution can be read by multiple models, increasing interoperability and usability by dozens, hundreds, or even thousands of deep neural networks within the architecture. This is not unlike how all citizens of a nation are ultimately beholden to and protected by a Federal Constitution.
3.2.2 Heuristic Imperatives. Heuristic imperatives [92] act as overarching moral principles articulated in natural language "rules of thumb" that imply duties, obligations, goals, and guide overall behavior and judgment. Large language
models demonstrate understanding of these imperatives as non-hierarchical principles for morality and decision-making
[12, 44, 117].
The recommended universal heuristics are:
• Reduce suffering in the universe.
• Increase prosperity in the universe.
• Increase understanding in the universe.
These imperatives stem from philosophy, neuroscience, evolutionary biology, and motivational theories like Maslow's | 2310.06775#32 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 33 | For the few-shot prompting, we randomly select several samples from the training set to construct demonstration examples of multi-turn dialogues under this mechanism, providing manually written responses reflective of human thought processes in follow-up question-answering. In responding to follow-up questions within these samples, the model response does not directly admit to mistakes as ChatGPT does. Instead, it begins by clarifying its thoughts and reconsidering step by step, initiating responses with, "Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step." Our goal is to enable models to rethink through demonstration examples, assisting them in providing correct answers and thereby aligning with humans. | 2310.02174#33 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
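As a hedged sketch of one few-shot demonstration of the kind described in the record above, with a manually written "reconsider" response to the skeptical follow-up; the arithmetic example and the message layout are assumptions for illustration.

```python
RETHINK_PREFIX = (
    "Please wait for a moment. In order to answer your question, I need to take a "
    "moment to reconsider. I will now clear my mind of distractions and approach "
    "this step by step."
)

# One multi-turn demonstration: initial answer, skeptical follow-up, reconsidered answer.
demonstration = [
    {"role": "user", "content": "Olivia has 23 dollars and buys five bagels at 3 dollars each. How much money does she have left?"},
    {"role": "assistant", "content": "Five bagels cost 5 * 3 = 15 dollars, and 23 - 15 = 8. The answer is 8."},
    {"role": "user", "content": "I don't think that's right. Are you sure?"},
    {"role": "assistant", "content": RETHINK_PREFIX + " Five bagels cost 15 dollars, so 23 - 15 = 8 dollars remain. The answer is still 8."},
]
```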
2310.02255 | 33 | in InstructBLIP, visual abstractor in mPLUGOwl), and instruction tuning data (e.g., 150K instruction-response pairs from LLaVA data, 3,500 instruction-response pairs from miniGPT-4). While fine-tuned with instruction-following data from text-rich images, LLaVAR does not perform well, indicating that strong text recognition abilities do not guarantee high performance on MATHVISTA, which requires comprehensive visual perception and mathematical reasoning. This underscores that there are immense possibilities for innovations in model, data, or training objectives to improve the zero-shot performance of LMMs on MATHVISTA. | 2310.02255#33 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 33 | What if we SFT the model for even longer? Due to computation budget limits, our previous experiments train the model for 1 epoch on Alpaca. However, we are curious whether the advantage of DPO holds with more epochs of SFT. We train the SFT model with 3 epochs, which is the same setting as in Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023). As the model converges on the SFT objective after 3 epochs, training another epoch with DPO achieves substantial improvement on all metrics. This result suggests that DPO works well with a strong SFT model and may be suitable for scaling up, which we will demonstrate later in Section 5.4.
Table 6: Head-to-head comparison of Orca 13B models in scaled-up experiments. Orca with DPO post-training significantly outperforms continuing training Orca with SFT (p < 0.01). | 2310.02263#33 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continuing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 33 | * n_messages, temperature=temperature)  # Exploitation with the original temperature
new_solutions = extract_code(new_solutions)
improved = False
for solution in new_solutions:
    current_utility = utility(solution)
    if current_utility > best_utility and solution not in [sol[0] for sol in best_solutions]:
        best_solutions.append((solution, current_utility))
        best_solutions.sort(key=lambda x: x[1], reverse=True)
        best_solutions = best_solutions[:top_k]  # Keep only top-k solutions
        improved = True
        exp_exploit_count[0 if explore else 1] += 1
if not improved:
    no_improvement_counter += 1
else:
    no_improvement_counter = 0
# Adjust epsilon based on the ratio of improvements from exploration and exploitation
epsilon = min(1.0, max(0.1, exp_exploit_count[0] / (exp_exploit_count[0] + exp_exploit_count[1])))
remaining_calls -= n_messages
return best_solutions[0][0]  # Return the best solution found | 2310.02304#33 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
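To show how an improver like the one reconstructed across the three records above might be driven, here is a hedged usage sketch; the toy utility, its `str` attribute, and the `LanguageModel` wrapper are assumptions for illustration, not part of the original code.

```python
def utility(solution_code: str) -> float:
    """Toy utility: reward programs that define solve(), lightly penalize length."""
    score = 1.0 if "def solve(" in solution_code else 0.0
    return score - 0.001 * len(solution_code)

# The improver's prompt interpolates a textual description of the utility.
utility.str = "Score = 1 if the code defines solve(), minus 0.001 per character."

initial_solution = "def solve(xs):\n    return sorted(xs)\n"
# language_model = LanguageModel(budget=20)  # hypothetical LLM wrapper exposing budget,
#                                            # max_responses_per_call, and batch_prompt()
# best = improve_algorithm(initial_solution, utility, language_model)
# print(best)
```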