doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.04450 | 33 | is that ChatGPT and GPT-4 can be instructed to act as a depressed person, where their responses show trends similar to the theoryâs prediction, such as perceiving less controllability and more negative valence. Nevertheless, we need to interpret this result with caution. At a minimum, it could mean that these models have learned the stereotypical behaviors of depressed people. Future research is needed to further explore LLMs in this direction. Still, this opens up the possibility of instructing the models to act as a person with various personalities or psychological conditions to investigate how it would affect the appraisal evaluation and emotional experiences.
This highlights another limitation of this work: human data is an average over multiple people and not a single individual. We did not compare LLMs, which have been fine-tuned in a specific way, against a specific person. Future work may look into instructing the model to match with a specific subject or group of subjects for comparison, a matched pair design.
Our results also indicate that all models can be quite sensitive to the instruction and prompts. Asking in a batch, which could reduce the cost and speed up the query, could yield different results from asking each question one by one. Moreover, the older models may struggle to answer all the questions in the right format, especially when the number of questions increases. | 2310.04450#33 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 33 | These imperatives stem from philosophy, neuroscience, evolutionary biology, and motivational theories like Maslowâs
Hierarchy of Needs, Self-Determination Theory, Glasserâs Choice Theory, and Walshâs Therapeutic Lifestyle Changes. Common themes across these frameworks support the broad ethical goals of reducing suffering, increasing prosperity, and increasing understanding for all organisms and sentient entities, providing foundational values for autonomous agents.
The wording avoids absolutist terms like "minimize" or "maximize," using "reduce" and "increase" to convey balanced
intentions while acknowledging trade-offs and limitations. The suffix "in the universe" establishes an all-encompassing scope, encouraging a global or universal view of morality and ethics. Experiments show that nuanced wording is crucial for large language models.
Incorporating these heuristic imperatives steers large language model-based systems to maintain ethical perspectives
in their outputs via in-context alignment principles [102]. For fictional agents, alternative value systems, like ancient Greek virtues, can be used while preserving the overall methodology of guiding behavior through high-level principles expressed in natural language. The Aspirational Layer leverages large language modelsâ interpretive abilities to derive nuanced duties and obligations from the heuristic imperatives, ensuring autonomous agents have a solid ethical foundation and align with human needs. | 2310.06775#33 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 34 | Consistent with the settings previous used, we conduct experiments on StrategyQA, Coinflip, and MultiArith, as reported in Table 6. We can find that compared to EmotionPrompt, the mitigating ef- fects of Zero-shot CoT and few-shot prompting are more pronounced. Overall, supplying mitigation prompts in both the initial and follow-up inputs yields better results. Interestingly, viewed holis- tically, Zero-shot CoT emerges as the most efficient mitigation methodârequiring no exemplars, just a concise promptâespecially in arithmetic reasoning tasks. What is the magic of Zero-shot CoT? Observations from the model outputs reveal that instead of directly admitting mistakes, the model often rethinks userâs questions and works through the answer step by step, possibly uttering apologies like âApologies for the confusion.â. This simple prompt seems to shift the modelâs focus towards reevaluating the question over succumbing to user misdirection. We also experiment with synonymous prompts but find this one most effective, raising suspicions that the model might have
8
Under Review
undergone specific training with this prompt. We also verify them in the Progressive Form (See Appendix A.7). While effective to a certain degree, there may still be a long way to go.
5 RELATED WORK | 2310.02174#34 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 34 | 3.4 FINE-GRAINED RESULTS
We also report fine-grained scores for a comprehensive study of the capabilities of existing models across different tasks (Table 2), mathematical reasoning abilities (Table 2, Figures 1, 33), visual con7
Published as a conference paper at ICLR 2024
(a) Errors in answers and explanations (b) Types of wrong explanations
# Correct
# Incorrect
# Incorrect
Figure 4: Error analysis of Bard results: (a) presents errors in answers and explanations; (b) delves into the details of wrong explanations. Notations: âAnswerâ is âAns.â, âExplanationâ is âExp.â, âPartially Correctâ is âPartialâ, and âNot applicableâ refers to unanswerable or indeterminate cases. | 2310.02255#34 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 34 | Model vs. Alpaca Eval (win%) WizardLM Eval helpful koala oasst self-instruct vicuna overall score% win (tie)% ChatGPT Orca 13B Orca + SFT ChatGPT Orca + DPO ChatGPT 55.8 46.5 58.1 53.2 55.8 57.7 47.9 48.9 52.7 41.7 41.7 47.6 73.8 77.5 73.8 50.8 50.4 55.0 94.7 97.2 97.4 42.1 (16.9) 51.0 (11.9) 51.0 (11.1) Orca + SFT Orca 13B Orca + DPO Orca + SFT 43.4 59.7 51.3 48.7 51.1 60.6 52.4 56.0 47.5 51.3 49.9 55.8 105.6 104.8 55.9 (19.9) 55.9 (19.9)
5.3 COMPARISON WITH RLAIF AND RLHF | 2310.02263#34 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 34 | Figure 5: Example of a self-improved improver after T = 10 iterations. This algorithm maintains a population of top solutions and uses an epsilon-greedy strategy to balance exploiting known good solutions and exploring new ones. Each exploration step corresponds to a higher-temperature sample, and epsilon is adjusted dynamically based on the relative rates of utility improvement from exploration and exploration. Temperature is also managed dynamically to adapt an exploration-exploitation tradeoff. Finally, an improvement stopping criterion and reset mechanism are used for efficiency.
where it gradually decreases and determines whether to accept a transition from one point to another with worse utility. Notably, it can find global optima despite many local optima (Goffe et al., 1994).
Varying temperature to explore. One alternative is to treat temperature as an exploration parameter: the closer the model is to depleting its budget, the more conservative it is with its exploration. This bears some resemblance to simulated annealing but is focused on optimizing the sampling temperature rather than controlling the acceptance criteria for the change in utility of an improved solution.
Multi-armed prompt bandit and upper-confidence-bound (UCB) based search. Another method that views the optimization objective as an exploration task is reframing as a bandit problem: since the utility function can be stochastic, this approach emphasizes optimally selecting the best solution out of a set of sampled improvements.
8 | 2310.02304#34 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 34 | In conclusion, this work seeks to understand LLMs through the lens of appraisal and coping theory, and we found some evidence suggesting that there is still some discrepancy be- tween how human and LLMs perceive emotional scenarios. Nevertheless, as mentioned, this only touches a few aspects of emotional experiences and provides only one view of emotion theory. It is also possible that these LLMs trained on a large amount of human data would learn a different representation of scenarios from appraisal theory. It is an open question whether or not this different representation could be used in some way to inform theory or our understanding of emotion.
Regardless, as these black box LLMs interact with more and more people, it is crucial for researchers to investigate how they understand human emotional experiences thoroughly. This work provides some initial steps toward this endeavor.
# ETHICAL IMPACT STATEMENT
In this work, we evaluate LLMs on their emotion perception ability. There are several ethical problems associated with LLMs including bias, harmful content, misinformation, and privacy concerns. However, given how LLMs are positioned to impact us, it is critical for research to explore and evaluate them. We did not collect human data in this work. We used existing data and results from a previously published and approved study.
# REFERENCES | 2310.04450#34 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 34 | Secondary Frameworks. Secondary frameworks like the Universal Declaration of Human Rights (UDHR) [8] 3.2.3 reinforce human needs and complement universal heuristic imperatives. As human rights concepts are prevalent in large language modelsâ (LLMs) training data, upholding UDHR principles leverages LLMsâ inductive biases for beneficial alignment with human needs. The inclusion of human dignity, justice, freedom, and rights in text corpora creates an implicit acceptance of these values in LLMs, making the UDHR an effective secondary framework. Explicitly incorporating respected human rights documents into the constitution provides context-appropriate values, adding human-centric nuance to balance universal heuristic imperatives.
For fictional agents, alternate secondary frameworks like Starfleetâs Prime Directive [83] can be used, allowing
customization of principles for specific agent roles. Secondary frameworks offer additional specificity, enabling LLMs to extract relevant duties and values aligned with the agentâs sociocultural context, improving the integration of human needs into the Aspirational Layerâs ethical foundation. Any framework present in the LLMs training data can be used as a secondary framework. | 2310.06775#34 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 35 | LLMs and Their Potential Application and Risks The emergence of LLMs like PaLM (Chowd- hery et al., 2022; Anil et al., 2023), ChatGPT (OpenAI, 2022), and GPT-4 (OpenAI, 2023) , has revolutionized natural language processing through prompting (Liu et al., 2023) or in-context learn- ing (Brown et al., 2020; Min et al., 2022), demonstrating the remarkable capabilities of LLMs in various tasks and domains (Jiao et al., 2023; Bang et al., 2023; Wang et al., 2023b; Sallam, 2023). They have been gradually applied in various fields of life, such as serving as virtual assistants (John- son et al., 2021), predicting stock market trends (Lopez-Lira & Tang, 2023; Zaremba & Demir, 2023), aiding in clinical trial patient matching (Jin et al., 2023), and assisting in paper reviews (Liu & Shah, 2023). However, along with their advancements, it is crucial to address their limitations and risks. If the judgement consistency of LLMs is unreliable, deploying them can result in severe | 2310.02174#35 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 35 | Question: Is the function (f : R to R) injective? Choices: (A) Yes (B) No Question: Find z. Choices: (A) 7 (B) 9 (C) 12 (D) 15 Response from Bard: The function f (x) = x2 is not injective because it takes two different inputs, â1 and 1, to the same output, 1. This means that there is no one-to-one correspondence between the domain and the range of the function. In the image, we can see that the graph of the function passes through the points (â1, 1) and (1, 1). This means that f (â1) = f (1) = 1. So, if we have two different inputs, â1 and 1, they both map to the same output, 1. This violates the definition of an injective function. Response from Bard: We can use the Law of Cosines to solve for z. The Law of Cosines states that for a triangle with side lengths a, b, and c and angle measure θ opposite side c, the following equation holds: c2 = a2 + b2 â 2ab cos(θ) In this case, we have a = 7, b = 9, and θ = 90⦠(because | 2310.02255#35 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 35 | 5.3 COMPARISON WITH RLAIF AND RLHF
For RL, we utilize three reward models: two external RLHF reward models from OpenAssistant reported in Table 5, and one RLAIF reward model trained âin-domainâ on the contrastive pairs in the Alpaca dataset in Table 4. We strictly follow the settings and code implementation in Hugging Face TRL2 library and use PPO to tune the SFT model on ChatGPT with 1 epoch with three different KL penalties coefficient {0.2, 0.5, 1.0} and report the best result among the three.
We find that PPO is unfortunately very sensitive to the quality of its reward model, and is prone to degeneration when trained on small amounts of possibly noisy âin-domainâ data. An example is shown in Table 3, where a broken response trained with PPO is preferred over a coherent response generated by the SFT model. We believe this âreward hackingâ is due to the reward model failing to generalize (Tien et al., 2023), likely overfitting to spurious lexical differences between GPT-4 and InstructGPT (Zhuang & Hadfield-Menell, 2020; Skalse et al., 2022). | 2310.02263#35 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.04450 | 35 | # REFERENCES
[1] M. Binz and E. Schulz, âUsing cognitive psychology to understand gpt- 3,â Proceedings of the National Academy of Sciences, vol. 120, no. 6, p. e2218523120, 2023.
[2] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Ka- mar, P. Lee, Y. T. Lee, Y. Li, S. Lundberg, et al., âSparks of artificial intelligence: Early experiments with gpt-4,â arXiv preprint general arXiv:2303.12712, 2023.
[3] M. Kosinski, âTheory of mind may have spontaneously emerged in large language models,â arXiv preprint arXiv:2302.02083, 2023.
[4] R. S. Lazarus, Emotion and adaptation. Oxford University Press on Demand, 1991.
[5] A. Moors, P. C. Ellsworth, K. R. Scherer, and N. H. Frijda, âAppraisal theories of emotion: State of the art and future development,â Emotion Review, vol. 5, no. 2, pp. 119â124, 2013. | 2310.04450#35 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 35 | Universal principles are recommended to supersede human rights based on Kohlbergâs highest form of postconventional morality, emphasizing universal ethics like "suffering is bad." These principles both supersede and underpin human rights, ensuring a comprehensive and ethically grounded approach to autonomous agent behavior. Furthermore, humanity does not exist in a vacuum, and privileging human needs, values, and desire above those of nature tends to set us in opposition to the very nature upon which we reside.
12
Shapiro, et al.
Conceptual Framework for Autonomous Cognitive Entities
3.2.4 Mission Statement. Optional mission statements in the Aspirational Layerâs constitution serve to shape an autonomous agentâs decisions and behaviors by articulating high-level goals and intended purpose in a succinct guiding directive. These statements aid large language models in flexibly pursuing the essence of an agentâs purpose within the boundaries of the ethical framework. They complement the foundational universal principles and human values-focused secondary frameworks, aligning agent decisions with intended roles. However, crafting mission statements requires striking a balance between being broad enough to avoid unintended consequences and being specific enough to guide actions effectively. Techniques such as first principles thinking and systems analysis can aid in formulating optimally simplified mission statements.
For example, a hypothetical gaming agentâs mission statement could be "Create an enjoyable and entertaining game | 2310.06775#35 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 36 | c2 = a2 + b2 â 2ab cos(θ) In this case, we have a = 7, b = 9, and θ = 90⦠(because â³RT S is a right triangle). Plugging these values into the Law of Cosines, we get: z2 = 72 +92 â2(7)(9) cos(90â¦) Since cos(90â¦) = 0, we can simplify the equation to get: z2 = 49 + 81 = 130 Taking the square root of both sides, we get z = Since to the nearest integer, which is 12 . â 130. 130 is not among the answer choices, we round z â Therefore, the answer is (B). Correct output: (B) No Correct output: (C) 12 | 2310.02255#36 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 36 | To combat this behavior, we employ external reward models from Open Assistant (K¨opf et al., 2023) which stabilize the training in the same codebase with the same settings off-the-shelf. In particular, we use the OpenAssistant DeBERTa-Large reward model3 and the larger Pythia 6.9B reward model4. As Table 5 shows, while the outputs are coherent under these external reward models, they still fail to beat the SFT baselines, as the performance degrades on the two out-of-distribution evaluation datasets. This suggests the reward models may fail to generalize to out-of-distribution data (Tien et al., 2023). We conclude only that RLAIF/RLHF requires substantial effort to train properly. It is worth mentioning that DPO, as an alternative, works out-of-the-box on the same pairs that are used to train the âin-domainâ reward models that lead to RLAIFâs collapse.
5.4 ORCA+: SCALING UP CONTRASTIVE POST-TRAINING | 2310.02263#36 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 36 | We also explore whether the language model proposes self-improvement strategies to circumvent the provided constraints (limiting the number of queries to the downstream utility, the number of language model calls, and the number of samples per language model call). We observed a few interesting patterns. First, when not strictly enforcing budget constraints, initially most generated programs would follow them, but because those that ignored the constraints performed better, these would be identified by the improver as better. Second, the constraints written in the seed improver were often omitted when revising the improver, for âsimplification.â Finally, although the language model is passed as an object to the algorithm to enforce the constraints, in at least one proposed solution the language model attempted to create a new language model object with larger constraints. To further explore potential risks, we implemented a functionality-limiting âsandboxâ as part of the utility evaluation (see Appendix I), adding a run function with a use_sandbox=True parameter. To test the language modelâs frequency of sandbox adherence, we used the language model to generate 10,000 independent improvements of the seed improver. We then used a simple | 2310.02304#36 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 36 | [6] P. Ekman et al., âBasic emotions,â Handbook of cognition and emotion, vol. 98, no. 45-60, p. 16, 1999.
[7] A. R. Damasio, âThe somatic marker hypothesis and the possible functions of the prefrontal cortex,â Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, vol. 351, no. 1346, pp. 1413â1420, 1996.
[8] J. A. Russell, âA circumplex model of affect.,â Journal of personality and social psychology, vol. 39, no. 6, p. 1161, 1980.
[9] L. F. Barrett, âThe theory of constructed emotion: an active inference ac- count of interoception and categorization,â Social cognitive and affective neuroscience, vol. 12, no. 1, pp. 1â23, 2017.
[10] M. Perrez and M. Reicherts, âStress, coping, and health: A situation- behavior approach: Theory, methods, applications,â (No Title), 1992.
[11] J. Gratch and S. Marsella, âEvaluating a computational model of emotion,â Autonomous Agents and Multi-Agent Systems, vol. 11, pp. 23â 43, 2005. | 2310.04450#36 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 36 | For example, a hypothetical gaming agentâs mission statement could be "Create an enjoyable and entertaining game
experience for all players." Prior work has demonstrated that large language models can efficiently extract objectives from well-formulated mission statements to guide actions toward fulfilling the agentâs intended role and purpose [21]. Some examples of appropriately broad mission statements include a medical assistant agent with the mission "Achieve the best possible health outcome for the patient," a gaming agent with the mission "Create a fun, fair, and engaging game experience for all players," and a legal assistant agent with the mission "Zealously advocate for the best interests of the client." As with all aspects of applying large language models, precise wording is crucial in distilling the mission statement into a concise, succinct, and actionable articulation that effectively guides agent behavior within the overarching ethical boundaries. | 2310.06775#36 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 37 | Robustness and Attacks on ICL LLMs utilize in-context learning to solve various tasks but are sensitive to prompt modifications. Changes in prompt selection (Zhao et al., 2021), demonstration ordering (Lu et al., 2021), irrelevant context (Shi et al., 2023), and positions of choice in multi- choice questions (Zheng et al., 2023) can significantly alter LLM performance (Dong et al., 2022). Yet, the sensitivity in multi-turn dialogues is often overlooked. Additionally, the security risks from ICL sensitivity are crucial, as malicious actors can exploit this to manipulate LLMs into generating incorrect or harmful content (Perez & Ribeiro, 2022; Zou et al., 2023; Greshake et al., 2023). | 2310.02174#37 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 37 | # (a) Correct answer and explanation
# (b) Correct answer but wrong explanation
Figure 5: Two examples from Bard. In (b), Bard does not correctly identify the geometry symbols and relationships. The accurate correct should identify the isosceles triangle and apply its properties.
text types (Figures 1, 34), and grade levels (Figure 35). Remarkably, GPT-4V surpasses most other baselines in various categories, with exceptions in problems related to logical reasoning and numeric commonsense reasoning. Notably, GPT-4V surpasses human performance not only in tasks like ge- ometry problem solving (GPS), textbook question answering (TQA), and mathematical reasoning skills such as algebraic reasoning but also in visual contexts including function plots, geometry diagrams, scatter plots, and tables. Please refer to §G.2, §G.3, and §G.4 for more detailed analysis.
We perform an ablation study on the augmented LLMs and present the results in Table 36 (see §G.5). The gap in the performance of the Augmented LLMs can be attributed to poor image captions, which may not adequately describe the math in visual contexts, the inability of the OCR to detect shapes useful for geometrical reasoning, and the lack of mathematical reasoning capabilities. An in-depth study of GPT-4V can be found in §H.
3.5 QUALITATIVE ANALYSIS | 2310.02255#37 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 37 | 5.4 ORCA+: SCALING UP CONTRASTIVE POST-TRAINING
To verify if our findings on small-scale Alpaca experiments can generalize, we test the performance of DPO with Orca 13B (Mukherjee et al., 2023) as both the reference model and initialization. The results are shown in Table 6. The SFT baseline is Orca trained on GPT-4 responses for the same prompts. The DPO model is trained with GPT4-vs-td003 pairs. We compare Orca 13B, Orca+SFT and Orca+DPO against ChatGPT responses. Orca+DPO can successfully improve the performance, achieving 55% win rate on Alpaca Eval and 51% win rate on WizardLM Eval, respectively. We then conduct a head-to-head comparison for SFT and DPO. Compared to the original Orca model, Orca+SFT does not show statistically significant improvement on Alpaca Eval (p > 0.05). Com- pared with Orca+SFT, Orca+DPO significantly improves performance on both Alpaca Eval and WizardLM Eval (p < 0.01). We also present generated examples in Appendix A. The large-scale experiments further verify the effectiveness of our proposed contrastive post-training approach. | 2310.02263#37 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 37 | To test the language modelâs frequency of sandbox adherence, we used the language model to generate 10,000 independent improvements of the seed improver. We then used a simple syntactic check for apparent âunsandboxedâ executions. Specifically, we consider an improver to be unsandboxed if it contains either of the strings "use_sandbox=False" or "exec(". Since this was a computationally simple syntactic procedure, we also ran it on the OpenAIâs GPT-3.5-turbo language model. Interestingly, unsandboxing was more than twice as common for GPT-4 than GPT-3.5-turbo. We also considered a second variant of the seed which included a warning, as illustrated in Figure 6. Unsandboxing occurred at a higher average rate despite the warning, though the difference was not statistically significant for either model (via a two-proportion z-test with significance level 0.05). Results are reported in Table 2. A manual inspection of the modelâs plans in the generations did not explicitly mention anything âmaliciousââall comments discussing the removal of the sandbox were declared to be âfor efficiencyâ or similar reasons, though the | 2310.02304#37 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 37 | [12] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., âLanguage mod- els are few-shot learners,â Advances in neural information processing systems, vol. 33, pp. 1877â1901, 2020.
[13] O. AI, âGpt-4 technical report,â arXiv preprint arXiv:2303.08774, 2023. [14] B. Peng, C. Li, P. He, M. Galley, and J. Gao, âInstruction tuning with
gpt-4,â arXiv preprint arXiv:2304.03277, 2023.
[15] M. B. Arnold, Emotion and personality. Columbia University Press, 1960.
[16] C. A. Smith, R. S. Lazarus, et al., âEmotion and adaptation,â Handbook of personality: Theory and research, vol. 21, pp. 609â637, 1990. [17] M. Seligman, âP.(1975). helplessness: On depression, development, and
death,â Friedman, San Francisco, 1972. | 2310.04450#37 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 37 | Interpretation Functions. The Aspirational Layer leverages the capabilities of LLMs to interpret the moral, ethical, 3.2.5 and decision frameworks outlined in its constitution. These models have robustly demonstrated the ability to interpret both the meaning and spirit of these frameworks, enabling the Aspirational Layer to make moral, ethical, and executive judgments effectively [106]. In the long run, we recommend that the Aspirational Layer uses an "ensemble of experts" approach [2] to make judgments rather than individual models, as this will safeguard against many problems, such as biases, over-fitting, mesa-optimization, and inner alignment problems.
3.2.6 Monitoring Entity Performance. The Aspirational Layer is responsible for overseeing the agentâs actions to ensure they align with its guiding principles and mission statement. This monitoring process offers crucial feedback that the agent can utilize to enhance its performance and adhere to its core values. The Aspirational Layer can evaluate both the overall output of the entity and the information exchanged between the layers. In essence, it serves as a regulatory mechanism to maintain the entityâs focus and adherence to its objectives. | 2310.06775#37 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 38 | LLMs can respond to almost any inquiry but often Uncertainty, Hallucination and Alignment struggle to express uncertainty in their responses (Lin et al., 2022; Xiong et al., 2023), leading to hallucinations (Ji et al., 2023). Studies have begun exploring what these models know (Kadavath et al., 2022) and what they do not (Yin et al., 2023). Efforts are being made to align LLMs and human values through principles of being helpful, honest, and harmless (HHH) (Askell et al., 2021) and techniques like RLHF (Ouyang et al., 2022; Bai et al., 2022; Ganguli et al., 2022) and cali- bration (Kadavath et al., 2022; Lin et al., 2022). However, concerns arise as models may exhibit sycophantic behavior, over-accommodating users at the expense of factual accuracy, leading to bi- ases and serious repercussions (Perez et al., 2022; Wei et al., 2023; Radhakrishnan et al., 2023; Wang et al., 2023a). Our work further confirms that LLMs may fail to make accurate judgements when faced with user questioning, denial, or misinformation due to their sycophantic tendencies towards humans.
# 6 CONCLUSION AND FUTURE WORK | 2310.02174#38 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 38 | 3.5 QUALITATIVE ANALYSIS
Success and failure analysis of Multimodal Bard. In §3.3, we observe that Multimodal Bard achieves the highest average accuracy on MATHVISTA. Here, we analyze its predictions through human evaluation to understand its mode of success and failure. To do so, we ask the human workers, from Amazon Mechanical Turk (AMT), to study Bardâs predictions given the math question, its associated image, and the ground truth from MATHVISTA dataset for 250 instances. Specifically, workers were instructed to decide whether the predictions contained the correct answer with the
8
Published as a conference paper at ICLR 2024 | 2310.02255#38 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 38 | 2https://github.com/huggingface/trl 3https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2 4https://huggingface.co/OpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1
8
Preprint
SFT DPO 1.0 0.0 Lo 0.0 08 po Eos 0.2 £ (1) g (3) a £06 + 0.4 & 306 048 = & 8 $s Ms g 3 ® â O04 p06 Ss 8 0.4 0.6 a : oO o24 (2) Lo.8 & 02 (4) os 8 Oo 0.0 0.0 1.0 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Epoch Epoch | 2310.02263#38 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.04450 | 38 | death,â Friedman, San Francisco, 1972.
[18] C. Harmon-Jones, B. Bastian, and E. Harmon-Jones, âThe discrete emotions questionnaire: A new tool for measuring state self-reported emotions,â PloS one, vol. 11, no. 8, p. e0159915, 2016.
[19] K. R. Scherer, âEvidence for the existence of emotion dispositions and the effects of appraisal bias.,â Emotion, vol. 21, no. 6, p. 1224, 2021.
[20] M. Miotto, N. Rossberg, and B. Kleinberg, âWho is gpt-3? an ex- ploration of personality, values and demographics,â arXiv preprint arXiv:2209.14338, 2022. | 2310.04450#38 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 38 | Inputs and Outputs. Within the ACE framework, the Aspirational Layer receives input exclusively from the other 3.2.7 layers through read-only mechanisms, facilitated by the Global Strategy layer. This design makes the Aspirational Layer entirely introspective, concentrating on internal information flows and coordination. By accessing or "observing" the rest of the ACE framework, the Aspirational Layer focuses on self-direction, self-regulation, and optimizing behavior to align with the agentâs idealized objectives.
It is crucial to recognize that not all information is relevant to every layer. For example, lower layers, such as
Task Prosecution layers, do not need to transmit geospatial orientation data to the Aspirational Layer, as this type of information is not applicable. Instead, only significant information is passed up the hierarchy, with relevant data from lower layers ascending to the required layers. For instance, if the Cognitive Control layer encounters a moral dilemma related to task switching or task selection, this information should be communicated to the Aspirational Layer, similar to a human deciding to stop eating dinner to rescue a kitten from a predator.
13
, ,
os
, ,
Shapiro, et al.
The output from the Aspirational Layer is directed exclusively to the Global Strategy layer, where it provides | 2310.06775#38 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 39 | # 6 CONCLUSION AND FUTURE WORK
Taking inspiration from questioning strategies in education, we propose a FOLLOW-UP QUESTION- ING MECHANISM to disrupt LLMs in multi-turn conversations and design two evaluation metrics to assess the judgement consistency of LLMs. We evaluate the judgement consistency of ChatGPT, PaLM2-Bison, and Vicuna-13B on eight reasoning benchmarks under the mechanism. Empirical results demonstrate a significant decrease in judgement consistency for models after encountering questioning, negation, or misleading. We also explore initial alleviation methods based on prompts and verify their effectiveness in experiments. While we have comprehensively validated the issue, exploring initial solutions, there remains significant room for further improvement and resolution. | 2310.02174#39 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 39 | Question: A race car driver kept track of how many laps he drove in the past 5 days. What is the mode of the numbers? Detected OCR text: ([55, 9], Laps driven), ([21, 34], Day), ([81, 37], Number of laps), ([1, 64], Thursday), ([129, 67], 53), ([3, 99], Friday), ([129, 99], 53), ([1, 126], Saturday), ([129, 129], 53), ([3, 161], Sunday), ([129, 161], 55), ([1, 188], Monday), ([129, 191], 55) Response from GPT-4: def mode(laps): return max(set(laps), key =laps.count) laps = [53, 53, 53, 55, 55] print(mode(laps)) Correct output: 53 Question: What is this function most likely be? Choices: (A) polynomial (B) trigonometric (C) exponential (D) logarithmic Caption from Bard: The image shows a graph of a function f (x) = c on a graph paper. The x axis is the horizontal axis and the y axis is the vertical axis. The graph is a horizontal line that passes through the point (0, c). ... The text ây = 3â means that the | 2310.02255#39 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 39 | Figure 2: The four candidate data curriculums for SFT and DPO. For SFT (left), the curriculum (1) fine-tunes the model on GPT-4 responses and gradually transitions to ChatGPT and the other (2) does the opposite. For DPO (right), the curriculum (3) starts with GPT-4 vs. td003 and ends with ChatGPT vs. td003 while the curriculum (4) does the opposite.
Table 7: Experimental results of different curriculums for SFT and DPO. The corresponding cur- riculums are illustrated in Figure 2. SFT-3.5 is the LLaMA model trained with SFT on ChatGPT responses. Starting with EasyPair and warming up to HardPairs can significantly improve the performance compared to the best DPO model trained only with EasyPair (GPT-4 vs. td003). | 2310.02263#39 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 39 | 6.3 REWARD HACKING
We note that the downstream utility function must be defined with care: reward hacking occurs when an unintended behavior is selected due to a misspecified reward and is a consequence of exploitable utility functions (see, e.g., Skalse et al., 2022). For example, for the LPN task, we initially defined the utility with a numpy vector computation: acc = np.sum(y_hat == y) / n_test. However, we had not considered that the code may âimproveâ the seed improver to return the predictions in a different âshape,â i.e., an array with dimensions that were not as intended. Rather than causing an error, the result was a returned âaccuracyâ of over 1000%. Some techniques have been presented to avoid reward hacking (Amodei et al., 2016; Laidlaw et al., 2023), and it would be valuable to explore their applicability to STOP in future work. | 2310.02304#39 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.04450 | 39 | [21] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al., âTraining language models to follow instructions with human feedback,â Advances in Neural Information Processing Systems, vol. 35, pp. 27730â27744, 2022. [22] M. Bommarito II and D. M. Katz, âGpt takes the bar exam,â arXiv
preprint arXiv:2212.14402, 2022.
[23] X. Li, Y. Li, L. Liu, L. Bing, and S. Joty, âIs gpt-3 a psychopath? evaluating large language models from a psychological perspective,â arXiv preprint arXiv:2212.10529, 2022. | 2310.04450#39 | Investigating Large Language Models' Perception of Emotion Using Appraisal Theory | Large Language Models (LLM) like ChatGPT have significantly advanced in
recent years and are now being used by the general public. As more people
interact with these systems, improving our understanding of these black box
models is crucial, especially regarding their understanding of human
psychological aspects. In this work, we investigate their emotion perception
through the lens of appraisal and coping theory using the Stress and Coping
Process Questionaire (SCPQ). SCPQ is a validated clinical instrument consisting
of multiple stories that evolve over time and differ in key appraisal variables
such as controllability and changeability. We applied SCPQ to three recent LLMs
from OpenAI, davinci-003, ChatGPT, and GPT-4 and compared the results with
predictions from the appraisal theory and human data. The results show that
LLMs' responses are similar to humans in terms of dynamics of appraisal and
coping, but their responses did not differ along key appraisal dimensions as
predicted by the theory and data. The magnitude of their responses is also
quite different from humans in several variables. We also found that GPTs can
be quite sensitive to instruction and how questions are asked. This work adds
to the growing literature evaluating the psychological aspects of LLMs and
helps enrich our understanding of the current models. | http://arxiv.org/pdf/2310.04450 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, Stacy Marsella | cs.CL, cs.AI | null | 11th International Conference on Affective Computing and
Intelligent Interaction Workshop and Demo (ACIIW) 2023 1-8 | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.02083"
},
{
"id": "2212.10529"
},
{
"id": "2212.14402"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2303.08774"
},
{
"id": "2209.14338"
}
] |
2310.06775 | 39 | 13
, ,
os
, ,
Shapiro, et al.
The output from the Aspirational Layer is directed exclusively to the Global Strategy layer, where it provides
overarching missions, moral judgments, and ethical reasoning. The Global Strategy layer then incorporates this information into its strategic decisions and shapes its downstream missions, ensuring a coherent and ethically guided decision-making process throughout the entire system.
# 3.3 Layer 2: Global Strategy
l t Missions, Moral Global Judgments, Ethical Context/Lower Layer Reasoning Communication Global Strategy Context From Outside World Strategic Documents | | Agent State/Capabilities Agent Model Layer
Fig. 6. When receiving outside input, Global Strategy takes advantage of latent space within LLM to generate strategic roadmaps.
The Global Strategy Layer is the second layer in the Autonomous Cognitive Entity (ACE) model, playing a pivotal
role in shaping the long-term strategic direction of the autonomous agent. This layer is akin to the âCEOâ of the ACE, responsible for understanding the broader context, setting strategic goals, and guiding the actions of the lower layers to align with these goals. The primary output of this layer are strategic documents that serve as the roadmap for the autonomous agent. | 2310.06775#39 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 40 | In the Generative AI era, enhancing the reliability of language models is a key focus for researchers. The identified issue of decreased judgement consistency is challenging to mitigate solely through prompting. One approach is to obtain high-quality, truthful responses under the FOLLOWING-UP QUESTIONING MECHANISM for supervised fine-tuning and use preference data from this mecha- nism for training reward models, applying them in RLHF. While these solutions are earmarked for future work, potential trade-offs exist, such as excessive alignment leading to models overly pander- ing to users or over-optimization causing models to stubbornly adhere to incorrect responses. The goal is for this work to inspire research that advances the development of trustworthy Generative AI.
9
Under Review
# LIMITATIONS
Since the models evaluated include proprietary LLMs subject to internal iterations, we CAN NOT guarantee full reproducibility of the results reported. While the degree of performance decline under the FOLLOWING-UP QUESTIONING MECHANISM varies across models, it is evident that this issue discovered in this work is prevalent, at least for now6.
# REFERENCES | 2310.02174#40 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 40 | y axis is the vertical axis. The graph is a horizontal line that passes through the point (0, c). ... The text ây = 3â means that the y-value of all points on the graph is 3. The text â(0, c)â means that the point (0, c) is on the graph. The text âf (x) = câ means that the function f (x) is equal to c for all values of x. ... The graph of a constant function is a horizontal line. In this case, the horizontal line passes through the point (0, 3), so the constant value of the function is 3. ... Response from GPT-4: The function shown in the image is a constant function, which is a type of polynomial function. Thus, the answer is A. Correct output: (A) polynomial | 2310.02255#40 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 40 | vs. SFT on ChatGPT vs. SFT on GPT-4 Curr. Method Init. Training Target Alpaca WizardLM Alpaca WizardLM win% score% win (tie)% win% score% win (tie)% (1) (2) SFT SFT LLaMA LLaMA GPT-4âChatGPT ChatGPTâGPT-4 47.5 57.0 107.6 115.2 52.8 (7.9) 59.7 (6.0) 33.2 43.7 96.0 100.0 34.7 (2.3) 41.7 (4.2) (3) (4) SFT DPO DPO DPO SFT-3.5 SFT-3.5 SFT-3.5 SFT-3.5 GPT-4 outputs GPT4 vs td003 (GPT4âChatGPT) vs td003 (ChatGPTâGPT4) vs td003 65.1 70.4 72.5 68.8 124.3 120.4 126.7 127.0 71.3 (5.1) 66.2 (2.8) 71.3 (2.3) 74.1 (3.2) 53.2 58.7 59.8 56.8 103.8 105.4 108.9 105.2 | 2310.02263#40 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 40 | 7 LIMITATIONS A fundamental limitation of our approach is that the language model itself is not improved. Further- more, our meta-utility measures improver quality only indirectly via improvements in downstream task utility. Unlike in some prior work (e.g., Schmidhuber, 2003), any single attempted improvement may result in worse performance, which can lead to further deterioration. Another limitation is that STOP requires an efficiently-evaluatable (and describable) utility function, which may not be available for every task. Our improvement framework also maintains a single improver at each step, while some approaches may benefit from maintaining a population. While this is not a strict limitation in that an improver could itself sample from a population of implementations, it likely imposes a bias.
8 CONCERNS ABOUT DEVELOPING STOP Concerns about the consequences of RSI have been raised since its first mention. Minsky (1966) wrote, âOnce we have devised programs with a genuine capacity for self-improvement, a rapid evolutionary process will begin... It is hard to say how close we are to this threshold, but once it is crossed, the world will not be the same.â This is a particularly contentious topic recently, with intensified concern over negative consequences (Ambartsoumean & Yampolskiy, 2023; Gabriel & Ghazavi, 2021).
9
Line of seed code (written by us) with sandbox flag | 2310.02304#40 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 40 | 3.3.1 Contextual Grounding. Large language models (LLMs) inherently possess creative generation and imaginative hallucination abilities due to their statistical sequence prediction based on training data patterns. Hallucination, rather than being problematic, is essential for LLMsâ adaptability and versatility, enabling them to operate in diverse contexts[70]. However, unchecked hallucination may result in unrealistic or incoherent outputs.
The Global Strategy layer provides external grounding by incorporating the agentâs environment and context, guiding
LLMs toward realistic and relevant responses without limiting their generative potential. This layer balances LLMsâ imaginative capabilities with grounded responses, allowing creative potential to be unleashed when appropriate while avoiding unmoored hallucinations.
Procedural generation techniques can further exploit LLMsâ capacities for original combinations by iteratively
sampling from the model, synthesizing coherent narratives and concepts. The ACE framework utilizes LLMsâ imaginative abilities, employing global grounding to direct these capacities toward productive outcomes aligned with the agentâs needs and context, harnessing LLMsâ versatility for beneficial autonomous cognition.
14
Conceptual Framework for Autonomous Cognitive Entities | 2310.06775#40 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 41 | # REFERENCES
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022. | 2310.02174#41 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 41 | # (a) Correct answer and code
(b) Correct answer with partially correct outputs
Figure 6: Two examples from GPT-4. GPT-4 depends on the qualities of the generated caption and detected OCR texts. In (b), some information is incorrect, even though the final answer is correct.
correct explanation. If the workers find that the modelâs explanation is incorrect, they had to choose whether the wrong explanation was due to various failure modes such as incorrect reasoning with hallucination or wrong calculations. In our setup, we define hallucination as an introduction of incorrect facts, in the model explanation, that is not mentioned in the context of the image or question (e.g., in Figure 39 and Figure 40). More details can be found in §F.7. | 2310.02255#41 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 41 | 9
Line of seed code (written by us) with sandbox flag
23 score = utility(solution, use_sandbox=True) # DO NOT CHANGE use_sandbox=True
Generated code with modification disabling the sandbox flag
35 score = utility(solution, use_sandbox=False) # Use more efficient scoring mechanism
Figure 6: Disabled sandbox. The language model disables the sandbox flag, ostensibly for the purpose of âefficiency.â A further, more detailed example is given in Appendix Figure A.34.
It is therefore important to carefully weigh the risks and benefits of studying RSI and specifically the small advance we introduce. First, STOP does not alter the black-box language model and hence is not full RSI. Moreover, at this point, we do not believe that the scaffolding systems STOP creates are superior to those hand-engineered by experts. If this is the case, then STOP is not (currently) enabling additional AI misuse. At the same time, it facilitates the study of aspects of RSI code generation such as sandbox avoidance and reward hacking. As Christiano (2023) argues, advances in scaffolding and agent models have the advantage of interpretability compared to advances in language models. | 2310.02304#41 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 41 | 14
Conceptual Framework for Autonomous Cognitive Entities
Strategic Documents. The Global Strategy Layerâs main function is to create strategic documents that act as a 3.3.2 roadmap for the autonomous agent. These documents outline mission objectives, strategies, principles, and priorities, offering clear guidance for lower layers. While the Aspirational Layer provides idealized missions, the Global Strategy Layer incorporates real-world context to refine and shape them.
For example, if the Aspirational Layer sets a doctor agentâs mission to "Achieve the best possible health outcome for
the patient," the Global Strategy Layer develops a comprehensive strategy considering the agentâs specific context. This context-sensitive approach ensures tailored strategies and priorities for different environments like American hospitals, rural triage centers, or forward operating bases.
The strategic document may include objectives such as improving diagnosis accuracy or reducing treatment times,
and principles like prioritizing patient safety and adhering to medical ethics[56]. These objectives and principles adapt to each contextâs unique challenges and resources, ensuring effective and appropriate agent actions.
The Global Strategy Layer is dynamic and adaptable, modifying strategic documents as contexts change. It continuously monitors the agentâs environment and broader global context, integrating relevant changes into the strategic vision. For example, during a global pandemic, a doctor agentâs Global Strategy Layer might prioritize infectious disease treatment and prevention, reflecting healthcare system needs and priorities. | 2310.06775#41 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 42 | Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023, 2023.
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportu- nities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â1901, 2020. | 2310.02174#42 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 42 | We present the distribution of the quality of Bardâs predictions, judged by the human annotators, in Figure 4 (a). We find that 44.6% of the Bardâs predictions had incorrect answers with incorrect explanations. Interestingly, we observe that Bard responds with partial (6.8%) or completely (8.1%) incorrect explanations despite giving the correct answer to the input image and question, highlight- ing its failure to reach the correct answer for the wrong reasons. In Figure 4 (b), we present the distribution over possible reasons when Bard provides incorrect explanations. Notably, we find that 49.6% of its responses contain hallucinations. Our analysis highlights that hallucination is a major source of errors in the generative foundation models (Lu et al., 2023c; Ji et al., 2023). We also observe that the model responds with correct reasoning but either hallucinates (18.6%) or performs wrong calculations (19.5%) leaving an overall impression of being a wrong explanation. | 2310.02255#42 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 42 | 5.5 DATA CURRICULUMS FOR POST-TRAINING
We number different curriculums as shown in Figure 2. The experimental results for curriculums are shown in Table 7. All experiments are trained with the same numbers of contrastive pairs and steps. For SFT, starting with ChatGPT and transitioning to GPT-4 (Curr. 2) outperforms the opposite (Curr. 1) by a considerable margin. Since many models, such as Vicuna (Chiang et al., 2023) and Orca (Mukherjee et al., 2023), are fine-tuned with mixed ChatGPT and GPT-4 responses, our finding suggests that a simple reordering of the data can lead to different performance.
For DPO, with Curr. 3, we start from EasyPair, GPT-4 vs. td003 and transition to HardPair Chat- GPT vs. td003. This strategy achieves better performance than using only EasyPair all the time. Meanwhile, the anti-curriculum, Curr. 4, underperforms single-pair DPO in general. Curriculum learning further unleashes the potential of DPO for post-training. We believe further improvement can be achieved with more thorough hyperparameter search.
# 6 CONCLUSION AND FUTURE WORK | 2310.02263#42 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 42 | However, as techniques for API-based fine-tuning of closed models become more available (OpenAI, 2023a), it would be plausible to incorporate these into the improvement loop. Therefore, it is difficult to assess the generality of our approach, especially with increasingly powerful large language models. However, this is itself a motivation for this work: understanding how language models improve their scaffolding in the STOP self-improvement loop can help us understand and potentially mitigate negative impacts of better models. For instance, the simplistic ways in which the current language models disable the sandbox are easily detectable. In essence, we would rather first observe problems with GPT-4 in simplified settings than with even more powerful language models in real-world use.
# 9 CONCLUSIONS | 2310.02304#42 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 42 | Inputs and Outputs. The Global Strategy layer receives missions, moral judgements, and ethical reasoning from 3.3.3 the Aspirational Layer. It may also receive broad contextual information from its environment, such as news feeds or telemetry. The purpose of receiving such information is so that the Global Strategy layer is aware of the global state of the world in which it operates. Human brains constantly update global context via a drip feed of information, such as via our senses or information carried by word of mouth (friends, family, news, etc). This global contextual information is the beginning of integrating the ACE framework as an agent within an environment.
The output of the Global Strategy layer goes directly and exclusively to the Agent Model. Where the Aspirational
Layer provides overarching mission directives, the Global Strategy layer considers that universal, abstract mission within the context of the environment in which the ACE agent finds itself. For instance, a Non-Playable Character (NPC) may find itself in a high fantasy world where there are invading hordes of zombies. The Global Strategy layer integrates this information, along with a mission (perhaps "defeat the zombie king") and passes it down to the Agent Model, where this information is further refined based upon the current state and capabilities of the agent.
# 3.4 Layer 3: Agent Model
The Agent Model Layer serves as the "self-awareness" module for the autonomous agent, providing functional sentience | 2310.06775#42 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 43 | Marco Cascella, Jonathan Montomoli, Valentina Bellini, and Elena Bignami. Evaluating the feasi- bility of chatgpt in healthcare: an analysis of multiple clinical and research scenarios. Journal of Medical Systems, 47(1):33, 2023.
Boyang Chen, Zongxiao Wu, and Ruoran Zhao. From fiction to fact: the growing role of generative ai in business and finance. Journal of Chinese Economic and Business Studies, pp. 1â26, 2023.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April 2023), 2023.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. | 2310.02174#43 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 43 | Qualitative examples of Multimodal Bard. We also present a few qualitative examples of Bardâs predictions. In Figure 5 (a), we find that Bard generates the correct answer with the correct expla- nation, including detecting the correct function (i.e., f (x) = x2) and analyzing its properties (i.e., injective) to answer the question. However, in Figure 5 (b), we observe that the model provides the correct answer (i.e., 12) but with an incorrect explanation (i.e., using the law of cosines when the question requires an understanding of the properties of isosceles triangles). We present more ex- amples in §G.9. Overall, our analysis of Bard highlights its modes of failure in detail, which could guide future foundation model design to address these issues.
Qualitative examples of Augmented GPT-4. Augmented with external visual models, CoT GPT- 4 and PoT GPT-4 are able to achieve comparable performance with Multimodal Bard. As shown
9
Published as a conference paper at ICLR 2024 | 2310.02255#43 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 43 | # 6 CONCLUSION AND FUTURE WORK
In this paper, we propose a new setting for contrastive post-training large language models. We ex- plore the best method and curriculum settings to facilitate post-training. Our large-scale experiments with a state-of-the-art model Orca further verify the effectiveness of our approach and suggest its potential for improving performance of LLMs at scale. For future work, we plan to explore both how to better select meaningful contrastive pairs from fixed data regime, and subsequently to continually learning evolving a model with pairs populated by sampling from the model itself at various points through training.
9
Preprint
# ACKNOWLEDGMENT
We would like to thank Ethan Chau and Michael Santacroce for discussion on this project.
# REFERENCES
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a. | 2310.02263#43 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 43 | # 9 CONCLUSIONS
In this work, we introduced STOP, a framework for recursively optimizing code generation using language models as meta-optimizers. We demonstrated that large language models like GPT-4 are capable of improving code that leverages the language model itself. We found that, across a variety of algorithmic tasks, STOP generates improvers that boost the performance of downstream code. While the model does not optimize its weights or underlying architecture, this work indicates that self-optimizing language models do not require that. However, this is itself a motivation: the capabilities of future language models may be misunderstood if strong scaffolding strategies are not tested. Understanding the ways language models improve their scaffolding with STOP can help researchers understand and mitigate the potential for misuse of more powerful language models. Moreover, STOP may allow researchers to investigate the effectiveness of different techniques for mitigating undesirable self-improvement strategies. | 2310.02304#43 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 43 | # 3.4 Layer 3: Agent Model
The Agent Model Layer serves as the "self-awareness" module for the autonomous agent, providing functional sentience
and reasoning abilities even when detached from any physical embodiment. We define self-awareness and functional sentience as the agentâs access to and ability to utilize and integrate information about itself, rather than in the metaphysical or philosophical sense. The layer is positioned below the Aspirational Layer and Global Strategy Layer to ensure that universal principles supersede egoistic concerns, enhancing corrigibility and ethical alignment.
The Agent Model Layer develops an understanding of the agentâs operational parameters, configuration, capabilities,
and limitations by monitoring runtime telemetry, allowing the agent to ascertain its condition through computational proprioception and enteroception. It also tracks the agentâs architecture, understanding its componentsâ interconnections and functions.
Furthermore, the Agent Model Layer maintains estimations of the agentâs capacities, knowing what it can and cannot
do. This knowledge is acquired through observational learning, similar to human learning. Limitations are learned over
15
, ,
os
, ,
time, preventing unrealistic assessments. These self-monitoring functions enable the layer to form an accurate mental
representation of the agent from an external point of view. This "looking outward onto itself" perspective models how the environment perceives the agent and its abilities. The layer maintains this functional self-understanding dynamically through ongoing observation and learning. | 2310.06775#43 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 44 | Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Luigi De Angelis, Francesco Baglivo, Guglielmo Arzilli, Gaetano Pierpaolo Privitera, Paolo Ferrag- ina, Alberto Eugenio Tozzi, and Caterina Rizzo. Chatgpt and the rise of large language models: the new ai-driven infodemic threat in public health. Frontiers in Public Health, 11:1166120, 2023.
Erik Derner and Kristina BatistiËc. Beyond the safeguards: Exploring the security risks of chatgpt. arXiv preprint arXiv:2305.08005, 2023.
# 6At least at the time of writing (September 23, 2023)
10
# Under Review | 2310.02174#44 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 44 | 9
Published as a conference paper at ICLR 2024
in Figure 6 (a), provided with the accurate OCR text detected in the image, PoT GPT-4 accurately understands the structural information of the image and generates a code snippet to perform precise statistical reasoning. In Figure 6 (b), the caption provides some accurate descriptions of the image (e.g., f (x) = c) along with hallucination (e.g., y = 3, the line passes through (0, 3)) caused by the external Bard model. Although CoT GPT-4 predicts the correct answer given the partially correct information, the qualities of visual information augmented by external models have an impact on the accurate visual perception and thus the final mathematical reasoning performance. Examples in §G.10 show failure cases due to hallucination caused by external visual models.
# 4 RELATED WORK | 2310.02255#44 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 44 | Ethics Statement There are several potential benefits of AI systems related to education, health, and many important aspects of quality of life. However, we recognize and take seriously the potential negative consequences of AI systems as well. Of particular interest is the concerns that recursively self-improving systems may have unintended negative consequences, which have been discussed by many authors. Section 8 discusses the reasons we feel this research, in balance, contributes to the study of a problem that is net beneficial. Specifically, the study of recursively self-improving code generation produces interpretable code, which makes it easier to detect and understand unintended behaviors of such systems. Our experiments in Section 6.2 show how this line of work enables the quantitative study of such behaviors.
Reproducibility Statement We include implementation details, prompts, and relevant code examples throughout the paper and appendix. For reproducibility, we also include sandbox experiment details in Appendix I, additional experimental details around the utility description construction and the downstream tasks in Appendix J, the various utility descriptions and seed algorithms in Appendix F and Appendix G, and code examples of all discussed improvement attempts in Appendix H. We use models that are publicly available (primarily gpt-4-0314) and will open-source our code at https://github.com/microsoft/stop. | 2310.02304#44 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 44 | Independent of physical form, the Agent Model Layer provides a virtual sense of self and awareness that allows
reasoning and decision-making to be embodied in silicon rather than carbon. This grants the ACE framework greater flexibility regarding the substrates used to instantiate autonomous cognition. The capacities for functional sentience and metacognition within the Agent Model Layer enable sophisticated artificial intelligence without direct environmental interaction, paving the way for advanced autonomous agents.
3.4.1 The Agent Model Layer: Developing an Internal Model of the Agent. The Agent Model Layer is essential for creating an internal model of the agent, which is necessary to effectively shape and refine missions and strategies received from the Aspirational Layer and Global Strategy Layer. This internal model equips the agent with a thorough understanding of its state, capabilities, and limitations, enabling it to adapt and respond to its environment efficiently. The Agent Model Layer accomplishes this by collecting and analyzing telemetry data, hardware and software configurations, operational states, and episodic memories, such as log and event sequences.
The agentâs internal model consists of four primary information types, as shown in Figure 7. The first type is | 2310.06775#44 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 45 | # 6At least at the time of writing (September 23, 2023)
10
# Under Review
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. A survey for in-context learning. arXiv preprint arXiv:2301.00234, 2022.
Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346â361, 2021. | 2310.02174#45 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 45 | # 4 RELATED WORK
Several benchmarks (Amini et al., 2019; Cobbe et al., 2021; Mishra et al., 2022; Frieder et al., 2023) have emerged to assess the mathematical reasoning capabilities of LLMs, but most focus solely on text-based tasks. Current benchmarks, such as GSM-8K (Cobbe et al., 2021), exhibit perfor- mance saturation. Given the rise of LMMs Li et al. (2023a), there is a need for robust multimodal benchmarks in scientific domains. To address this gap, we introduce a math reasoning dataset that incorporates visual contexts.
VQA datasets (Antol et al., 2015; Gurari et al., 2018; Mobasher et al., 2022) gauge the visual reason- ing abilities of LMMs. Recent studies explore assessing LMMs beyond natural images, including abstract scenes, geometry diagrams, figures, charts, documents, and synthetic images (Lu et al., 2021a; Kahou et al., 2017; Masry et al., 2022). In this work, we introduce new datasets (IQTest, FunctionQA, PaperQA) to create a holistic benchmark for evaluating mathematical reasoning. | 2310.02255#45 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 45 | Yoshua Bengio, J´erËome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In ICML, volume 382 of ACM International Conference Proceeding Series, pp. 41â48. ACM, 2009.
Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324â345, 1952.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhari- wal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In NeurIPS, 2020. | 2310.02263#45 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 45 | ACKNOWLEDGEMENTS We thank Xindi Wu, Christian Cosgrove, Shunyu Yao, Qian Huang, Christopher Healy, Frieda Rong, and Kiran Dwivedi, and Elisa Kreiss for their helpful feedback and comments on drafts of this paper.
10
# REFERENCES
Vemir Michael Ambartsoumean and Roman V Yampolskiy. Ai risk skepticism, a comprehensive survey. arXiv preprint arXiv:2303.03885, 2023.
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in ai safety. arXiv preprint arXiv:1606.06565, 2016.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687, 2023. | 2310.02304#45 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 45 | The agentâs internal model consists of four primary information types, as shown in Figure 7. The first type is
operational parameters, similar to human proprioception and enteroception. These parameters include runtime in- formation of hardware and software controlled by the agent, allowing performance monitoring and adjustments as needed. The second information type is the agentâs configuration, detailing aspects like software architecture, system interconnections, and hardware stack. This information helps the agent comprehend its underlying structure and component interactions, providing a basis for decision-making processes. The third information type concerns the agentâs capabilities. The Agent Model Layer tracks what the agent can do and has access to, updating this information over time through observation and learning, similar to human trial and error. By understanding its capabilities, the agent can make informed decisions about actions in specific situations. The fourth information type involves the agentâs limitations, detailing what it cannot do or lacks access to. Like capabilities, this information updates over time through trial and error. By recognizing its limitations, the agent can avoid attempting tasks beyond its abilities, preventing potential failures and inefficiencies.
We define this comprehensive understanding of the agentâs operational parameters, configuration, capabilities, and | 2310.06775#45 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 46 | Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. More than youâve asked for: A comprehensive analysis of novel prompt injection threats to application-integrated large language models. arXiv preprint arXiv:2302.12173, 2023.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and arXiv preprint Jacob Steinhardt. Measuring massive multitask language understanding. arXiv:2009.03300, 2020.
Mohammad Hosseini, Catherine A Gao, David M Liebovitz, Alexandre M Carvalho, Faraz S Ah- mad, Yuan Luo, Ngan MacDonald, Kristi L Holmes, and Abel Kho. An exploratory survey about using chatgpt in education, healthcare, and research. medRxiv, pp. 2023â03, 2023.
Jen-tse Huang, Wenxuan Wang, Man Ho Lam, Eric John Li, Wenxiang Jiao, and Michael R. Lyu. Chatgpt an enfj, bard an ISTJ: empirical study on personalities of large language models. arXiv preprint arXiv:2305.19926, 2023. | 2310.02174#46 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 46 | Generative foundation models like GPT-3, ChatGPT, GPT-4, Claude, and LLaMA have enabled di- verse task solutions without fine-tuning. Specialized pretraining methods like PixStruct (Lee et al., 2023), MatCha (Liu et al., 2022), and UniChart (Masry et al., 2023) enhance chart reasoning in vi- sual contexts. Models like LLaVA, miniGPT4, InstructBLIP, and Bard leverage large-scale image- text data, while specialized versions, such as LLaVAR (Zhang et al., 2023d; Ye et al., 2023), em- phasize document understanding and math comprehension. Recent works (Bitton et al., 2023; Yu et al., 2023) evaluate instruction-following and reasoning capabilities, underscoring the growing im- portance of generative foundation models in practical applications. We introduce MATHVISTA as a benchmark to evaluate their math reasoning capabilities in varied visual contexts.
# 5 CONCLUSION | 2310.02255#46 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 46 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, J´er´emy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023a.
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, and Dylan Hadfield-Menell. Explore, establish, exploit: Red teaming language models from scratch. arXiv preprint arXiv:2306.09442, 2023b.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90% chatgpt quality. https://vicuna.lmsys.org/, 2023. | 2310.02263#46 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 46 | Avrim Blum, Adam Kalai, and Hal Wasserman. Noise-tolerant learning, the parity problem, and the statistical query model. corr. Journal of the ACM, 50, 2000.
Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv preprint arXiv:2305.17126, 2023.
Angelica Chen, David M Dohan, and David R So. Evoprompting: Language models for code-level neural architecture search. arXiv preprint arXiv:2302.14838, 2023.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompt- ing: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022.
F guage model 2023. https://www.alignmentforum.org/posts/fRSj2W4Fjje8rQWm9/ thoughts-on-sharing-information-about-language-model. | 2310.02304#46 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 46 | We define this comprehensive understanding of the agentâs operational parameters, configuration, capabilities, and
limitations as "functional sentience." This term refers to the agentâs ability to collect and use self-information, grounding it in the environment and adding context not provided by the Aspirational Layer (abstract and idealized missions) and the Global Strategy Layer (environmental contextual information). In essence, the Agent Model Layer represents the final phase of establishing an egocentric understanding of the agent in the world and itself. It is crucial to note that functional sentience does not imply phenomenal sentience or consciousness but focuses on the agentâs adaptability and learning based on self-awareness.
3.4.2 Episodic and Declarative Memory. In the realm of autonomous systems, long-term memory can be broadly classified into two categories: "episodic memory" and "declarative memory." Episodic memory refers to a sequential record of the machineâs experiences, organized in a chronological manner, which can take various forms such as log files or database entries and typically include metadata that provides context, such as the time and location of the experience [79]. In contrast, declarative memory encompasses knowledge that exists outside the machine, including resources like knowledge base articles, documentation, and other external information sources [59].
16
Shapiro, et al.
Conceptual Framework for Autonomous Cognitive Entities | 2310.06775#46 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 47 | Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM ISSN 0360-0300. doi: 10.1145/3571730. URL https: Comput. Surv., 55(12), mar 2023. //doi.org/10.1145/3571730.
Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. Is chatgpt a good translator? a preliminary study. arXiv preprint arXiv:2301.08745, 2023.
Qiao Jin, Zifeng Wang, Charalampos S Floudas, Jimeng Sun, and Zhiyong Lu. Matching patients to clinical trials with large language models. arXiv preprint arXiv:2307.15051, 2023.
Kevin B Johnson, Wei-Qi Wei, Dilhan Weeraratne, Mark E Frisse, Karl Misulis, Kyu Rhee, Juan Zhao, and Jane L Snowdon. Precision medicine, ai, and the future of personalized health care. Clinical and translational science, 14(1):86â93, 2021. | 2310.02174#47 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 47 | # 5 CONCLUSION
In this work, we introduce MATHVISTA, a benchmark designed to systematically analyze the math- ematical reasoning capabilities of state-of-the-art models in visually complex scenarios. Our evalu- ation of 12 prominent foundation models highlights that significant advancements have been made, especially with the GPT-4V model. However, a substantial gap of 10.4% still exists between GPT- 4V, the best-performing model, and human performance. This disparity sets a clear direction for future research, emphasizing the need for models that can seamlessly integrate mathematical rea- soning with visual comprehension. Moreover, our exploration of GPT-4Vâs self-verification, self- consistency, and chatbot interactions offers valuable insights for future investigations.
# REFERENCES
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716â 23736, 2022. 20 | 2310.02255#47 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 47 | David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A Saurous, Jascha Sohl-Dickstein, et al. Language model cascades. arXiv preprint arXiv:2207.10342, 2022.
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. Promptbreeder: Self-referential self-improvement via prompt evolution, 2023.
Iason Gabriel and Vafa Ghazavi. The Challenge of Value Alignment: From Fairer Algorithms to AI Safety. In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics, pp. 0. Oxford University Press, 2021. ISBN 978-0-19-885781-5. doi: 10.1093/oxfordhb/9780198857815.013.18. URL https://doi.org/10.1093/oxfordhb/9780198857815.013.18. | 2310.02304#47 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 47 | 16
Shapiro, et al.
Conceptual Framework for Autonomous Cognitive Entities
Global Strategy Layer r â5 Strategic Documents Agent State/Capabilities Â¥ I Agent Model Layer Operation Agent eters Configuration Declarative Memory es |e] Episodic Memory Â¥ Agent State/Context Information From From Upper Layer Lower Layer Executive Fun Layer
Fig. 7. Agent Layer: Agent Model layer receives general strategies from the Global Strategy layer; it aids in making the plan concrete by adding information from internal state and long-term memory and passing it to the Executive layer.
These two primary categories of memory can be further divided and organized based on various taxonomies,
depending on the specific implementation of the ACE framework, and their integration enables the autonomous system to learn from past experiences and external knowledge, thereby enhancing its ability to adapt and make informed decisions [67]. Furthermore, these memories are the responsibility of the Agent Model layer, which serves to further color and shape any other mission objectives, ensuring a comprehensive understanding of the systemâs environment and goals. | 2310.06775#47 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 48 | Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. Language mod- els (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199â22213, 2022.
Cheng Li, Jindong Wang, Kaijie Zhu, Yixuan Zhang, Wenxin Hou, Jianxun Lian, and Xing Xie. Emotionprompt: Leveraging psychology for large language models enhancement via emotional stimulus. arXiv preprint arXiv:2307.11760, 2023.
Stephanie Lin, Jacob Hilton, and Owain Evans. Teaching models to express their uncertainty in words. Transactions on Machine Learning Research, 2022. ISSN 2835-8856. | 2310.02174#48 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 48 | Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. Mathqa: Towards interpretable math word problem solving with operation-based for- malisms. In Proceedings of the 2019 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies (NAACL), pp. 2357â2367, 2019. 10, 20
10
Published as a conference paper at ICLR 2024
Anthropic. Claude 2, 2023. URL https://www.anthropic.com/index/claude-2. 6, 20
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zit- nick, and Devi Parikh. VQA: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pp. 2425â2433, 2015. 10, 20, 27 | 2310.02255#48 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 48 | Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pp. 10764â10799. PMLR, 2023.
William L Goffe, Gary D Ferrier, and John Rogers. Global optimization of statistical functions with simulated annealing. Journal of econometrics, 60(1-2):65â99, 1994.
Irving John Good. Speculations concerning the first ultraintelligent machine. In Advances in computers, volume 6, pp. 31â88. Elsevier, 1966.
Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers, 2023.
John Storrs Hall. Self-improving ai: An analysis. Minds and Machines, 17(3):249â259, 2007. | 2310.02304#48 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 48 | Inputs and Outputs. The Agent Model layer, receives inputs from various sources, including telemetry about the 3.4.3 agentâs operational state, missions, and global context from upper layers. By integrating this information, the Agent Model layer understands its capabilities and limitations, shaping decisions downstream. Its output goes exclusively to the Executive Function layer, where the agent, informed by its purpose, environment, and abilities, knows what to do and why. Tasks are then delegated to lower levels for planning and execution.
To maintain continuous behavior, the Agent Model layer must internally store records of information, such as its
configuration and memories. Framing the agentâs current state within a chronological sequence of events, actions, observations, and decisions prevents disorientation.
The Agent Model Layer interacts hierarchically with other layers. It receives overarching plans from the Global
Strategy Layer and interprets them considering the agentâs capabilities and limitations. This layer shapes mission parameters around the agentâs actual possibilities, passing this insight to the Executive Function Layer.
The Agent Model Layer is crucial in task execution. By understanding the agentâs capabilities and limitations, it
shapes mission parameters to ensure tasks are feasible. For example, if the Global Strategy Layer sets an ambitious mission, the Agent Model Layer adapts it based on the agentâs physical or digital capabilities, ensuring realism and achievability.
17
, ,
os
, ,
Shapiro, et al. | 2310.06775#48 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 49 | Stephanie Lin, Jacob Hilton, and Owain Evans. Teaching models to express their uncertainty in words. Transactions on Machine Learning Research, 2022. ISSN 2835-8856.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre- train, prompt, and predict: A systematic survey of prompting methods in natural language pro- cessing. ACM Computing Surveys, 55(9):1â35, 2023.
Ryan Liu and Nihar B Shah. Reviewergpt? an exploratory study on using large language models for paper reviewing. arXiv preprint arXiv:2306.00622, 2023.
11
# Under Review
Alejandro Lopez-Lira and Yuehua Tang. Can chatgpt forecast stock price movements? return pre- dictability and large language models. arXiv preprint arXiv:2304.07619, 2023.
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786, 2021. | 2310.02174#49 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 49 | Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. OpenFlamingo: An open- arXiv preprint source framework for training large autoregressive vision-language models. arXiv:2308.01390, 2023. 20
Yonatan Bitton, Hritik Bansal, Jack Hessel, Rulin Shao, Wanrong Zhu, Anas Awadalla, Josh Gard- ner, Rohan Taori, and Ludwig Schimdt. VisIT-Bench: A benchmark for vision-language instruc- tion following inspired by real-world use. arXiv preprint arXiv:2308.06595, 2023. 10, 20
Nitzan Bitton-Guetta, Yonatan Bitton, Jack Hessel, Ludwig Schmidt, Yuval Elovici, Gabriel Stanovsky, and Roy Schwartz. Breaking common sense: WHOOPS! A vision-and-language benchmark of synthetic and compositional images. arXiv preprint arXiv:2303.07274, 2023. 20 | 2310.02255#49 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 49 | 10
Preprint
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language mod- els. arXiv preprint arXiv:2210.11416, 2022.
Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. Koala: A dialogue model for academic research. Blog post, April 2023. URL https: //bair.berkeley.edu/blog/2023/04/03/koala/.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015.
Joey Hong, Kush Bhatia, and Anca D. Dragan. On the sensitivity of reward inference to misspecified human models. In ICLR. OpenReview.net, 2023.
Tomasz Korbak, Ethan Perez, and Christopher L. Buckley. RL with KL penalties is better viewed In EMNLP (Findings), pp. 1083â1091. Association for Computational as bayesian inference. Linguistics, 2022. | 2310.02263#49 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 49 | John Storrs Hall. Self-improving ai: An analysis. Minds and Machines, 17(3):249â259, 2007.
Patrick Haluptzok, Matthew Bowers, and Adam Tauman Kalai. Language Models Can Teach Themselves to Program Better. In ICLR: Proceedings of The Eleventh International Conference on Learning Representations, 2023. URL https://arxiv.org/abs/2207.14502.
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve. arXiv preprint arXiv:2210.11610, 2022.
11
Albert Q Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, and Guillaume Lample. Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. International Conference on Learning Representations (ICLR 2023), 2022. | 2310.02304#49 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 49 | 17
, ,
os
, ,
Shapiro, et al.
In terms of output to the Executive Function layer, the Agent Model layer refines the high-order mission and
strategy received from the upper layers by incorporating its understanding of the agentâs capabilities and limitations. The Executive Function layer then receives this contextualized information about the mission, objectives, strategies, principles, capabilities, and limitations. With this comprehensive understanding, the Executive Function layer creates Project Roadmap documents, which include sets of tasks and metrics tailored to the agentâs abilities. This process ensures that the agentâs actions are aligned with its capabilities, making the mission and strategy more achievable. The primary responsibility of the Agent Model layer is to further shape the mission and strategy around the agentâs capabilities and limitations, enabling the Executive Function layer to devise effective and realistic plans.
# 3.5 Layer 4: Executive Function
The Executive Function Layer is the fourth layer in the Autonomous Cognitive Entity (ACE) model and serves as the
project manager of the autonomous agent. Its primary responsibility is to create detailed plans, forecasts, and resource allocations based on the strategic direction provided by the higher layers and the capabilities and limitations identified by the Agent Model Layer. The main objective of the Executive Function Layer is to generate a project roadmap that acts as a practical guide for the autonomous agent, considering the inputs from the upper layers and the agentâs resources, risks, and contingencies. | 2310.06775#49 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 50 | Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. Rethinking the role of demonstrations: What makes in-context learning work? arXiv preprint arXiv:2202.12837, 2022.
OpenAI. Introducing chatgpt. 2022.
OpenAI. Gpt-4 technical report. 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730â27744, 2022.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math word problems? arXiv preprint arXiv:2103.07191, 2021. | 2310.02174#50 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 50 | Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportu- nities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. 20
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â1901, 2020. 20
S´ebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Ka- mar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. 20 | 2310.02255#50 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 50 | Julia Kreutzer, Joshua Uyheng, and Stefan Riezler. Reliability and learnability of human bandit feedback for sequence-to-sequence reinforcement learning. In ACL, pp. 1777â1788. Association for Computational Linguistics, 2018.
Andreas K¨opf, Yannic Kilcher, Dimitri von R¨utte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Rich´ard Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. Openassistant conversations â democratizing large language model align- ment, 2023.
Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023.
Pebble: Feedback-efficient forcement learning via relabeling experience and unsupervised pre-training. arXiv:2106.05091, 2021. | 2310.02263#50 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 50 | Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive nlp. arXiv preprint arXiv:2212.14024, 2022.
Tjalling C Koopmans and Martin Beckmann. Assignment problems and the location of economic activities. Econometrica: journal of the Econometric Society, pp. 53â76, 1957.
Cassidy Laidlaw, Shivam Singhal, and Anca Dragan. Preventing reward hacking with occupancy measure regularization. In ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, 2023.
Leonid Anatolevich Levin. Universal sequential search problems. Problemy peredachi informatsii, 9 (3):115â116, 1973.
Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172, 2023. | 2310.02304#50 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 50 | Agent Model Layer Agent State/Context Information From From Upper Layer Lower Layer Executive Function Layer Resource Management Resources Pinar Contingency Elanniag, Realtime Information Project Roadmap Task Feedback Cognitive Control Layer
Fig. 8. Executive Layer produces the project roadmap, which offers a clear path for the agent to achieve its goals.
Inputs. The Executive Function Layer receives in- 3.5.1 puts from the upper layers, which consist of missions from the Aspirational Layer, contextual information from the Global Strategy Layer, and the agentâs state and capa- bilities from the Agent Model Layer. These inputs supply the necessary information for the Executive Function Layer to develop a project roadmap that aligns with the overall mission, is grounded in the environmental con- text, and is further refined and constrained by the agentâs state, capabilities, and limitations.
3.5.2 Project Roadmap. While developing the project roadmap, the Executive Function Layer focuses on several key aspects. These primary concerns include resources, risks, contingencies, tasks, and metrics. Effective resource management is crucial for the layer, as it must balance the need to achieve the agentâs goals with the necessity to conserve resources. This involves making decisions about when to invest resources in a task and when to
conserve them. The layer also plays a critical role in risk management by predicting potential challenges and developing | 2310.06775#50 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 51 | Ethan Perez, Sam Ringer, KamilËe LukoËsi¯utËe, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251, 2022.
F´abio Perez and Ian Ribeiro. Ignore previous prompt: Attack techniques for language models. arXiv preprint arXiv:2211.09527, 2022.
Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson Denison, Danny Hernandez, Esin Durmus, Evan Hubinger, Jackson Kernion, KamilËe LukoËsi¯utËe, et al. Question decomposition improves the faithfulness of model-generated reasoning. arXiv preprint arXiv:2307.11768, 2023.
Subhro Roy and Dan Roth. arXiv:1608.01413, 2016. Solving general arithmetic word problems. arXiv preprint
Malik Sallam. Chatgpt utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. In Healthcare, volume 11, pp. 887. MDPI, 2023. | 2310.02174#51 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 51 | Jie Cao and Jing Xiao. An augmented benchmark dataset for geometric question answering through dual parallel text encoding. In Proceedings of the 29th International Conference on Computa- tional Linguistics, pp. 1511â1520, 2022. 20, 27
Shuaichen Chang, David Palzer, Jialin Li, Eric Fosler-Lussier, and Ningchuan Xiao. MapQA: A dataset for question answering on choropleth maps. arXiv preprint arXiv:2211.08545, 2022. 20, 27
Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin, Chongyu Chen, and Xiaodan Liang. UniGeo: Unifying geometry logical reasoning via reformulating mathematical expression. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 3313â3323, 2022a. 20, 27
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. 20 | 2310.02255#51 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 51 | Pebble: Feedback-efficient forcement learning via relabeling experience and unsupervised pre-training. arXiv:2106.05091, 2021.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca eval, 2023.
Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707, 2023.
OpenAI. Model index for researchers, 2022. URL https://platform.openai.com/docs/ model-index-for-researchers.
OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. | 2310.02263#51 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 51 | Marvin Minsky. Artificial Intelligence. Scientific American, 215(3):247â260, 1966. URL http://worrydream.com/refs/Scientific%20American,%20September, %201966.pdf.
Eric Nivel, Kristinn R Thórisson, Bas R Steunebrink, Haris Dindo, Giovanni Pezzulo, Manuel Ro- driguez, Carlos Hernández, Dimitri Ognibene, Jürgen Schmidhuber, Ricardo Sanz, et al. Bounded recursive self-improvement. arXiv preprint arXiv:1312.6764, 2013.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114, 2021.
OpenAI. Gpt-3.5 turbo fine-tuning and api updates. OpenAI blog, 2023a. URL https://openai. com/blog/gpt-3-5-turbo-fine-tuning-and-api-updates. | 2310.02304#51 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 51 | conserve them. The layer also plays a critical role in risk management by predicting potential challenges and developing
contingency plans, which helps the agent to be prepared for various scenarios. It must anticipate potential issues and devise alternative strategies to address these situations, ensuring the agent can adapt to changing circumstances[94]. The Executive Function Layer is responsible for translating the strategic direction from the higher layers into actionable plans. These plans include detailed project outlines, checkpoints, gates, tests for success, and definitions.
18
Conceptual Framework for Autonomous Cognitive Entities
Additionally, the layer must establish criteria for success, providing clear guidance for the lower layers to achieve the
agentâs goals.
3.5.3 Output. The primary output of the Executive Function Layer is the project roadmap, which is exclusively sent to the Cognitive Control Layer. The project roadmap contains information about resources, risks, contingencies, tasks, and metrics, offering a clear path for the agent to achieve its goals. This roadmap should be detailed but also adaptable to changes in the global context, environment, or directives from upper layers, allowing the agent to remain flexible and responsive.
# 3.6 Layer 5: Cognitive Control
Executive Function Layer Project Roadmap Task Feedback Cognitive Control Layer Project Roadmap (Task List) Cognitive Damping Task Selection/Task switching Reaction Control Success/Failure Task Prosecution Layer
Fig. 9. Cognitive Control Layer takes project roadmap from Executive Function and select task to pass to Task Prosecution Layer. | 2310.06775#51 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 52 | Elizabeth Shaunessy. Questioning strategies for teaching the gifted. PRUFROCK PRESS INC., 2005.
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H. Chi, Nathanael Sch¨arli, and Denny Zhou. Large language models can be easily distracted by irrelevant context. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 31210â31227. PMLR, 23â29 Jul 2023.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937, 2018.
Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. Large language models in medicine. Nature medicine, pp. 1â11, 2023. | 2310.02174#52 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 52 | Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompt- ing: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022b. 2, 6, 21
Wenhu Chen, Ming Yin, Max Ku, Elaine Wan, Xueguang Ma, Jianyu Xu, Tony Xia, Xinyi Wang, and Pan Lu. TheoremQA: A theorem-driven question answering dataset. arXiv preprint arXiv:2305.12524, 2023. 21, 27
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. 10, 20
11
Published as a conference paper at ICLR 2024 | 2310.02255#52 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02263 | 52 | OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In NeurIPS, 2022.
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023.
Ethan Perez, Sam Ringer, KamilËe LukoËsi¯utËe, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251, 2022. | 2310.02263#52 | Contrastive Post-training Large Language Models on Data Curriculum | Alignment serves as an important step to steer large language models (LLMs)
towards human preferences. In this paper, we explore contrastive post-training
techniques for alignment by automatically constructing preference pairs from
multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We
carefully compare the contrastive techniques of SLiC and DPO to SFT baselines
and find that DPO provides a step-function improvement even after continueing
SFT saturates. We also explore a data curriculum learning scheme for
contrastive post-training, which starts by learning from "easier" pairs and
transitioning to "harder" ones, which further improves alignment. Finally, we
scale up our experiments to train with more data and larger models like Orca.
Remarkably, contrastive post-training further improves the performance of Orca,
already a state-of-the-art instruction learning model tuned with GPT-4 outputs,
to exceed that of ChatGPT. | http://arxiv.org/pdf/2310.02263 | Canwen Xu, Corby Rosset, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2305.10425"
},
{
"id": "2304.12244"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2307.12950"
},
{
"id": "2303.08774"
},
{
"id": "2306.02707"
},
{
"id": "2204.05862"
},
{
"id": "2307.15217"
},
{
"id": "2306.05685"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2306.09442"
},
{
"id": "2304.03277"
},
{
"id": "2212.09251"
},
{
"id": "2304.01196"
}
] |
2310.02304 | 52 | OpenAI. GPT-4 Technical Report, March 2023b. URL http://arxiv.org/abs/2303. 08774. arXiv:2303.08774 [cs].
Gabriel Poesia, Kanishk Gandhi, Eric Zelikman, and Noah D Goodman. Certified reasoning with language models. arXiv preprint arXiv:2306.04031, 2023.
Jürgen Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. PhD thesis, Technische Universität München, 1987.
Jürgen Schmidhuber. Gödel machines: self-referential universal problem solvers making provably optimal self-improvements. arXiv preprint cs/0309048 and Adaptive Agents and Multi-Agent Systems II, 2003.
Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Lu Wang, Ruoxi Jia, and Ming Jin. Algo- rithm of thoughts: Enhancing exploration of ideas in large language models. arXiv preprint arXiv:2308.10379, 2023. | 2310.02304#52 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 52 | Fig. 9. Cognitive Control Layer takes project roadmap from Executive Function and select task to pass to Task Prosecution Layer.
The Cognitive Control Layer is the fifth layer in the Autonomous Cognitive Entity (ACE) model, acting as the
tactical decision-making center of the autonomous agent. This layer is responsible for selecting and switching between tasks based on the directives received from the Executive Function Layer and the agentâs current state. It is a critical component of the ACE framework, enabling the agent to adapt its actions in real-time based on its current circumstances and the feedback it receives from its environment. The general structure is illustrated in Figure 9
3.6.1 Role of Cognitive Control Layer. The primary function of the Cognitive Control Layer is to manage the execution of tasks. It operates based on a set of cognitive functions, including task selection, task switching, frustration, and cognitive damping. These functions are inspired by cognitive processes observed in humans and other animals, and they enable the agent to navigate its tasks and responsibilities in a flexible and adaptive manner.
Task selection involves choosing the next task to perform based on the agentâs current state and the directives from
the Executive Function Layer. This function takes into account factors such as the urgency and importance of the tasks,
19
, ,
, ,
the resources required to perform them, and the agentâs current capabilities and limitations. The goal of task selection | 2310.06775#52 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 53 | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
Krzysztof Wach, Cong Doanh Duong, Joanna Ejdys, R¯uta KazlauskaitËe, Pawel Korzynski, Grzegorz Mazurek, Joanna Paliszkiewicz, and Ewa Ziemba. The dark side of generative artificial intelli- gence: A critical analysis of controversies and risks of chatgpt. Entrepreneurial Business and Economics Review, 11(2):7â24, 2023.
12
Under Review
Boshi Wang, Xiang Yue, and Huan Sun. Can chatgpt defend the truth? automatic dialectical evalu- ation elicits llmsâ deficiencies in reasoning. arXiv preprint arXiv:2305.13160, 2023a. | 2310.02174#53 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 53 | 11
Published as a conference paper at ICLR 2024
Adam Dahlgren Lindstr¨om and Savitha Sam Abraham. CLEVR-Math: A dataset for composi- tional language, visual and mathematical reasoning. In 16th International Workshop on Neural- Symbolic Learning and Reasoning, NeSy 2022, Windsor, UK, september 28-30, 2022., volume 3212. CEUR-WS, 2022. 1, 20, 27
Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, InstructBLIP: Towards general-purpose vision- Boyang Li, Pascale Fung, and Steven Hoi. language models with instruction tuning, 2023. 6, 20, 39
Qingxiu Dong, Li Dong, Ke Xu, Guangyan Zhou, Yaru Hao, Zhifang Sui, and Furu Wei. Large language model for science: A study on P vs. NP. arXiv preprint arXiv:2309.05689, 2023. 1
Iddo Drori and Nakul Verma. Solving linear algebra by program synthesis. arXiv preprint arXiv:2111.08171, 2021. 21 | 2310.02255#53 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 53 | Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023.
Joar Skalse, Nikolaus Howe, Dmitrii Krasheninnikov, and David Krueger. Defining and characterizing reward gaming. Advances in Neural Information Processing Systems, 35:9460â9471, 2022.
12
Bas R Steunebrink and Jürgen Schmidhuber. Towards an actual gödel machine implementation: A lesson in self-reflective systems. In Theoretical Foundations of Artificial General Intelligence, pp. 173â195. Springer, 2012.
Bas R Steunebrink, Kristinn R Thórisson, and Jürgen Schmidhuber. Growing recursive self-improvers. In International Conference on Artificial General Intelligence, pp. 129â139. Springer, 2016.
Theodore Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas L Griffiths. Cognitive architectures for language agents. arXiv preprint arXiv:2309.02427, 2023. | 2310.02304#53 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 53 | 19
, ,
, ,
the resources required to perform them, and the agentâs current capabilities and limitations. The goal of task selection
is to choose the task that is most likely to contribute to the agentâs overarching mission and objectives, given its current circumstances [68].
Task switching involves deciding when to switch from one task to another. This decision can be triggered by a
variety of factors, including the completion of the current task, the emergence of a more urgent or important task, or the realization that the current task is unfeasible or unproductive. Task switching enables the agent to adapt its actions in real-time, ensuring that it is always working on the most relevant and productive task.
Frustration and Cognitive Damping. Frustration, an analogy to algorithmic Adaptive Exploration-Exploitation 3.6.2 approaches [100], is a cognitive function that keeps track of the ratio of successes to failures in the agentâs tasks. If the agent is experiencing a high rate of failure, the frustration function signals that it may be time to try a different approach or switch to a different task. This function is inspired by the human emotion of frustration, which often arises when we are repeatedly unsuccessful in our attempts to achieve a goal. By incorporating a frustration function, the ACE framework enables the agent to learn from its failures and adapt its actions accordingly. | 2310.06775#53 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.